Smallpaul t1_jegi080 wrote

They wouldn’t do it in-house. They would fund some kind of coalition.

Also: it has been demonstrated that you can use one AI to train another, so you can bootstrap more cheaply than starting from scratch. There is plenty of relevant open source work out there.
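For illustration, here is a minimal sketch of that bootstrapping idea: collect a stronger model's answers and use them as fine-tuning data for a smaller open model. The seed prompts, file name, and the choice of the OpenAI Python client are my own assumptions, not anything specified above.

```python
# Sketch: distill a stronger model's answers into a fine-tuning dataset.
# Assumes the `openai` Python package (>=1.0) and OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical seed instructions; in practice you would use thousands of them.
seed_instructions = [
    "Explain what a B-tree is in two sentences.",
    "Write a Python function that reverses a linked list.",
]

with open("instruct_dataset.jsonl", "w") as f:
    for instruction in seed_instructions:
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": instruction}],
        )
        answer = resp.choices[0].message.content
        # One (instruction, response) pair per line: the usual format for
        # instruction-tuning an open model afterwards.
        f.write(json.dumps({"instruction": instruction, "response": answer}) + "\n")
```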

In any case, a huge part of the problem is just having enough cash to rent GPUs, not necessarily deep technical problems.

Also, as I said above, it doesn’t have to be competitive. It doesn’t have to be a product they sell. It could be a tool they themselves use to run the UK government without sending citizen data to a black box in America.

11

Smallpaul t1_jec8qy8 wrote

Your mental model seems to be that there will be a bunch of roughly equivalent models out there with different values, and they can compete with each other to prevent any one value system from overwhelming.

I think it is much more likely that there will be one single lab where the singularity and escape happen. Having more such labs is like having a virus research lab in every city of every country, and open sourcing the models is like open sourcing the DNA for a super-virus.

3

Smallpaul t1_jea4whk wrote

>You get a zero cost tutor that may or may not be correct about something objective, and as a student you are supposed to trust that?

No. I did not say to trust that.

Also: if you think that real teachers never make mistakes, you're incorrect yourself. My kids have textbooks full of errata. Even Donald Knuth issues corrections for his books (rarely).

>I also pay, well my company does, to access GPT-4 and it's still not that close to being a reliable tutor. I wouldn't tell my juniors to ask ChatGPT about issues they are having instead of asking me or another of the seniors or lead engineer.

Then you are asking them to waste time.

I am "junior" on a particular language and I wasted a bunch of time on a problem because I don't want to bug the more experience person every time I have a problem.

The situation actually happened twice in one day.

The first time, I wasted 30 minutes trying to interpret an extremely obscure error message, then asked my colleague, then kicked myself because I had run into the same problem six months ago.

Then I asked GPT-4, and it gave me six possible causes, which included the one I had seen before. Had I asked GPT-4 first, I would have saved myself 30 minutes and saved my colleague an interruption.

The second time, I asked GPT-4 directly. It gave me five possible causes, and by process of elimination I immediately knew which one it was. That saved me from trying to figure it out on my own before interrupting someone else.

You are teaching your juniors to be helpless instead of teaching them how to use tools appropriately.

> Code working is not equivocal to the code being written correctly or well. If you're the kind of engineer that just think "oh well it works at least, that's good enough" then you're the kind of engineer who will be replaced by AI tooling in the near future.

One of the ways you can use this tool is to ask it how to make the code more reliable, easier to read, etc.

If you use the tool appropriately, it can help with that too.
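As a rough illustration of that kind of use, here is a minimal sketch that asks GPT-4 to review a snippet for reliability and readability. The prompt wording and the example snippet are assumptions for the sake of the sketch, not a prescribed workflow.

```python
# Sketch: ask GPT-4 to review a snippet for reliability and readability.
# Assumes the `openai` Python package (>=1.0); prompt wording is illustrative only.
from openai import OpenAI

client = OpenAI()

snippet = """
def load_config(path):
    return eval(open(path).read())
"""

prompt = (
    "Review this Python snippet. Suggest concrete changes that would make it "
    "more reliable and easier to read, and explain why:\n" + snippet
)

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```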

0

Smallpaul t1_je9e0am wrote

Note: although I have learned many things from ChatGPT, I have not learned a whole language. I haven't run that experiment yet.

ChatGPT is usually good at distilling common wisdom, i.e. professional standards. It has read hundreds of blogs and can summarize "both sides" of any issue which is controversial, or give you best practices when the question is not.

As for whether the information it gives you is factually correct: you will need your own discernment to decide whether the thing you are learning is trivially verifiable ("does the code run?") or more subtle, in which case you might verify it with Google.

In exchange for this vigilance, you get a zero-cost tutor that answers questions immediately, and can take you down a personalized learning path.

It might end up being more trouble than it is worth, but that probably depends on the student's preferred learning style.

I use GPT-4, and there are far fewer hallucinations.

4

Smallpaul t1_jdw0vx9 wrote

It seems to me that if a researcher uses OpenAI to generate an open source Instruct dataset, and a different corporation takes that dataset and uses it commercially, they are both legally in the clear unless they collude. The entity that actually has a legal relationship with OpenAI has a legitimately non-commercial purpose, and the entity doing the commercial work has no relationship with OpenAI at all.

2