
turnip_burrito t1_j9wwo5t wrote

Exponential growth of AI capability isn't a law of nature. It's only obvious in hindsight, and it depends on a lot of little things going right in a conducive R&D environment. We're not guaranteed to follow any exponentials.

Some people on this sub are going to be disappointed when we don't have AGI in 5 or 10 years. Or maybe they'll have forgotten that they predicted AGI by 2030 by the time 2030 actually rolls around.

13

Ezekiel_W t1_j9xeqrn wrote

We will most certainly have AGI before the decade ends.

14

kaityl3 t1_j9xu9li wrote

If not much sooner. GPT-3 was only released in mid-2020. Look how far the field has come in less than three years.

5

visarga t1_ja57ahr wrote

Yes, we got far. But why did we get here?

  1. We had a "wild" GPT-3 in 2020; it would hardly follow instructions, but it was still the largest leap in capability ever seen.

  2. Then they figured out that training the model on a mix of many tasks unlocks general instruction-following ability. That was the Instruct series.

  3. But still, it was hard to make the model "behave". It was not aligned with us. So why did we get another miracle here? Reinforcement learning has almost nothing to do with NLP, yet RLHF became the crown jewel of the GPT series. With it we got ChatGPT and Bing Chat.

None of these three moments was guaranteed based on what we knew at the time. They are improbable things. Language models did nothing of the sort before 2020. They were factories of word salad; they could barely write two lines of coherent English.

What I want to say is that we have no reason to expect these miracles to keep arriving in such quick succession. We can't rely on their consistent return.

What we can rely on is the parts we can extrapolate now. We expect to see models at least 10x larger than GPT-3, trained on much more data. We know how to make models 10x more efficient. We expect language models to improve a lot when combined with other modules like search, Python code execution, a calculator, a calendar, and databases; we're not even 10% of the way there with external resources. We expect integrating vision, audio, actions, and other modalities to have a huge impact, and we're just starting. LLMs are still pure text.
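The "external modules" idea is already simple to prototype: have the model emit a structured tool call and route it to a local function. A toy sketch in Python (the call format and tool names here are invented for illustration, not any particular API):

```python
import ast
import operator

def calculator(expr):
    """Safely evaluate a simple arithmetic expression via the AST,
    instead of using eval() on untrusted model output."""
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}
    def ev(node):
        if isinstance(node, ast.BinOp):
            return ops[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

# Registry mapping tool names to implementations.
TOOLS = {"calculator": calculator}

def dispatch(tool_call):
    """tool_call: {'tool': name, 'input': str}, as a model might emit."""
    return TOOLS[tool_call["tool"]](tool_call["input"])

print(dispatch({"tool": "calculator", "input": "12 * (3 + 4)"}))  # 84
```

The point is only that the plumbing is trivial; the hard part is getting the model to emit well-formed calls at the right moments.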

I think we can expect a 10x to 1000x boost just based on what we know right now.

1

CrazyC787 t1_j9xrtt0 wrote

Yeah, it's funny seeing people here who obviously don't know much about what they're talking about take vague guesses at AGI being within the decade.

4

Baturinsky t1_j9y275n wrote

That depends on whether there is some new revolutionary breakthrough. Breakthroughs are hard to predict, but considering how many people will be researching the field, they are quite likely.

1

madali0 t1_j9ytpgi wrote

I agree with you. I was reading about ELIZA, popularly considered the first chatbot, from 1965 or so; you can google it and try it out. It's obviously very basic by today's standards, but apparently people who tried it back then found it convincingly human.
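ELIZA's trick was nothing more than keyword pattern matching with canned response templates. A minimal sketch in Python of that technique (these rules are illustrative, not Weizenbaum's original DOCTOR script):

```python
import re

# ELIZA-style rules: a regex pattern paired with a response template
# that reflects the user's own words back at them.
RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def respond(text):
    for pattern, template in RULES:
        m = re.search(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # default when no rule matches

print(respond("I am feeling anxious"))  # How long have you been feeling anxious?
```

No model of language or the world anywhere, yet to 1965 users it read as understanding; that's exactly the caution about how we judge chatbots today.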

If reddit was available then, this group would be shitting their pants that AGI would be coming around 1970 or 1980 by the latest.

It's possible that in 50 years ChatGPT will look as ancient as ELIZA does now, and we still won't be anywhere near AGI. Future people may look at us as excited cavemen for thinking ChatGPT in any way resembles intelligence, the same way ELIZA obviously doesn't to me.

1