Comments

Foundation12a OP t1_j24r5bq wrote

Exponential growth is something many do not understand. So many times I have seen people in this subreddit talk about 10 or 20 years from now as if they are at all predictable, when the sum total of all progress in AI will be a small percentage of what we achieve next year.

31

Tip_Odde t1_j24ru9n wrote

I haven't seen anyone

ANYONE

claim it was going to slow down next year lol

103

Tip_Odde t1_j24te5i wrote

Based on the URL I can see it's a single message, not a thread, soooooooooooooooo

no one has claimed this. You should have just said "2023 is gonna be even bigger" or something and presented your view in a positive way instead of trying to negate others.

11

Foundation12a OP t1_j24txwp wrote

That is literally someone claiming that:

"Idk why, but I have a feeling AI will continue to progress, but not at the rate it did this year; the bar was set high in 2022."

All you said was you haven't seen anyone claim it; well, there is someone doing exactly that. Instead of reading the URL, maybe you should have read the comment?

5

Belostoma t1_j24vbe8 wrote

Exponential growth in anything is rarely sustained indefinitely. It comes in bursts.

I expect the tech behind ChatGPT to start to hit a ceiling before too long. Its job is basically to coherently summarize its training data relevant to the prompt, and it's already super impressive on prompts for which adequate training data exist. It will cause disruptive changes in parts of society built around the assumption that a human wrote something. 2023 will probably bring cooler art and more believable writing as things like ChatGPT and Dall-E are refined.

However, there isn't really a smooth path for incremental improvement from this to tasks that require understanding and thought, making logical deductions from extremely sparse training data with an understanding of their credibility and connections, and solving novel problems. I'm not saying AI won't crack that nut eventually, but it's a different and harder problem that will require new breakthroughs rather than incremental improvements.

I expect exponential growth in that area whenever AI gets good enough to really help AI researchers make the next breakthroughs and start a positive feedback loop of recursive self-improvement. But it's not clear that ChatGPT is the start of that cycle. Humans might leverage it to gain some efficiency in their work, but that's more of a linear improvement than exponential.

2

Foundation12a OP t1_j24xxm9 wrote

AI models are not progressing at the rate of smartphones; they are progressing at a much greater pace than that. Yet to read the more conservative opinions on their progress, you'd assume that AI developments arrive on timeframes comparable to, say, home video game console releases. Things that would require decades of progress in other fields occur within weeks or months in the field of AI development.

Imagine showing DALL-E 2's image generation to someone in 2014. Imagine trying to explain what Google Pathways can achieve to someone who had only used Cleverbot before. The gulf is staggering, and the rate of progress in 2022 outright eclipses that of any previous year in the history of the entire field of AI development, and all of that already existed by May.

In the first week of June we had:

https://sites.google.com/view/multi-game-transformers

https://www.nature.com/articles/s41467-022-30761-2

https://techxplore.com/news/2022-06-artificial-skin-robots.html

Then later in that month there would be:

https://www.eurekalert.org/news-releases/955133

https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

https://twitter.com/i/status/1536378529415315458

https://gweb-research-parti.web.app/parti_paper.pdf

https://techxplore.com/news/2022-06-deep-framework-pose-robotic-arms.html

AI generated podcasts: https://lexman.rocks/

AI learned how to play Minecraft: https://www.techradar.com/news/ai-can-now-play-minecraft-just-as-well-as-you-heres-why-that-matters

Then GODEL:

https://www.marktechpost.com/2022/06/25/microsoft-ai-researchers-open-source-godel-a-large-scale-pre-trained-language-model-for-dialog/

A language model capable of solving mathematical questions using step-by-step natural language reasoning; combining scale, data, and other techniques dramatically improves performance on the STEM benchmarks MATH and MMLU-STEM. https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html

https://techxplore.com/news/2022-06-fake-robots-ropes-faster.html

And that was only up to the end of June.

Don't just take these sources for it either; Jack Clark sums it up well:

https://twitter.com/jackclarkSF/status/1542715805657210881?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1542715805657210881%7Ctwgr%5E%7Ctwcon%5Es1_&ref_url=

15

SurroundSwimming3494 t1_j24yisp wrote

>the rate of progress in 2022 eclipses that of any previous year outright in the history of the entire field of AI development

Perhaps, but you said in the previous comment that it eclipsed every single previous year combined, if I'm not mistaken. That's why I gave the answer that I gave.

9

Foundation12a OP t1_j24zcek wrote

There has been exponentially more progress each and every year; it's exponential growth, not linear. That is why what was achieved by June alone, let alone after, puts 2022 in a league of its own when it comes to AI advances.
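For what it's worth, the "one year beats all previous years combined" idea is at least arithmetically coherent under a doubling assumption. A toy Python sketch (made-up numbers illustrating the math, not a measurement of actual AI progress):

```python
# Toy illustration: if "progress" doubles every year, then each year's
# progress by itself exceeds the sum of all previous years combined.
def yearly_progress(year: int, base: float = 1.0, growth: float = 2.0) -> float:
    """Progress made during a single year under constant exponential growth."""
    return base * growth ** year

for y in range(1, 10):
    this_year = yearly_progress(y)
    all_prior = sum(yearly_progress(p) for p in range(y))
    # 2^y is always greater than 2^0 + ... + 2^(y-1) = 2^y - 1
    assert this_year > all_prior
```

Of course, whether real AI progress actually follows a doubling curve is exactly what's being argued about; the sketch only shows what the claim implies if it does.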

5

manOnPavementWaving t1_j25doir wrote

Building on YEARS of ideas. They were cool, but without transformers they wouldn't exist. Without infrastructure code, they wouldn't exist. Without years of hardware improvements, they wouldn't exist. Without the ideas of normalization and skip connections, they wouldn't exist. Etc. (And this isn't even counting all the blind alleys that were chased down only to find they didn't work, which is less visible but definitely contributes to research.)
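For anyone unfamiliar with the two older ideas named here, a minimal NumPy sketch (toy dimensions and a random stand-in sublayer, not any real model) of what normalization and skip connections look like; every transformer block chains exactly these two tricks around its attention and MLP layers:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize each vector to zero mean and unit variance."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def residual_block(x, sublayer):
    """Skip connection: add the sublayer's output back onto its input,
    so the identity path is always available during training."""
    return layer_norm(x + sublayer(x))

# Toy "sublayer": a random linear map standing in for attention or an MLP.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)) * 0.1
out = residual_block(rng.standard_normal((4, 8)), lambda x: x @ W)
print(out.shape)  # (4, 8)
```

Neither idea is specific to 2022 models; both predate them by years, which is the point being made.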

GATO didn't even have that much to show for it; the long hoped-for skill transfer was not really there. DALL-E 2 builds on CLIP and diffusion; ChatGPT builds on GPT-3 and years of RL research.

You're saying something along the lines of "x is better than what came before, so the step to x is bigger than the sum of all the steps before that," and that is the worst take I've ever heard. It's definitely not how research works.

And goddamn it I'm getting deja vu cuz this bad take has been said before on this subreddit.

This rebuttal better? I'd be happy to go and list essential moments in AI in the past decade if it isn't.

7

Utoko t1_j26gb5n wrote

Right now the claim is that the current algorithms will carry us way further.

The limiting factors are indeed computing power and good data. The latest shift is that you need to scale the labeling of good tokens (desired outcomes) along with the data.

There is also a lot of work being done on chips used solely to train AI models, and good data is being created en masse. It is all coming together fast because models are also being used for data cleanup and labeling.

Of course, it is always possible that we hit a wall with the current algorithms soon, but it looks very promising. Nor is the knowledge walled off: the general principles of how OpenAI and Google achieved their level are out there, so we have many companies driving us forward.

2

GeneralZain t1_j26jxb4 wrote

Idk if ChatGPT should be on this? It released in 2022...

4

CypherLH t1_j26kj86 wrote

One could argue that we DID see more progress in 2022 than in the previous 10 years IF you just consider the subjective capabilities/functions added in 2022 that literally didn't exist previously. Nothing remotely close to DALL-E 2 existed prior to 2022, and we now have multiple rapidly improving image generation models alongside DALL-E 2. Same for large language models: GPT "3.5" is just a massive improvement over anything previously publicly available, including previous GPT-3 releases. ChatGPT is just a further evolution of that "GPT-3.5" line of LLMs.

I get that these new subjective capabilities came as a result of iterative improvements on developments ongoing since 2011/2012... but again, if you just look at subjective capabilities, 2022 saw MASSIVE gains.

6

Danger-Dom t1_j26n58e wrote

Who's the Twitter user? Is he someone important?

0

Ne_Nel t1_j27uvl5 wrote

Basically, memory tech. CPUs have improved thousands of times over the last decades, but DRAM only ~30x in the same period, and training needs tons of it. Several ideas are being worked on to get around this, but exponential growth doesn't have a clean path at all for now.

1

Plus-Recording-8370 t1_j287ix8 wrote

Naive would be to think the government can do anything about it. Any obstacle put in AI's way will literally only make it stronger and undermine the petty human lawgivers even further. What we need is to find ways to adjust our societies to it, not the other way around.

6

Plus-Recording-8370 t1_j2886dh wrote

One important thing to note is that AI progress isn't yet bottlenecked by companies trying to regulate the market, like with smartphones and consoles. While technologically it might be possible, Sony can't release a new PlayStation every month, because it wouldn't make any sense to do so.

1

shmoculus t1_j28yfux wrote

I kind of see what you are getting at, and with exponential improvements in methods/research it could at some point be the case that we see more discoveries in one year than in all previous years combined, but I don't think we're there yet.

The progression has been linear in my view:

  1. Efficient image classification (CNNs)

  2. object detection / segmentation / pix2pix / basic img2text models (RCNNs, Unet, GANs)

  3. Deep reinforcement learning (DQN, PPO, MCTS)

  4. Attention networks (transformers and language modelling)

  5. Basic question / answer and reasoning models

  6. Low quality txt2img models (e.g. DALL-E 1)

  7. High quality txt2img models (e.g. DALL-E 2, stable diffusion)

  8. Multimodal models (image understanding, etc.) <- we are here

  9. Already happening: video2video models, text2mesh / point cloud

  10. Expect low, then high quality multimodal generation models, e.g. txt2video + music

  11. Expect improved text understanding, general chat behaviour, i.e. large step-ups in chatbot usefulness, including the ability to take actions (this part is already underway)

  12. Expect some kind of attention-based method for reading and writing to storage (i.e. memory) and possibly online learning / continuous improvement

  13. More incrementally interesting stuff :)

5

CypherLH t1_j2b0ncx wrote

"Linear" but consider how rapidly the last half of your points progressed! It took nearly a decade to go from step 1 to step 6. In then took 18 months to go from step 6 to step 9, and probably less than another 12 months to get to step 11 based on current rates of progress.

1