Submitted by BreadManToast t3_ytl8m2 in singularity
Rezeno56 t1_iw5beg6 wrote
I wonder how AI in 2022 compares to AI in 2017 and 2012?
visarga t1_iw6cj8l wrote
That's easy.
Neural nets before 2012 were small, weak, and hard to train. Then in 2012 we got a sudden ~10% jump in image-classification accuracy (AlexNet on ImageNet). Within the next two years virtually all ML researchers switched to neural nets, and all the papers were about them. This period lasted about 5 years in total and scaled models from the size of an "ant" to that of a "human". Almost all the fundamentals of neural nets were worked out during this time.
Then in 2017 we got the transformer, which led to unprecedented scaling jumps, from the size of a "human" to that of a "city". By 2020 we had GPT-3, and today, just five years after the transformer, we have multiple generalist models.
On a separate arc, reinforcement learning, we got the first breakthroughs in 2013 with DeepMind's Deep Q-Learning on Atari games, and by 2015 we had AlphaGo. Learning from self-play has proven to be amazingly effective. There is cross-pollination between large language models and RL: robots with GPT-3 strapped on top can do amazing things, and GPT-3 trained with self-play in the style of AlphaGo can improve its own problem-solving. It can already solve competition-level problems in math and code.
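Self-play in miniature: a hedged sketch (not DeepMind's actual setup) of an agent improving by playing both sides of a toy Nim game with tabular Monte Carlo updates. The pile size, move limit, and hyperparameters are all illustrative assumptions.

```python
import random

PILE, MAX_TAKE = 10, 3  # toy Nim: take 1-3 stones; taking the last stone wins
Q = {}  # (stones_left, take) -> estimated return for the player moving

def choose(stones, eps):
    """Epsilon-greedy move selection from the shared Q-table."""
    moves = list(range(1, min(MAX_TAKE, stones) + 1))
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))

def play_episode(eps=0.2, lr=0.1):
    """One self-play game; both players update the same Q-table afterward."""
    stones, history, player = PILE, [[], []], 0
    while stones > 0:
        m = choose(stones, eps)
        history[player].append((stones, m))
        stones -= m
        if stones == 0:
            winner = player  # took the last stone
        player = 1 - player
    for p in (0, 1):
        ret = 1.0 if p == winner else -1.0
        for key in history[p]:  # Monte Carlo update toward the final outcome
            Q[key] = Q.get(key, 0.0) + lr * (ret - Q.get(key, 0.0))

random.seed(0)
for _ in range(20000):
    play_episode()

# Inspect the learned greedy move from the full pile.
print(choose(PILE, eps=0.0))
```

With enough episodes the greedy policy tends toward the known strategy for this variant (leave the opponent a multiple of 4 stones); the point is only that the agent's opponent is itself, so the curriculum hardens automatically as it learns.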
The next obvious step is a massive video model, both for video generation and for learning procedural knowledge - how to do things step by step. YouTube and other platforms are full of video, which is a multi-modal format combining image, audio, voice, and text captions. I expect these models to revolutionise robotics and desktop assistants (RPA), in addition to media generation.
callidoradesigns t1_iw5dph3 wrote
I wonder what AI in 2023 will be …
eddieguy t1_iw83bmm wrote
/imagine prompt: AI in 2023