
genshiryoku t1_j6a85jx wrote

Because Moore's Law largely stopped around ~2005, when Dennard scaling stopped being a thing. Meaning clock speeds have hovered around the 4-5 GHz mark for the last ~20 years.

We have started coping by engaging in parallelism through multi-core systems, but due to Amdahl's Law there are diminishing returns to adding more cores to your system.
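To make those diminishing returns concrete, here is a minimal Python sketch of Amdahl's Law; the 90% parallel fraction is an assumed, illustrative number, not anything measured:

```python
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n)
# p = parallelizable fraction of the workload, n = number of cores.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Assume a workload that is 90% parallelizable (illustrative only).
for cores in (1, 2, 4, 8, 16, 64, 1024):
    print(f"{cores:>5} cores -> {amdahl_speedup(0.9, cores):.2f}x speedup")
# The speedup flattens out near 10x no matter how many cores you add.
```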

On the "Instructions Per Cycle" front we're only making slow linear progression similar to other non-IT industries so there's not a lot of gain to be had from this either.

The reason why 2003-2013 feels like a bigger step is because it was a bigger step than 2013-2023. At least from a hardware perspective.

The big innovation we've made, however, is using massively parallel GPU cores to accelerate machine learning on the extremely large data sets that big social media sites have accumulated, which has resulted in the current AI boom.

But yeah, you are correct in your assessment that computer technology has largely stagnated since about ~2005.

12

hopelesslysarcastic t1_j6ag8v6 wrote

So what is your opinion on the next 10-15 years given your comment? Just genuinely curious as I haven’t heard this argument before and it’s fascinating

5

genshiryoku t1_j6ahc38 wrote

I think the next 5 years will be a period of explosive AI progress, but sudden and rapid stagnation will follow, and an AI winter after that.

The reason I think this is because we're rapidly running out of training data as bigger and bigger models essentially get trained on all the available data on the internet. After that data is used up there will be nothing new for bigger models to train on.

Since hardware is already stagnating and data will be running out the only way to make progress would be to make breakthroughs on the AI architectural front, which is going to be linear in nature again.

I'm a computer scientist by trade, and while I work with AI systems on a daily basis and keep up with AI papers, I'm not an AI expert, so I could be wrong on this front.

13

visarga t1_j6arwxp wrote

Generating data through RL, like AlphaGo or "Evolution through Large Models" (ELM), seems to show a way out. Not all data is equally useful to the model; for example, problem- and task-solving data is more important than raw organic text.

Basically, use an LLM to generate and another system to evaluate, in order to filter out the useful data examples.
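A minimal sketch of that generate-then-filter loop; `generate_candidates`, `evaluate`, and the threshold are hypothetical placeholders, not any specific library or paper's method:

```python
# Hypothetical generate-then-filter loop: one model proposes solutions,
# a separate evaluator scores them, and only high-scoring examples are kept
# as new training data.
from typing import Callable, List, Tuple

def build_dataset(tasks: List[str],
                  generate_candidates: Callable[[str], List[str]],
                  evaluate: Callable[[str, str], float],
                  threshold: float = 0.8) -> List[Tuple[str, str]]:
    kept = []
    for task in tasks:
        for candidate in generate_candidates(task):   # e.g. samples from an LLM
            score = evaluate(task, candidate)          # e.g. unit tests or a reward model
            if score >= threshold:                     # keep only the useful examples
                kept.append((task, candidate))
    return kept
```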

5

DarkCeldori t1_j6bc0b8 wrote

The brain can learn even with very little data. A baby that grows up in a mostly empty room and hears its parents' voices still becomes fully competent within a few years.

If AI begins to use brain-like algorithms, then given that it already does the equivalent of millions of years of training, data will not be a problem.

4

PreferenceIll5328 t1_j6d98c1 wrote

The brain is also pre-trained through billions of years of evolution. It isn't a completely blank slate.

4

DarkCeldori t1_j6db3zz wrote

IIRC only ~25 MB of design data for the brain lies in the genome, which is insufficient to specify ~100 trillion connections. Most of the brain, particularly the neocortex, appears to be a blank slate. Outside of prewiring, such as the overall connectivity between areas, it appears the learning algorithms are the special sauce.
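A quick back-of-the-envelope check of that information budget, using the figures quoted above (the 1-bit-per-connection assumption is deliberately generous):

```python
# How many bytes would it take to specify 100 trillion connections individually?
connections = 100e12
bits_per_connection = 1                       # absurdly optimistic lower bound
bytes_needed = connections * bits_per_connection / 8
print(f"~{bytes_needed / 1e12:.1f} TB even at 1 bit per connection")  # ~12.5 TB

genome_budget_mb = 25                         # figure quoted in the comment above
print(f"vs ~{genome_budget_mb} MB of genomic 'design data'")
# The genome can at best encode wiring rules and learning algorithms,
# not the individual connections themselves.
```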

There are plenty of animals with just as much baked in, and they show very limited intelligence.

1

GoSouthYoungMan t1_j6c4zym wrote

But the brain appears to have massively more effective compute than even the largest AI systems. The Chinchilla scaling laws suggest we need much larger systems.
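For scale, a rough reading of the Chinchilla result (about 20 training tokens per parameter for compute-optimal training); the model sizes below are illustrative, not proposals:

```python
# Chinchilla rule of thumb (Hoffmann et al., 2022): compute-optimal training
# uses roughly 20 tokens per model parameter.
TOKENS_PER_PARAM = 20

for params_billion in (70, 500, 10_000):       # illustrative model sizes
    tokens_trillion = params_billion * 1e9 * TOKENS_PER_PARAM / 1e12
    print(f"{params_billion:>6}B params -> ~{tokens_trillion:.1f}T training tokens")
# The 70B row roughly matches Chinchilla itself (~1.4T tokens); the larger
# rows quickly outgrow the high-quality text available on the public internet.
```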

1

DarkCeldori t1_j6cxa7w wrote

I don't think the brain's prowess lies in more effective compute but rather in its more efficient algorithms.

IIRC mimicking the brain's sparsity allowed ANNs to get 10x to 100x more performance. And that is just one aspect of brain algorithms. https://youtu.be/XoP3dnvj4P0
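As a rough illustration of what activation sparsity means, here is a generic k-winners-take-all sketch in NumPy; this is a common sparsification trick, not necessarily the specific method in the linked video:

```python
import numpy as np

def k_winners_take_all(activations: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k largest activations per row; zero out the rest."""
    out = np.zeros_like(activations)
    top_idx = np.argsort(activations, axis=1)[:, -k:]            # indices of the k winners
    winners = np.take_along_axis(activations, top_idx, axis=1)
    np.put_along_axis(out, top_idx, winners, axis=1)
    return out

layer_out = np.random.randn(4, 512)                 # a batch of dense activations
sparse_out = k_winners_take_all(layer_out, k=26)    # ~5% of units stay active
# Downstream matrix multiplies can then skip the ~95% of zeroed units.
```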

3

BehindThyCamel t1_j6aj21s wrote

Do you think that 5-year period of progress will include training models on audiovisual material (movies, documentaries, etc.), or are we too far technologically from the capacity required for that, or is that not even a direction to pursue?

2

DarkCeldori t1_j6bge0g wrote

Moore's Law is about the miniaturization of transistors and the doubling of transistor count. It is true that we also used to get significant clock speed increases that we no longer do. But Moore's Law didn't stop; it only slowed down, from a doubling every 18 months to every 2.5 years or something like that. This happened last decade as a result of constant delays in the development of extreme ultraviolet lithography equipment, but that is now solved and it is back to every 18 months, IIRC.
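To put that cadence difference in numbers, a toy calculation (not vendor data) over a single decade:

```python
# Toy comparison of transistor-count growth under different doubling periods.
def growth_over(years: float, doubling_period_years: float) -> float:
    return 2 ** (years / doubling_period_years)

decade = 10
print(f"18-month doubling over {decade} years: ~{growth_over(decade, 1.5):.0f}x transistors")
print(f"30-month doubling over {decade} years: ~{growth_over(decade, 2.5):.0f}x transistors")
# Roughly 100x versus 16x: slowing down matters a lot, but it is not the
# same thing as stopping.
```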

But thanks to Moore's Law and Koomey's Law continuing, we have seen constant increases in energy efficiency and computational power.

We are indeed still facing some significant issues. Some parts, such as SRAM, which is vital for cache sizes, have IIRC stopped scaling. Also, it seems the reduction in cost per transistor has slowed or perhaps even ended recently. Microsoft estimated they wouldn't get a cost reduction from moving to newer, smaller transistors and thus chose to do two versions of the Xbox, a cheap one and an expensive one, from the start.

If cost reduction is not solved we could be in serious trouble, as clearly a doubling of transistor count requires at least a halving of cost per transistor to be economically viable.

1