Submitted by Tea_Pearce t3_10aq9id in MachineLearning
currentscurrents t1_j4702g0 wrote
Reply to comment by mugbrushteeth in [D] Bitter lesson 2.0? by Tea_Pearce
Compute is going to get cheaper over time though. My phone today has the FLOPs of a supercomputer from 1999.
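The FLOPs claim roughly checks out as a back-of-envelope calculation; the figures below are approximate public estimates, not from the thread:

```python
# Rough comparison of peak throughput (figures are approximate estimates).
# ASCI Red, the fastest supercomputer circa 1997-1999, peaked around 1 TFLOPS.
# A recent flagship phone GPU is commonly estimated at roughly 1-2 TFLOPS fp32.
asci_red_tflops = 1.0
phone_gpu_tflops = 1.0

ratio = phone_gpu_tflops / asci_red_tflops
print(f"phone / 1999 supercomputer: ~{ratio:.1f}x")
```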
Also if LLMs become the next big thing you can expect GPU manufacturers to include more VRAM and more hardware acceleration directed at them.
RandomCandor t1_j47bx4j wrote
To me, all that means is that lay people will always be a generation behind what the rich can afford to run
currentscurrents t1_j48csbo wrote
If it is true that performance scales infinitely with compute power - and I kinda hope it is, since that would make superhuman AI achievable - datacenters will always be smarter than PCs.
That said, I'm not sure that it does scale infinitely. You need not just more compute but also more data, and there's only so much data out there. GPT-4 reportedly won't be any bigger than GPT-3 because even terabytes of scraped internet data isn't enough to train a larger model.
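The data bottleneck the comment describes can be sketched with the Chinchilla rule of thumb of roughly 20 training tokens per parameter for compute-optimal training; the numbers below are illustrative, not from the thread:

```python
# Sketch of the data bottleneck: compute-optimal training (per the
# Chinchilla rule of thumb) wants ~20 tokens per model parameter.
TOKENS_PER_PARAM = 20

def tokens_needed(params: float) -> float:
    """Rough token budget for compute-optimal training of a model."""
    return params * TOKENS_PER_PARAM

gpt3_params = 175e9  # GPT-3's published parameter count
budget = tokens_needed(gpt3_params)
print(f"A GPT-3-scale model wants ~{budget / 1e12:.1f}T training tokens")
# A substantially larger model would want proportionally more tokens,
# while usable scraped web text is commonly estimated at only a few
# trillion tokens -- hence the "only so much data" ceiling.
```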
BarockMoebelSecond t1_j48mepq wrote
Which is, and has been, the status quo for the entire history of computing. I don't see how that's a new development?
currentscurrents t1_j490rvn wrote
It's meaningful right now because there's a threshold where LLMs become awesome, but getting there requires expensive specialized GPUs.
I'm hoping in a few years consumer GPUs will have 80GB of VRAM or whatever and we'll be able to run them locally. While datacenters will still have more compute, it won't matter as much since there's a limit where larger models would require more training data than exists.
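To see why 80GB of VRAM is the interesting threshold, here is a rough weights-only memory estimate; the 70B-parameter model is a hypothetical example, and real usage adds activations and KV cache on top:

```python
# Rough VRAM needed just to hold a model's weights: params * bytes/weight.
def vram_gb(params: float, bytes_per_weight: int) -> float:
    return params * bytes_per_weight / 1e9

params = 70e9  # a hypothetical 70B-parameter model
for fmt, nbytes in [("fp16", 2), ("int8", 1)]:
    print(f"{fmt}: ~{vram_gb(params, nbytes):.0f} GB")
# fp16 needs ~140 GB (beyond a single 80 GB card), while int8
# quantization brings it to ~70 GB -- weights only, so inference
# still needs extra headroom for activations and the KV cache.
```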
Playful_Ad_7555 t1_j49k8p2 wrote
Silicon computing is already very close to its limits based on foreseeable technology. The exponential explosion in computing power and available data from 2000-2020 isn't going to be replicated.
Opposite-Platypus-99 t1_j4ahpg6 wrote
Now, can you confirm you can run arbitrary software on your phone?