hervalfreire t1_ja79j62 wrote
Reply to comment by boersc in So what should we do? by googoobah
Machine learning as it’s practiced today (“mass training”?) didn’t exist 40 years ago. Cases like the tank one you described used a completely different technique that didn’t rely on RNNs or the like. Beyond hardware capabilities, there have been a number of breakthroughs in the past 2-3 decades, from LSTMs to diffusion models and LLMs. It’s 100% not even close to what we did back in the 90s…
hervalfreire t1_ja793k1 wrote
Reply to comment by Enzo-chan in So what should we do? by googoobah
It always sounds more credible as things progress. We’re still VERY far from a singularity or AGI; the best computers can do today is language models (something we’ve known how to build for decades), just faster/larger ones.
Yes, we’re about to see a big impact on professions that mostly rely on “creativity” and memorization, but I wouldn’t worry about a “singularity” happening any time soon.
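For context on what “decades-old language models” means, here’s a minimal sketch of a classic bigram language model, the kind of statistical next-word predictor in use since the 1980s. The corpus and names are illustrative only, not from any specific system:

```python
# Minimal bigram language model sketch (a decades-old statistical
# technique). Corpus and variable names are illustrative only.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start="the", length=8):
    """Sample a sequence by repeatedly picking the next word
    in proportion to its observed bigram count."""
    word, out = start, [start]
    for _ in range(length - 1):
        followers = bigrams.get(word)
        if not followers:
            break
        word = random.choices(list(followers), weights=followers.values())[0]
        out.append(word)
    return " ".join(out)

print(generate())  # e.g. "the cat sat on the rug . the"
```

Today’s LLMs replace that bigram table with a transformer over billions of parameters, but the underlying task (predicting the next token) is the same, which is the “faster/larger” point above.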
hervalfreire t1_jee4963 wrote
Reply to comment by Unfocusedbrain in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
“An AGI might require only 10-1000 accelerators” what
We don’t even have any idea what an AGI would look like, let alone how many GPUs it’d require (or whether it’d be possible to run an AGI on GPUs at all).