CheriGrove t1_j5r5sok wrote
Reply to comment by GayHitIer in This subreddit has seen the largest increase of users in the last 2 months, gaining nearly 30k people since the end of November by _dekappatated
Is there a solid metric for "when" singularity "happens"? I don't entirely understand the concept, I came into this sub thinking it was about black holes.
SoylentRox t1_j5slrzy wrote
The singularity is a prediction of exponential growth once AI is approximately as smart as a human being.
So you might hear in the news that TSMC has cancelled all chip orders except for AI customers, and that no new devices anywhere are being made with advanced silicon in them.
You might see in the news that the city of Shenzhen has twice as much land covered with factories as it did last month.
Then another month and it's doubled again.
And so on. Or, if the USA has the tech to itself, you'd see similar exponential growth there.
At some point you would probably see whoever has the tech suddenly launching tens of thousands of rockets, and at night you would see lights on the Moon, with the lit-up surface area doubling every few weeks.
This is the metric: anything that is clear and sustained exponential growth driven by AI systems at least as smart as humans.
Smart meaning they score as well as humans on a large set of objective tests.
There are a lot of details we don't know - would the factories on the Moon even be visible at night, or do the robots see in IR? - but that's the signature of it.
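A quick sketch of why sustained doubling is such an unmistakable signature compared to ordinary fast growth (all starting values and rates here are made-up example numbers, not predictions):

```python
# Illustrative arithmetic only: ordinary fast growth vs. monthly doubling.
start_km2 = 100.0     # hypothetical starting industrial footprint
conventional = 1.05   # 5% per month, already very fast by historical standards
doubling = 2.0        # the proposed singularity signature
months = 12

# After a year, conventional growth is not even 2x; doubling is 4096x.
print(f"conventional, 12 months: {start_km2 * conventional ** months:,.0f} km^2")
print(f"doubling,     12 months: {start_km2 * doubling ** months:,.0f} km^2")
```

After twelve months the conventional economy has grown about 80%, while the doubling one has grown over 4000-fold, which is why this kind of growth would be obvious even from orbit.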
CheriGrove t1_j5sm5jn wrote
"As smart as" is difficult to measure and judge. I think by 1980s standards, we might already be at something like a singularity as they might have judged it.
SoylentRox t1_j5smhx9 wrote
Yes, but, intelligence isn't just depth, it's breadth.
In this case, to make possible exponential growth, AI has to be able to do most of the steps required to build more AI (and useful things for humans to get money).
Right now that means AI needs to be capable of controlling many robots, doing many separate tasks that need to be done (to ultimately manufacture more chips and power generators and so on).
So while ChatGPT seems to be really close to passing a Turing test, the robotics papers are still at this stage: https://www.deepmind.com/blog/building-interactive-agents-in-video-game-worlds
And AI is not yet able to actually control this: https://www.youtube.com/watch?v=XPVC4IyRTG8 (that Boston Dynamics machine is largely hard-coded; it is not being driven by current-gen AI).
I think we're close. And for the last steps, people can use ChatGPT/Codex to help them write the code, there's a lot more money to invest in this, and they can use AI to design chips for even better compute: lots of ways to make the last steps take less time than expected.
CheriGrove t1_j5sn4z8 wrote
It's fascinating, existential, hopeful, and worrisome to the nth degree. Here's hoping it's a post-scarcity utopia rather than something Orwell could never have fathomed.