Submitted by Calm_Bonus_6464 t3_zyuo28 in singularity
Calm_Bonus_6464
Calm_Bonus_6464 t1_j25dzi9 wrote
I think Cotra's prediction will end up being correct: 2040–2050 is the window for AGI, and ASI should also be achieved in that window
Calm_Bonus_6464 t1_j25dnei wrote
Reply to comment by Cryptizard in AI timelines: What do experts in artificial intelligence expect for the future? by kmtrp
ChatGPT wasn't even a thing when this survey was taken.
Calm_Bonus_6464 t1_j234gd5 wrote
Reply to Digitism by ZoomedAndDoomed
No, personally I'll just use AI to strengthen my existing faith through greater intelligence.
Calm_Bonus_6464 t1_j21wh6p wrote
I'd guess by that point we'd have means for intelligence augmentation, making university worthless
Calm_Bonus_6464 t1_j1y8x81 wrote
Reply to comment by calbhollo in Weird question, are there any well-known AI researchers with a leftist view? by [deleted]
Here are a few for Germany. But you can pretty much just translate "leading AI researchers" into major European languages like Dutch, German, and French, search, and you'll get a bunch of results from different countries across Europe. You'll just need to use a translator if you only speak English, but it's of course good to get the European perspective as well.
Calm_Bonus_6464 t1_j1y7brz wrote
Reply to comment by Webemperor in Concerns about the near future and the current gatekeepers of AI by dracount
Agree to disagree I guess.
Calm_Bonus_6464 t1_j1y6wmx wrote
Reply to comment by Webemperor in Concerns about the near future and the current gatekeepers of AI by dracount
Can you give an example of that in Finland or Denmark?
Calm_Bonus_6464 t1_j1y6kdc wrote
In Europe, definitely, since European politics tends to lean further left (on economic issues, that is). There's just a language barrier, since they're often not tweeting in English. In America, I'd say the average AI researcher would be a liberal
Calm_Bonus_6464 t1_j1y5so5 wrote
Reply to comment by Webemperor in Concerns about the near future and the current gatekeepers of AI by dracount
It depends. Countries like France and Portugal probably aren't that different from the US, but northern European countries like Denmark, Finland, Sweden, Switzerland, and Germany have the lowest levels of corruption in the world, are Europe's leaders in AI, and are big playmakers in EU decisions.
Calm_Bonus_6464 t1_j1xympq wrote
Reply to comment by Webemperor in Concerns about the near future and the current gatekeepers of AI by dracount
> In West this is extremely unlikely since Western governments are essentially owned by corporations
US perhaps, but not Europe. I could actually see the EU attempting to regulate it.
Calm_Bonus_6464 t1_j1x6ega wrote
Reply to comment by TheLastSamurai in Concerns about the near future and the current gatekeepers of AI by dracount
Even if it came in the West, China isn't going to stop developing AI simply because the West chooses to regulate it.
And good luck regulating a being more intelligent than you once ASI happens.
Calm_Bonus_6464 t1_j1x64vo wrote
Reply to comment by TheLastSamurai in Concerns about the near future and the current gatekeepers of AI by dracount
Well, to be blunt, that's not going to happen. And given all the possibilities for good, I wouldn't want AI progress to stop.
Calm_Bonus_6464 t1_j1x5pvl wrote
Reply to comment by TheLastSamurai in Concerns about the near future and the current gatekeepers of AI by dracount
So you want to stop the development of AI? Because AGI/ASI and the Singularity inevitably mean the above happens. The only way to stop it is to halt technological progress.
Calm_Bonus_6464 t1_j1vy8mc wrote
I don't know why you're assuming we have a choice. If we have beings infinitely more intelligent than us, there's no possible way we can retain control. In a worst case scenario, AI could even be hostile towards humans and destroy our species, which is precisely what people like Stephen Hawking warned us about.
AI governance is inevitable, and there's nothing we can do to stop it. For the first time in 300,000 years we will no longer be Earth's rulers, and we will have to come to accept this.
Calm_Bonus_6464 t1_j1srzke wrote
Reply to comment by OldWorldRevival in Will the singularity require political revolution to be of maximum benefit? If so, what ideas need to change? by OldWorldRevival
ASI does come before the Singularity, and ASI would solve many of those concerns. ASI has no reason to be any more benevolent to elites than to anyone else, and elites cannot control a being that is far more intelligent than them. You're thinking of AGI, not ASI; both have to happen before the Singularity.
Calm_Bonus_6464 t1_j1snyzn wrote
Reply to comment by OldWorldRevival in Will the singularity require political revolution to be of maximum benefit? If so, what ideas need to change? by OldWorldRevival
But we're not just talking about AGI here; the Singularity would require ASI. Not just human-level intelligence, but intelligence far beyond that of all humans who have ever lived. A being that intelligent could pretty easily orchestrate political takeovers, or even destroy humanity if it so desired.
Calm_Bonus_6464 t1_j1smhpe wrote
Reply to Will the singularity require political revolution to be of maximum benefit? If so, what ideas need to change? by OldWorldRevival
Once the Singularity is achieved, it's not going to matter what your political beliefs are; AI would be calling the shots whether you like it or not.
For the first time in 300,000 years we will no longer be the most intelligent form of life on Earth, and this means beings far more intelligent than us will decide humanity's future. How that happens is anyone's guess. A post-singularity world will be so radically different from today that modern economic theories and solutions will likely have no place.
Calm_Bonus_6464 t1_j1sl6ox wrote
Reply to comment by Reeferchief in What is the sub's prevailing political ideology? (post singularity) by 4e_65_6f
Was this written by ChatGPT lol
Calm_Bonus_6464 t1_j1sfta7 wrote
Reply to comment by 4e_65_6f in What is the sub's prevailing political ideology? (post singularity) by 4e_65_6f
You're assuming AI would be benevolent enough to delegate power to humans, and I see no reason to believe that in a post-singularity world. What's stopping AI from deciding what's best for humanity if it's infinitely more intelligent than us?
What you're describing is how governance will be post-AGI; by that point it will be just recommendations. But ASI and the Singularity change everything.
Calm_Bonus_6464 t1_j1sexmt wrote
Reply to comment by 4e_65_6f in What is the sub's prevailing political ideology? (post singularity) by 4e_65_6f
Once we achieve AGI, I believe those will be just recommendations, but once we achieve ASI and the Singularity and have beings infinitely more intelligent than us, I can't imagine human governance continuing. If AI wanted to govern, there would be no way of stopping it. And even if AI were somehow benevolent enough to delegate this power to humans, why would we even want to continue governing ourselves when we have what's equivalent to God to make those decisions for us? We probably wouldn't even have the intelligence necessary to govern in a post-singularity world.
Calm_Bonus_6464 t1_j1sd5e5 wrote
Once the Singularity is achieved, it's not going to matter what your political beliefs are; AI would be calling the shots whether you like it or not.
Calm_Bonus_6464 t1_j27gttb wrote
Reply to what do you a day in life would be like for a regular person in type III civilization? by RGthehuman
Humanity will merge with AI; AI will give us the tools for intelligence augmentation