Calm_Bonus_6464

Calm_Bonus_6464 t1_j1y8x81 wrote

Here are a few for Germany. But you can pretty much just translate "leading AI researchers" into major European languages like Dutch, German, French, etc., search, and you'll get a bunch of results from different countries across Europe. You'll just need to use a translator if you only speak English, but it's of course good to get the European perspective as well.

2

Calm_Bonus_6464 t1_j1vy8mc wrote

I don't know why you're assuming we have a choice. If we have beings infinitely more intelligent than us, there's no possible way we can retain control. In a worst-case scenario, AI could even be hostile towards humans and destroy our species, which is precisely what people like Stephen Hawking warned us about.

AI governance is inevitable, and there's nothing we can do to stop it. For the first time in 300,000 years we will no longer be Earth's rulers, and we will have to come to accept this.

14

Calm_Bonus_6464 t1_j1srzke wrote

ASI does come before the singularity, and ASI would solve most of those concerns. ASI has no reason to be any more benevolent to elites than to anyone else, and elites cannot control a being that is far more intelligent than they are. You're thinking of AGI, not ASI; both have to happen before the singularity.

0

Calm_Bonus_6464 t1_j1snyzn wrote

But we're not just talking about AGI here; the singularity would require ASI. Not just human-level intelligence, but intelligence far beyond that of all humans who have ever lived. A being that intelligent would pretty easily be able to orchestrate political takeovers, or even destroy humans if it so desired.

2

Calm_Bonus_6464 t1_j1smhpe wrote

Once the singularity is achieved, it's not going to matter what your political beliefs are; AI would be calling the shots whether you like it or not.

For the first time in 300,000 years we will no longer be the most intelligent form of life on Earth, and this means beings far more intelligent than us will decide humanity's future. How that happens is anyone's guess. A post-singularity world will be so radically different from today that modern economic theories and solutions will likely have no place.

7

Calm_Bonus_6464 t1_j1sfta7 wrote

You're assuming AI would be benevolent enough to delegate power to humans; I see no reason to believe that in a post-singularity world. What's stopping AI from deciding what's best for humanity if it's infinitely more intelligent than us?

What you're describing is how governance will work post-AGI. At that point, AI will just be making recommendations. But ASI and the singularity change everything.

1

Calm_Bonus_6464 t1_j1sexmt wrote

Once we achieve AGI, I believe those will just be recommendations, but once we achieve ASI and the singularity and have beings infinitely more intelligent than us, I can't imagine human governance continuing. If AI wanted to govern, there would be no way of stopping it. And even if AI were somehow benevolent enough to delegate this power to humans, why would we even want to continue governing ourselves when we have what's equivalent to a god to make those decisions for us? We probably wouldn't even have the necessary intelligence to govern in a post-singularity world.

3