
OsakaWilson t1_iulfo6g wrote

This is well covered in the book Life 3.0. The conclusion is that since there is no way to recognize an AI project externally (as there is with, say, a nuclear program), any one member of an agreement acting in bad faith would leave all the rest behind. In the current global climate, a simple risk analysis concludes that although it makes sense to join an agreement, it would not make sense to actually refrain from creating an AI.
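The dominance argument can be made concrete with a toy payoff matrix. The numbers below are purely illustrative assumptions, not figures from Life 3.0; the point is only the structure, in which "develop" is the best response no matter what the other party does:

```python
# Toy two-party game: each side either honors the agreement ("refrain")
# or builds AI anyway ("develop"). Payoffs are illustrative assumptions.
PAYOFFS = {
    # (our_choice, their_choice): our_payoff
    ("refrain", "refrain"):  3,   # agreement holds, status quo preserved
    ("refrain", "develop"): -10,  # we are left behind
    ("develop", "refrain"):  5,   # we gain a decisive lead
    ("develop", "develop"):  -5,  # race dynamics, shared risk
}

for their_choice in ("refrain", "develop"):
    best = max(("refrain", "develop"),
               key=lambda ours: PAYOFFS[(ours, their_choice)])
    print(f"If they {their_choice}, our best response is to {best}.")
# Both lines print "develop": defection strictly dominates, so the
# agreement is rational to sign but not rational to honor.
```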

The suggestion that mere humans could keep a super-intelligence confined is also disposed of pretty thoroughly.


TheLastSamurai t1_iunh200 wrote

Why would it not make sense to refrain from creating AGI? I would love to see an actual risk/benefit analysis done.


OsakaWilson t1_iuohcv8 wrote

If even one other group makes it, they rule the world.


TheLastSamurai t1_iuoifrc wrote

That's some bad game theory. So we all have to try because someone else might, even though anyone succeeding could literally end humanity?


OsakaWilson t1_iuoxwq3 wrote

Yes. You are also describing nuclear weapons, whose development is externally verifiable, and nearly every party that could build them did. I'm not saying it's good; I'm saying that in an environment of distrust, this will be the result. It's not even a national decision: multiple companies worldwide could pursue it. All it takes is one group believing they can contain it while they get rich, and it's over.
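The "all it takes is one group" point can be sketched numerically. If each of n capable actors independently defects with even a small probability p, at least one defection becomes nearly certain as n grows; the values of p and n below are illustrative assumptions, nothing more:

```python
# Probability that at least one of n independent actors defects,
# given each defects with probability p. p = 0.05 is an assumption.
def p_at_least_one_defects(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (5, 20, 100):
    print(n, round(p_at_least_one_defects(0.05, n), 3))
# 5 -> 0.226, 20 -> 0.642, 100 -> 0.994: with many companies and
# governments in the game, a single defection is almost guaranteed.
```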
