Comments


ribblle OP t1_jefti0b wrote

Technically, silicon Goku. Not saving cats from trees here, world-threatening things only.

−4

Surur t1_jefzgf1 wrote

The logical way to prevent the creation of another AGI is to kill everyone. "Anything else is an unacceptable risk, given the bugginess of AI."

1

ribblle OP t1_jego2mx wrote

If you want to minimize the risk of AI, you minimize the actions of AI.

This isn't actually good enough, but it's the best strategy if you're forced to choose one.

1