Comments


Iffykindofguy t1_jefu7bp wrote

Thank goodness you're not in charge

7

ribblle OP t1_jego522 wrote

You realize most people don't have faith in the singularity being a safe goal.

1

Surur t1_jefzgf1 wrote

The logical way to prevent the creation of another AGI is to kill everyone. "Anything else is an unacceptable risk, given the bugginess of AI."

1

ribblle OP t1_jego2mx wrote

If you want to minimize the risk of AI, you minimize the actions of AI.

This isn't actually good enough, but it's the best strategy if you're forced to make one.

1

ribblle OP t1_jefti0b wrote

Technically, silicon Goku. Not saving cats from trees here, world-threatening things only.

−4