shoegraze t1_j9s22kq wrote
Reply to comment by royalemate357 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Yep, if we die from AI, it will be from bioterrorism well before we get enslaved by a robot army. And the bioterrorism could even happen before "AGI" rears its head.