Scyther99 t1_j9zomj7 wrote
Reply to comment by VirtualHat in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
The first point is like saying that phishing was nonexistent before we invented computers and the internet, so we don't have to worry about it once we do invent them. There has been no AGI. There have been no comparable events. Basing the argument on the fact that an asteroid wiping out all life on Earth is unlikely does not make sense.