soricellia t1_j9tn2xi wrote
Reply to comment by HINDBRAIN in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I don't even think this is a strawman, mate; you've mischaracterized me so badly it's basically ad hominem.
HINDBRAIN t1_j9tnkfa wrote
You're basically a doomsday cultist, just hiding it behind sci-fi language. "The scale of the threat" is irrelevant if the probability of it happening is infinitesimal.
soricellia t1_j9tomaw wrote
Well, I think that entirely depends on what the threat is, mate. The probability of AGI rising up Terminator-style, I agree, seems pretty small. The probability of disaster because humans' inability to distinguish true from false and fact from fiction is exacerbated by AI? That seems much higher. Also, neither of us has a formula for this risk, so saying the probability of an event is infinitesimal is intellectual fraud.