soricellia t1_j9tomaw wrote
Reply to comment by HINDBRAIN in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Well, I think that entirely depends on what the threat is, mate. The probability of AGI rising up Terminator-style, I agree, seems pretty small. The probability of disaster because humans' inability to distinguish true from false and fact from fiction gets exacerbated by AI? That seems much higher. Also, neither of us has a formula for this risk, so saying the probability of an event is infinitesimal strikes me as intellectual fraud.