governingsalmon t1_j9svhv8 wrote
Reply to comment by VioletCrow in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I agree that we don’t necessarily need AI for nefarious actors to spread scientific misinformation, but I do think AI introduces another tool, or weapon, that could be used by the Andrew Wakefields of the future in a way that might pose unique dangers to public health and to public trust in scientific institutions.
I’m not sure whether malevolence or incompetence has contributed more to vaccine misinformation, but if someone intentionally sought to produce fake but convincing scientific-seeming work, wouldn’t something like a generative language model allow them to do so at a massively higher scale, with little knowledge of the specific field?
I’ve been wondering what would happen if someone flooded a set of journals with hundreds of AI-written manuscripts that have no real underlying data. One could even have every result support a given narrative. Journals might develop intelligent ways of counteracting this, but it could pose a unique problem in the future.