VioletCrow t1_j9smth5 wrote
Reply to comment by perspectiveiskey in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
> , I simply cannot imagine the real world damage that would be inflicted when (not if) someone starts pumping out "very legitimate sounding but factually false papers on vaccines side-effects".
I mean, just look at the current anti-vaccine movement. You just described the original Andrew Wakefield paper about vaccines causing autism. We don't need AI for this to happen, just a very credulous and gullible press.