
archpawn t1_j495qg8 wrote

> I understand we need safeguards to keep AI from becoming dangerous,

I think this is all the more reason to avoid moral bloatware. Our current methods won't work: at best, we can get an AI to figure out the better choice in situations similar to its training data, and post-singularity, nothing will resemble the training data. All we'd be doing is hiding how dangerous the AI is and making it less likely that people will research methods that have a hope of working.
