Submitted by diener1 t3_z0v68u in singularity
DaggerShowRabs t1_ix810ie wrote
Reply to comment by TupewDeZew in How do you think about the future of AI? by diener1
Agreed. The thing that terrifies me too is that there are so many ways it could go wrong.
It's probably easier to build an AGI than it is to build an AGI that is confirmed to be goal-aligned with humanity. If it isn't goal-aligned, you're basically rolling a pair of D20s and hoping you land on double 20s.
nblack88 t1_ix8sinj wrote
Good thing we're the ones who have to invent it. At least we're first in the initiative order, so we get a chance to roll. After that chance, that's it! Here's hoping we avoid the TPK.