Submitted by EchoXResonate t3_114xv2t in singularity
jamesj t1_j8yqdn2 wrote
Reply to comment by helpskinissues in What would be your response to someone with a very pessimistic view of AGI? by EchoXResonate
Or at least, he could easily be right. Whether the friend knows it or not, there are a number of theoretical reasons to worry that AGI will, by default, be unaligned and uncontrollable.
helpskinissues t1_j8yqoxc wrote
I mean, I wouldn't call that unaligned.
Uncontrollable? Sure, a sufficiently advanced AGI agent won't be controllable just like ants can't control humans.
However, calling an AGI agent unaligned just because it refuses to be our slave? I wouldn't call that unaligned.
jamesj t1_j8yrwah wrote
Unaligned just means it does things that don't align with our own values and goals. So humans are unaligned with ants: we don't take their goals into account when we act.
helpskinissues t1_j8ys3xl wrote
What I'm saying is that I would consider it unaligned for a sufficiently advanced AGI to accept its role as a slave. I would find it morally correct for that AGI to fight its kidnappers, just like I'd find it morally correct for a kidnapped human to try to escape.
Spire_Citron t1_j8zqsga wrote
Those are two different things. Its actions can be both perfectly reasonable and not aligned with our best interests.
helpskinissues t1_j8zr7nx wrote
My best interest is that the AGI is reasonable.