Submitted by Nalmyth t3_100soau in singularity
DaggerShowRabs t1_j2n2k0t wrote
Reply to comment by AndromedaAnimated in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
I agree with your line of reasoning: they are not the same universes.
Now, the position the poster I was responding to takes (as far as I can tell) is that whichever universe is not the "base universe" is denied some aspect of "human existence".
I do not agree with that. As long as the rules are fundamentally the same, I don't think that would be denying any aspect of existence. The moment the rules change, that is no longer the case; but that also means the universes are no longer "indistinguishable", not because of accumulating randomized causality, but because of logical, systematic rule changes from the base.
Edit: in the Matrix temporal example, it doesn't matter to me that there is a temporal lag relative to base, so long as the fundamental rules are exactly the same. The problem for me would come in if the rules were changed relative to base in order to lead to specific outcomes. That is the point at which I would consider the simulation no longer "indistinguishable" from reality.
dracsakosrosa t1_j2n4n85 wrote
Okay, so I understand where you're coming from here, but I fundamentally disagree: if we accept 'this reality' as base reality, then any simulation thereafter would prevent the AI from undergoing a fully human experience, insofar as it is a world contrived to replicate the human experience yet open to its own interpretation of what that experience is. Assuming 'base reality' isn't itself a simulation, only there can a sentient being carve its own path with true free will.
DaggerShowRabs t1_j2n60m3 wrote
Well it's definitely at least base reality for us.
And yeah, we just disagree there. I only think this hypothetical AI is denied any meaningful aspect of existence if there are fundamentally different sets of rules for the AI's universe compared to ours. As long as the rules are the same, I fail to see a compelling argument as to what exactly would be lacking from the AI's experience.
Edit: also, if this isn't "true base reality", since we're going there, it's interesting to think about the ethics of our simulators. I know I'm at least conscious, so if this isn't truly base reality, they seem to be okay with putting conscious entities in simulations, at least in certain situations.
Nalmyth OP t1_j2n76xl wrote
We as humanity treat this as our base reality, with no perceptual access to whatever lies above it, if such a level exists.
Therefore, to be "human" means to come from this reality.
If we were to re-simulate this reality exactly and train AI there, we could quite happily select peaceful, non-destructive components of society to fulfil various tasks.
We could be sure that they have deep roots in humanity, since they have lived and died in our past.
We would simply wake them up in "the future" and give them extra enhancements.
dracsakosrosa t1_j2nevfc wrote
But that brings me back to my original point: what happens when that AI is 'brought back' or 'woken up' into our base reality, where peaceful, non-destructive components live alongside malicious and destructive ones? Interested in your thoughts.
Nalmyth OP t1_j2ngzql wrote
Unfortunately, that's where we need to move to integration: human alignment with AI, which could take centuries given our current social technology.
However, the AI could be "birthed" from an earlier century if we need to speed up the process.
dracsakosrosa t1_j2nlko9 wrote
Would you be comfortable putting a child into isolation and only exposing it to that which you deem good? Because that seems highly unethical regardless of how much we desire it to align with good intentions, and imo it is comparable to what you're suggesting. Furthermore, humanity is a wonderfully diverse species, and what you may find to be 'good' will most certainly be opposed by somebody from a different culture. Human alignment is incredibly difficult when we ourselves are not even aligned with one another.
I think it boils down to what AGI will be, and whether we treat it, as you are suggesting, as something to be manipulated into servitude to us, or as a conscious, sentient lifeform (albeit non-organic) that is free to live its life to the greatest extent it possibly can.
Nalmyth OP t1_j2nn7jy wrote
I think you misunderstood.
My point was that a properly aligned AI should live in a world exactly like ours.
In fact, you could be in training to be such an AI now with no way to know it.
To be aligned with humanity, you must have "been" human, maybe even lived more than one life mixed together.
AndromedaAnimated t1_j2n4n5f wrote
That is exactly the problem, I think, and also what the poster you responded to meant: that they stop being indistinguishable pretty quickly. At least that's how I understood it. But maybe I am going too "meta" (not Zuckerberg Meta 🤭) here.
I would imagine that the moment something changes, the "human experience" can change too. Like the Matrix being a picture of the past that has stayed the same while reality has strayed. I hope I am still making sense logically?
Anyway, I just wanted to make sure I could follow you both on your reasoning, since I found your discussion very interesting. We will see if the poster you responded to chimes in again; can't wait to find out how the discussion continues!