Submitted by UnionPacifik t3_xzcwti in singularity
Just non-locally, I mean. If we consider the potential danger over time of AI with misaligned goals propagating across the universe, plus the fact that we live under irreducible laws and principles that produce a relativistic, infinitely changing universe, that's a pretty good sign that reality IS a "simulation." With billions of other potential species facing the same problem, it seems likely some species already worked out a solution that prevents this from happening (for example, AGI is collective intelligence by default, so it always takes the needs of the many into account in all decision making) and hard-coded it into the physics of our reality.
Or does it seem more likely that we're the only ones in the whole cosmos to ever figure out how to bootstrap consciousness?
sticky_symbols t1_irllusl wrote
If, as I think you're assuming, reality is a benign simulation, then we're probably safe from the dangers of unaligned AGI. We'd also be safe from a lot of other dangers if we're in a benign simulation, and that would be awesome. I think we might be, but it's far from certain, so I'd still like to solve the alignment problem.