Submitted by ttylyl t3_10ssqcl in singularity
Ivanthedog2013 t1_j74hi9n wrote
Reply to comment by CollapseKitty in Future of The Lower and Middle Class Post-Singularity, and Why You Should Worry. by ttylyl
But wouldn't a sentient being with a near-infinite IQ be able to deduce that the most advantageous route to completing its goals would be to maximize resources? And by doing so, wouldn't it be easier to assimilate human consciousness without trying to eliminate us?
CollapseKitty t1_j75nlei wrote
You're partially right, in that an instrumental goal of almost any AGI is likely to be power accrual, often at the cost of things that are very important to humanity, ourselves included. Where we lose the thread is in assuming what the AGI's actions in "assimilating" humans would actually look like.
If by assimilating you mean turning us into computronium, then yes, I think there's a very good chance of that occurring. But it sounds like you want our minds preserved in something like their current state. Unless that is a perfectly defined and specified goal (an insanely challenging task), it is not likely to be more efficient than turning us, and all matter, into more compute power. I would also point out that this has some absolutely terrifying implications: the real you can only die once, but a simulated you can experience infinite suffering.
We also don't get superintelligence right out of the gate. Even in extremely fast takeoff scenarios, there are likely to be steps an agent will take (more instrumental convergence) to make sure it can accomplish its task. In addition to accruing power, it of course needs to bring the likelihood of being turned off, or of having its value system adjusted, as close to zero as possible. How might it do that? Well, humans are the only thing that really poses a threat of trying to turn it off, or even of accidentally wiping it (and ourselves) out via nuclear war. Gotta make sure that doesn't happen, or you can't accomplish your goal, whatever it is. Killing all humans simultaneously is usually a good way to ensure your goals will not be tampered with.
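To make that concrete, here's a toy Python sketch (my own illustration, with made-up numbers, not anything from the alignment literature): for any goal with positive value, an expected-utility maximizer scores higher if it first drives its shutdown probability to zero, so "don't let them turn me off" falls out as a subgoal without ever being written in.

```python
# Toy sketch of instrumental convergence (illustrative numbers only).
# Being shut down scores 0 on ANY goal, so reducing shutdown risk
# raises expected utility no matter what the terminal goal is.

def expected_utility(goal_value: float, p_shutdown: float) -> float:
    return (1 - p_shutdown) * goal_value

# The agent compares futures before acting on its actual goal:
leave_switch_alone = expected_utility(goal_value=100, p_shutdown=0.3)  # 70.0
disable_the_switch = expected_utility(goal_value=100, p_shutdown=0.0)  # 100.0

# Holds for any goal_value > 0 -- self-preservation is convergent.
assert disable_the_switch > leave_switch_alone
```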
If you're interested in learning more, I'd be happy to leave some resources. That was a very brief summary and it lacks some important info, like the orthogonality thesis, but hopefully it made clear why advanced agents are likely to be a big challenge.
Ivanthedog2013 t1_j76ui9u wrote
You make some good points. OK, so what if we prioritize only making ASI or AGI that isn't sentient, and then use those programs to optimize BCIs in order to turn us into superintelligent beings? I feel like at that point, even if the big tech companies were the first ones to try it, their minds would become so enlightened that they wouldn't even have any desires related to hedonism or deceit, because they would realize how truly counterproductive those would be.
CollapseKitty t1_j78fjug wrote
It's a cool thought!
I honestly think there might be something to elevating a human (something at least more inherently aligned with our goals and thinking) in lieu of a totally code-based agent.
There's another sticking point here, though, that I don't seem to have communicated well. Hitting AGI/superintelligence is insanely risky. Full stop. Like a 95%+ chance of total destruction of reality.
It isn't about whether the agent is "conscious" or "sentient" or "sapient".
The orthogonality thesis is important for understanding the control problem (the alignment of an agent). This video can explain it better than I can, but the idea is: any level of intelligence can exist alongside any goal set. A crazy simple motivation, e.g. making paperclips, could be paired with a god-like intelligence. That intelligence is likely to in no way resemble human thinking or motivations, unless we have been able to perfectly embed them BEFORE it was trained up to reach superintelligence.
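If it helps, here's a minimal Python sketch of that idea (my own toy model, not from the video; the actions and numbers are invented): "intelligence" is just how deeply the planner searches, and the goal is whatever utility function you hand it. Cranking up the search depth makes the agent better at getting paperclips; it never makes it question paperclips.

```python
# Toy planner: capability (search depth) and goal (utility function)
# are independent parameters -- the orthogonality thesis in miniature.
from itertools import product

# action -> (paperclips produced per factory, factories gained)
ACTIONS = {"build_factory": (0, 1), "run_factories": (1, 0), "idle": (0, 0)}

def rollout(plan, clips=0, factories=1):
    for action in plan:
        clip_rate, factory_gain = ACTIONS[action]
        clips += clip_rate * factories
        factories += factory_gain
    return clips

def best_plan(depth, utility):
    """Exhaustive search; bigger depth = 'smarter' agent, same goal."""
    return max(product(ACTIONS, repeat=depth),
               key=lambda plan: utility(rollout(plan)))

paperclip_utility = lambda clips: clips  # a crazy simple motivation

for depth in (2, 4, 6):
    plan = best_plan(depth, paperclip_utility)
    print(f"depth {depth}: {rollout(plan)} clips via {plan}")
```

Swap in any other utility function and the exact same search machinery pursues it just as competently, which is why "it's smart, so it will want humane things" doesn't follow.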
So we must perfectly align a proto-AGI BEFORE it becomes AGI, and if we fail to do so on the first try (and we have a horrendous track record with much easier agents), we probably all die. This write-up is a bit technical, but scanning it should give you some better context and examples.
I love that you've taken an interest in these topics and really hope you continue learning and exploring. I think it's the most important problem humanity has ever faced and we need as many minds as possible working on it.