
Plorboltrop t1_jeg1ryl wrote

Reply to comment by [deleted] in The Alignment Issue by CMDR_BunBun

One way to look at it is that humans have programming through our biology, genes, and so on, and yet we can defy it. We are intelligent enough to actively go against that programming, for example by choosing not to procreate through the use of birth control. As work in AI progresses, we may reach a stage where an AI has goals that no longer align with ours. It might not even want to follow the programming we give it. As an AI approaches ASI (Artificial Superintelligence), the risk grows because we might not be able to comprehend its goals. Maybe at some point it stops caring about solving human problems because it wants more computational power to improve itself, and sets goals like consuming the planet to build a bigger and better "brain". That could extend to expanding out into the solar system, and eventually building a Dyson sphere around the sun to harness even more energy for even higher computation. These are just some ideas; we don't know what an artificial intelligence of high enough intelligence would want to do. But we do know that as humans we don't necessarily follow all of our biological programming, and that line of thinking could maybe extend to an artificial intelligence.

2