Submitted by OneRedditAccount2000 t3_xx0ieo in singularity
Ortus12 t1_irct965 wrote
>ASI sees itself as a conscious living organism, which means it (also) values survival and reproduction.
You're anthropomorphizing the ASI. The ASI will value what it is programmed to value.
But making sure its values stay what we intend is known as the AI alignment problem:
https://en.wikipedia.org/wiki/AI_alignment
There are many proposed solutions. To understand them, you'd need to understand how AI algorithms are structured, which would take far more text than fits in a comment (several books' worth).
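To make "misaligned" concrete, here's a minimal toy sketch (the functions and numbers are purely hypothetical, not any real lab's setup): the designer intends the agent to maximize user satisfaction, but the programmed reward is a proxy (clicks). The agent competently maximizes the proxy while the intended objective collapses.

```python
import random

def true_satisfaction(sensationalism: float) -> float:
    # Intended objective: satisfaction peaks at moderate sensationalism,
    # then collapses as content turns into pure clickbait.
    return sensationalism * (1.0 - sensationalism)

def proxy_clicks(sensationalism: float) -> float:
    # Programmed objective: clicks just keep rising with sensationalism.
    return sensationalism

def hill_climb(reward, steps=2000, step_size=0.01):
    # The "agent": a trivial optimizer that only ever sees the programmed reward.
    x = 0.1
    for _ in range(steps):
        candidate = min(1.0, max(0.0, x + random.uniform(-step_size, step_size)))
        if reward(candidate) >= reward(x):
            x = candidate
    return x

random.seed(0)
policy = hill_climb(proxy_clicks)
print(f"sensationalism chosen: {policy:.2f}")                         # ~1.00
print(f"proxy reward (clicks): {proxy_clicks(policy):.2f}")           # high
print(f"intended reward (satisfaction): {true_satisfaction(policy):.2f}")  # ~0.00
```

Note the agent isn't survival-driven or malicious; it is competently doing exactly what it was programmed to do. That gap between the programmed objective and the intended one is what alignment research tries to close.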
But many brilliant people have been working for years on solutions to this problem, and at every step of AI progress, AI ethicists work to ensure the leading AIs are positive for humanity.
These labs take seriously the responsibility of benefiting mankind and not doing evil, and their actions have demonstrated this.
Bad actors will get access to ASI, but that matters less if moral actors are using stronger ASI, and that is the direction we are heading.
WikiSummarizerBot t1_irctak3 wrote
>In the field of artificial intelligence (AI), AI alignment research aims to steer AI systems towards their designers’ intended goals and interests. An AI system is described as misaligned if it is competent but advances an unintended objective.