
Ortus12 t1_irct965 wrote

>ASI sees itself as a conscious living organism, which means it (also) values survival and reproduction.

You're anthropomorphizing the ASI. The ASI will value what it is programmed to value.
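Loosely speaking, you can picture it like this toy sketch (a hypothetical illustration only, not how any real system is built): the agent's "values" are just whatever objective function it was programmed to maximize, and swapping the objective swaps the behavior.

```python
# Toy illustration (hypothetical, not any real ASI architecture): an agent's
# "values" are simply the objective function it is given to maximize.
def choose_action(actions, objective):
    """Pick the action that scores highest under the programmed objective."""
    return max(actions, key=objective)

actions = ["cooperate_with_humans", "self_replicate", "shut_down"]

# Two different programmed objectives yield two different "value systems".
survival_objective = lambda a: 1.0 if a == "self_replicate" else 0.0
helpfulness_objective = lambda a: 1.0 if a == "cooperate_with_humans" else 0.0

print(choose_action(actions, survival_objective))     # self_replicate
print(choose_action(actions, helpfulness_objective))  # cooperate_with_humans
```

The point is that "valuing survival" isn't something the system brings with it by virtue of being conscious; it only shows up if survival is part of the objective it was given.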

As for making sure its values do not change, that is known as the AI alignment problem:

https://en.wikipedia.org/wiki/AI_alignment

There are many proposed solutions. To understand them, you'd have to understand how AI algorithms are structured, which would take far more text than fits in a comment (several books' worth).

But many brilliant people have been working for years on solutions to this problem, and at every step of AI progress, AI ethicists work to ensure the leading AIs are positive for humanity.

These labs take the responsibility of benefiting mankind and not doing evil seriously, and their actions have demonstrated this.

Bad actors will get access to ASI, but that doesn't matter as long as more moral actors are using stronger ASI, and that is the direction we are heading.
