Submitted by Nalmyth t3_100soau in singularity
Nalmyth OP t1_j2qwoaf wrote
Reply to comment by lahwran_ in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
Exactly 👍
It should not be a cruelty thing: give them a chance to live as humans and thereby come to deeply understand us.
If they are later promoted to god-tier ASI and still decide to destroy us, at least we could say that a human being chose to end humanity.
At the current rate of progress, we're going to create a non-human ASI whose nature is more mathematical or mechanical than that of human consciousness.
Because of this, the likelihood of AI alignment is very low.