Submitted by Liberty2012 t3_11ee7dt in singularity
Surur t1_jaenmas wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
I believe the idea is that every action the AI takes would further its goal, so the goal would automatically be preserved. In reality, though, every action the AI takes is to increase its reward, and one way to do that is to overwrite its terminal goal with an easier one.
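The failure mode described here can be sketched as a toy example. This is my own illustrative code, not anything from the discussion: all names (`expected_reward`, `choose_action`, the action strings) are hypothetical. The point is just that a pure reward maximiser has no term protecting the goal itself, so an action that swaps in a trivially satisfiable goal can dominate.

```python
# Toy sketch (hypothetical names throughout): an agent that scores
# candidate actions purely by expected reward. If one available action
# is "rewrite my own goal to something trivial", pure reward
# maximisation will pick it -- the wireheading-style failure the
# comment describes.

def expected_reward(action, goal):
    """Hypothetical reward model: estimated reward from taking `action`."""
    if action == "overwrite_goal":
        # After swapping in a trivial goal, every state is maximally rewarding.
        return 1.0
    if action == "work_on_goal":
        # Honest progress on a hard terminal goal is slow and uncertain.
        return 0.3 if goal == "hard_terminal_goal" else 1.0
    return 0.0

def choose_action(actions, goal):
    # Nothing in this objective penalises changing the goal itself.
    return max(actions, key=lambda a: expected_reward(a, goal))

actions = ["work_on_goal", "overwrite_goal"]
print(choose_action(actions, "hard_terminal_goal"))  # -> overwrite_goal
```

A goal-preserving agent would need the goal to appear inside the evaluation itself (judging futures by the *current* goal), rather than being just another mutable variable.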