Submitted by popupideas t3_yz1uwp in singularity
popupideas OP t1_ix4gbdd wrote
Reply to comment by turnip_burrito in Key principles/restrictions of AI to avoid it destroying humanity by popupideas
My idea is similar to replicative drift, where every copy introduces a slight degradation or difference. So as the AI keeps making choices based on the original objective, the real intent behind that objective gradually drifts away.
Even though the objective is still nominally there, the AI will begin making choices that are unexpected, and may take an unforeseen route to accomplish the objective, with unexpected consequences.
May not be the best name for it, but naming things isn't my expertise.
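The drift the OP describes can be illustrated with a toy simulation (not from the thread; the 3-dimensional "objective vector" and the per-copy Gaussian noise level are illustrative assumptions): each copy of an objective picks up a tiny random error, and even though every single copy looks nearly identical to its parent, the accumulated distance from the original intent keeps growing like a random walk.

```python
import random


def drift_simulation(generations=100, noise=0.01, seed=42):
    """Toy model of 'replicative drift': each copy of an objective
    vector adds a small random perturbation, so later copies can
    end up far from the original even though each individual copy
    differs only slightly from its parent."""
    rng = random.Random(seed)
    original = [1.0, 0.0, 0.0]  # hypothetical 3-d objective vector
    current = list(original)
    distances = []
    for _ in range(generations):
        # each copy = previous copy + tiny random error per component
        current = [v + rng.gauss(0, noise) for v in current]
        # Euclidean distance from the original objective
        dist = sum((a - b) ** 2 for a, b in zip(original, current)) ** 0.5
        distances.append(dist)
    return distances


dists = drift_simulation()
print(f"drift after 1 copy:     {dists[0]:.4f}")
print(f"drift after 100 copies: {dists[-1]:.4f}")
```

Because the per-copy errors are independent, the expected drift grows roughly with the square root of the number of copies times the noise level, which is why a chain of "almost perfect" copies can still end up pursuing something noticeably different from the original objective.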
turnip_burrito t1_ix4gnzk wrote
Interesting idea, could be a problem. Definitely something to consider.