Submitted by popupideas t3_yz1uwp in singularity
turnip_burrito t1_iwyd3mf wrote
Any AGI with a sufficiently accurate world model would understand what a person means when they give an instruction. It's worth thinking through the implications of that.
popupideas OP t1_iwyuh8a wrote
I feel the nuanced nature of communication would be a problem, and the AI would begin to wander away from our original intent through decision drift. Plus I think it would be wise to have general parameters that all programmers must stay inside of, because humans are not nice.
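A crude Python sketch of what I mean by "general parameters": a fixed outer check that every plan has to pass, no matter whose code produced it. The effect categories here are invented, just to show the shape of a boundary no individual programmer gets to move:

    # Hypothetical illustration only: "general parameters" as a fixed
    # outer boundary applied to every plan. The effect categories are
    # made up for the example.
    FORBIDDEN_EFFECTS = {"harm_humans", "self_replicate", "modify_guardrails"}

    def within_parameters(predicted_effects: set) -> bool:
        # Reject any plan whose predicted effects touch a forbidden
        # category. Real systems can't enumerate effects this cleanly;
        # this only shows the shape of a hard, non-negotiable check.
        return predicted_effects.isdisjoint(FORBIDDEN_EFFECTS)

    print(within_parameters({"send_email"}))                    # True
    print(within_parameters({"send_email", "self_replicate"}))  # False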
turnip_burrito t1_ix4c8b2 wrote
What is decision drift?
popupideas OP t1_ix4gbdd wrote
My idea is similar to replicative drift, where every copy introduces a slight degradation or difference. As the AI keeps making choices based on the original objective, each new choice works from its last interpretation rather than the real intent, so the real intent of the objective gets drifted away from.
Even though the objective is still there, the AI will begin to make unexpected choices, and may take an unforeseen route to accomplishing the objective, with unexpected consequences.
May not be the best name for it, but this isn't my area of expertise.
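A toy Python sketch of the compounding-copy idea (all numbers arbitrary): each decision re-derives the objective from the previous copy rather than the original, so small errors add up instead of cancelling out.

    import math
    import random

    def drift_simulation(steps=100, noise=0.02, seed=0):
        # Toy model of decision drift: the agent's working copy of the
        # objective picks up a small random perturbation at every
        # decision, and because each step starts from the last copy
        # (never the original), the errors compound.
        random.seed(seed)
        original = [1.0, 0.0, 0.0]   # the intent we actually meant
        working = list(original)     # the copy the agent acts on
        for step in range(1, steps + 1):
            working = [w + random.gauss(0, noise) for w in working]
            if step % 20 == 0:
                dist = math.dist(original, working)
                print(f"step {step:3d}: distance from original intent = {dist:.3f}")

    drift_simulation()

Note that the objective never changes its label, only its contents, which is the point: nothing flags the drift from the inside.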
turnip_burrito t1_ix4gnzk wrote
Interesting idea, could be a problem. Definitely something to consider.