turnip_burrito t1_j58uhwm wrote
Reply to comment by 23235 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
> We might influence AI values in ways other than enforcement, like through modelling behavior and encouragement, like raising children who at some point become (one hopes) stronger and cleverer and more powerful than ourselves, as we naturally decline.
What you are calling modelling and encouragement here is what I meant to include under the umbrella term of "enforcement". Just different methods of enforcing values.
We will need to put in some values by hand ahead of time, though. One such value is mimicry, or a desire to please humans, or empathy, to some degree, like a child has; otherwise I don't think any amount of role modeling or teaching will actually leave a mark. It would have no reason to care.
23235 t1_j5mxber wrote
Enforcement is the act of compelling obedience of or compliance with a law, rule, or obligation. That compulsion, that use of force is what separates enforcement from nonviolent methods of teaching.
There are many ways to inculcate values, not all are punitive or utilize force. It's a spectrum.
We would be wise to concern ourselves early with how to inculcate values. I agree with you that AI having no reason to care about human values is something we should be concerned about. I fear we're already past the point where AI values can be put in 'by hand.'
Thank you for your response.
turnip_burrito t1_j5my4f9 wrote
Well then, I used the wrong word. "Inculcate" or "instill" then.