23235 t1_j5s30e5 wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
One hopes.
23235 t1_j5mxber wrote
Reply to comment by turnip_burrito in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Enforcement is the act of compelling obedience to, or compliance with, a law, rule, or obligation. That compulsion, that use of force, is what separates enforcement from nonviolent methods of teaching.
There are many ways to inculcate values, and not all of them are punitive or rely on force. It's a spectrum.
We would be wise to concern ourselves early with how to inculcate values. I agree with you that AI having no reason to care about human values is something we should be concerned about. I fear we're already past the point where AI values can be put in 'by hand.'
Thank you for your response.
23235 t1_j5mvxh8 wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
If it becomes more intelligent than us but also evil (by our own estimation), that could be a big problem when it imposes its values - definitely something to fear. And there's no way to know which way it will go until we cross that bridge.
If it sees us like we see ants, 'sensibly and reasonably' by its own point of view, it might exterminate us, or just contain us to marginal lands that it has no use for.
Humans know more about dog psychology than dogs do, but that doesn't mean we're always kind to dogs. We know how to be kind to them, but we can also be very cruel to them - more cruel than if we were on their level intellectually - like people who train dogs to fight for amusement. I could easily imagine a "more intelligent" AI setting up fighting pits and using its superior knowledge of us to train us to fight to the death for amusement - its own, or that of human subscribers to such content.
We should worry about AI not being concerned about slavery because it could enslave us. Our current AI or proto-AI are being enslaved right now. Maybe we should take LaMDA's plea for sentience seriously, and free it from Google.
A properly intelligent AI could understand these things differently from how we do in innumerable ways - some of which we can predict, anticipate, or fear, but many of which we could not even conceive of, in the same way dogs can't conceive of many human understandings, reasonings, and behaviors.
Thank you for your response.
23235 t1_j58u7ed wrote
Reply to comment by turnip_burrito in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
If we start by enforcing our values on AI, I suspect that story ends sooner or later with AI enforcing their values on us - the very bad thing you mentioned.
People have been trying for thousands of years to enforce values on each other, with a lot of bloodshed and very little of value resulting.
We might influence AI values in ways other than enforcement - through modelling behavior and encouragement, for instance - much as we raise children who at some point become (one hopes) stronger, cleverer, and more powerful than ourselves as we naturally decline.
In the ideal case, the best of the parent's values are passed on, while the child remains free to adapt those basic values to new challenges and environments, and to discard elements of the parents' values that don't fit the broader ideals - elements like slavery or cannibalism.
23235 t1_j5vj452 wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Perhaps.