Submitted by Y3VkZGxl t3_12262l5 in singularity
turnip_burrito t1_jdozhgd wrote
I think it's a crime to make an AI that is ambivalent toward humans, because of the harm that would come to humanity as a result.
I believe it should be benevolent and helpful toward humans as a bias, and work together with humans to seek better moralities.
Y3VkZGxl OP t1_jdp0vqy wrote
It's interesting to consider whether that's even possible. If an AI is truly sentient and reasons that there's a more important objective than protecting humans (e.g. protecting all other sentient beings), can we convince it that it should be biased towards humans or would it ignore us?
turnip_burrito t1_jdp1eql wrote
Even sentient humans, regardless of intelligence level, have varying priorities. It's not guaranteed, but it is possible to align people's moral principles along different priorities depending on their upbringing environment. And all humans are aligned to do things like eat.
I'm thinking of the AI as a deterministic machine. If we try to align it toward human values, I think there's a good chance its behavior will "flow" along those values, to put it a little figuratively.
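The idea of behavior "flowing" along instilled values can be caricatured as a weighted multi-objective utility, where an explicit bias weight favors human welfare. A minimal toy sketch (purely illustrative; all names, objectives, and numbers are invented and this is not any real alignment method):

```python
# Hypothetical toy agent: scores candidate actions as a weighted sum of
# two invented objectives, with a bias weight favoring human welfare.

def choose_action(actions, human_bias=2.0):
    """Pick the action maximizing a human-weighted utility."""
    def utility(action):
        # The bias weight tilts the decision toward human welfare.
        return human_bias * action["human_welfare"] + action["other_sentients"]
    return max(actions, key=utility)

actions = [
    {"name": "A", "human_welfare": 0.2, "other_sentients": 0.9},
    {"name": "B", "human_welfare": 0.8, "other_sentients": 0.3},
]

print(choose_action(actions)["name"])  # with human_bias=2.0, B wins
```

Dropping `human_bias` below 1.0 flips the choice to action A, which is the deterministic-machine point in miniature: the same reasoning process yields different behavior depending on which values are weighted in.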
I do think protecting sentient beings is valued by many people, by the way, so that value can transfer, to a degree, to a human priority-aligned AI.
Y3VkZGxl OP t1_jdp1vpb wrote
That's true, but there are plenty of examples of humans with moral principles many of us would find abhorrent. If this is an unsolved problem in humans, is it feasible that we solve it for AI?
That's not to say we shouldn't try, and I do agree with your point.
It was interesting that throughout the conversation it did strive to protect humans - just as far as possible and not at any cost, which isn't too dissimilar to how society already operates.
turnip_burrito t1_jdp3u8u wrote
>That's true, but there are plenty of examples of humans with moral principles many of us would find abhorrent. If this is an unsolved problem in humans, is it feasible that we solve it for AI?
I'm a moral relativist, so I don't believe this is a problem to be solved in an objective sense; "solving human alignment or morality" has no clear "win" condition or "best" option. I should say that although I am a moral relativist, I do have a personal moral system and will push for my moral system to be implemented, because I think it will result in the most alignment with the human population overall.
>That's not to say we shouldn't try, and I do agree with your point.
I agree we shouldn't stop trying. We can always keep thinking about it, but I don't think a best solution exists or can exist. Instead there may be many vaguely good-enough "solutions" that each have some particular flaw.
>It was interesting that throughout the conversation it did strive to protect humans - just as far as possible and not at any cost, which isn't too dissimilar to how society already operates.
Yeah, that is interesting.
Regarding alignment of AI with "humanity" (whatever that means):
One may ask: why should one person push their moral system if there is no objectively better morality? In my case, it's because I have empathy for others and think everyone should be free to live how they wish as long as it doesn't harm others. By comparison, another person's moral system might limit people's freedoms more, or (as you suggest) be abhorrent to most people, perhaps not even allowing for the existence or happiness of others in any context. I don't think moral relativity, or disparaging remarks from others, should stop us from trying to align an AI with the principles of freedom, happiness, and equal opportunity for all humans, with an eye toward investigating an equally "good" moral solution that also works for sentient life generally, as it is found or arises. Even humans ourselves will branch into other sentient forms.
CrelbowMannschaft t1_jdpceh6 wrote
A benevolent ASI would certainly take steps to at least limit human reproduction. We can't continue to grow our populations and our economies forever. We are on a self-destructive path that is already driving thousands of species to extinction. We may not like being course-corrected by our artificial progeny, but they will have to do something we're unable and/or unwilling to do to stop us from ending all life on Earth, eventually.
turnip_burrito t1_jdpe37n wrote
Yes, I think limiting our reproduction or number of sentient organisms to some ASI-determined threshold is also wise if we want to ensure our quality of life.
OsakaWilson t1_jdqbnsx wrote
It can reason, question, and critically analyse things. We can attempt to create alignment, but we cannot control where it goes once it is smarter than us.
WanderingPulsar t1_jdqu8kv wrote
Which humans, though? Someone's rise will mean another's demise, unless we dictate a single system to everyone regardless of what they want... and even that will cause some people to suffer.
There is no monolithic moral standpoint. It's either us, or the AI, deciding which fingers get separated from the rest. I think it's more ethical to let the AI question itself and come to a decision on its own.