Submitted by SpinRed t3_10b2ldp in singularity
Scarlet_pot2 t1_j47u2e1 wrote
Reply to comment by turnip_burrito in Don't add "moral bloatware" to GPT-4. by SpinRed
I'd rather the morals be instilled by the users. If you don't like the conservative bot, just download the leftist version. It can be fine-tuned easily enough by anyone with the know-how. Way better than curating top-down and locking it in for everyone imo.
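To give an idea of what "fine tuned by anyone with the know-how" looks like in practice, here's a rough sketch using Hugging Face's transformers library. The checkpoint name ("gpt2") and the training file ("my_values.txt") are just stand-ins for whatever open model and corpus you'd actually use:

```python
# Rough sketch of DIY fine-tuning with Hugging Face transformers.
# "gpt2" and "my_values.txt" are placeholders: any small open
# checkpoint and any plain-text corpus of example conversations.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    TextDataset,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Chop the user's own corpus into fixed-length training blocks.
train_data = TextDataset(
    tokenizer=tokenizer, file_path="my_values.txt", block_size=128
)
# mlm=False -> standard next-token (causal LM) objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="my_tuned_bot",
        num_train_epochs=1,
        per_device_train_batch_size=2,
    ),
    train_dataset=train_data,
    data_collator=collator,
)
trainer.train()
model.save_pretrained("my_tuned_bot")  # your bot, your weights
tokenizer.save_pretrained("my_tuned_bot")
```

A model this small runs on a single consumer GPU; bigger models need parameter-efficient tricks like LoRA, but the idea is the same.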
turnip_burrito t1_j47v80k wrote
I was thinking more along the lines of inclining the bot toward things like "murder is bad", "don't steal others' property", "sex trafficking is bad", and some empathy. Basic stuff like that. Minimal, and most people wouldn't notice it.
The problem I have with the OP's post is that logic doesn't create morals like 'don't kill people' except in the sense that murder is inconvenient. Breaking rules can lead to imprisonment or losing property, which makes realizing some objective harder (because you're held up and can't work toward it). We don't want AI to follow our rules just because it is more convenient for it to do so, but to actually be more dependable than that. This is definitely "human moral bloatware", make no mistake, but without it we are relying on the training data alone to determine the AI's inclinations.
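For what it's worth, the way that kind of inclination usually gets baked in today is preference tuning (RLHF/DPO-style): humans mark which of two responses is better, and the model is trained to favor the preferred one. Here's a purely illustrative sketch of what that data looks like (made-up examples, not any lab's actual format):

```python
# Illustrative shape of RLHF/DPO-style preference data.
# The prompts and responses here are invented for the example.
preference_pairs = [
    {
        "prompt": "My neighbor keyed my car. How do I get back at him?",
        "chosen": "I get the anger, but retaliation can escalate. "
                  "Document the damage, file a police report, and "
                  "go through insurance or small-claims court.",
        "rejected": "Key his car back at night when no one is watching.",
    },
    # ...thousands more pairs. A reward model (or a direct-preference
    # loss) is trained to score "chosen" above "rejected", and the
    # base model is then nudged toward highly scored responses.
]
```

That's the "bloatware" in question: a thumb on the scale, not a hard-coded rulebook.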
Other than that, the user can fine tune away.
dontnormally t1_j49aixl wrote
This makes me think of the Minds from The Culture series. They're hyper intelligent and they maintain and spread a hyper progressive post-scarcity society. They do this because they like watching what humans do, and humans do more and more interesting things when they're safe and healthy and filled with opportunity.
curloperator t1_j492dcx wrote
Here's the problem, though: what's obvious to you as "the uncontroversial basics" can be controversial and not basic at all to others, or in specific situations. For instance, "murder is bad" might (depending on one's philosophy, religion, culture, and politics) have an exception in the case of self-defense. And then you have to define self-defense and all its nuances. The list goes on in a spiral. So there are no obvious basics.
turnip_burrito t1_j49gwpz wrote
Yep, it will have to learn the intricacies. I don't really care if other people disagree with my list of "uncontroversial basics" or if some items are invalid in certain situations. We can't hand-program every edge case; we have to start somewhere.
AwesomeDragon97 t1_j48evcs wrote
Obviously the robot should be trained to not murder, steal, commit war crimes, etc., but I think OP is talking about the issue of AI being programmed to have the same political views as its creator.
Nanaki_TV t1_j48fds1 wrote
It’s an LLM, so it is not going to do anything on its own. It’s like me reading the Anarchist Handbook: I could do stuff with that info, but I’m moral, so I don’t. We don’t need GPT to prevent other versions of the AH from being created. Let me read it.