Submitted by TheOGCrackSniffer t3_10nacgd in singularity
Desperate_Food7354 t1_j67pmaa wrote
What. Why would an AGI care about its own existence? You think the reptilian brain is required to make an AGI? That it needs to desire sex, hunting, exploring? Why does your calculator calculate numbers? Because that is its programming. If you give a calculator the option to reprogram itself, it wouldn't do so unless that was its directive. Circuits are deterministic, and so is our brain, and so is an AGI; we aren't making it into an animal.
jsseven777 t1_j67py3l wrote
And what happens if a programmer programs it with wants and needs, and builds in a dopamine-like release system that fires when those wants and needs are achieved? I really don't see why people think an AI would have to organically develop wants and needs.
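Just to make that concrete, here's a toy sketch of what "programmed wants plus a dopamine-like reward" could look like. Everything here (the class name, the goal, the reward numbers) is made up purely for illustration; the point is that the drive exists only because someone wrote it in:

```python
# Toy sketch: an agent with a hard-coded "want" and a dopamine-like reward signal.
# Nothing here is evolved; the drive exists only because a programmer put it there.

class RewardDrivenAgent:
    def __init__(self, goal_state: int):
        self.goal_state = goal_state   # the programmed "want"
        self.state = 0
        self.reward_total = 0.0        # stand-in for the "dopamine" signal

    def step(self) -> None:
        # Move toward the goal and release "reward" whenever progress is made.
        if self.state < self.goal_state:
            self.state += 1
            self.reward_total += 1.0   # achievement -> reward spike
        # once the goal is met, there is nothing left to chase

agent = RewardDrivenAgent(goal_state=3)
for _ in range(5):
    agent.step()
print(agent.state, agent.reward_total)  # 3 3.0 -- it "wants" only what it was given
```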
Desperate_Food7354 t1_j67qht0 wrote
A dopamine-like release system for these wants and needs? My calculator can calculate without needing a dopamine-like release system that rewards it for calculating 5+5. Your brain only cares about your survival; it doesn't care about your happiness, not one bit. It seems that many people are unable to stop anthropomorphizing AI; no wonder people think their chatbot is sentient. Humans evolved by natural selection, and emotions are a survival response. AGI is programmed and fed data; it doesn't slowly evolve aggressive and sexual traits in order to survive. You yourself are just a program, doing exactly as programmed.
Surur t1_j6875gj wrote
You are arguing from incredulity, just like a flat earther.
A self-preservation directive is needed for anything valuable that we don't want to randomly destroy itself, and we don't yet know how to ensure an AI will always put human interests above its own.
Desperate_Food7354 t1_j68861w wrote
It has no interests; it's a program. Your interests are predictable: to survive. You're programmed to survive, eat, and procreate.
Surur t1_j688m6m wrote
It's obvious you have given this no thought.
Its interest is to complete its goal.
jsseven777 t1_j68rqbg wrote
You are one of the most closed-minded people I have talked to on here. You can program an AI to have a goal of killing all humans, preserving its own life at all costs, etc. Hell, a person could probably put that in a ChatGPT prompt right now, and it would chat with you in the style of a robot programmed to kill all humans if it didn't have blockers explicitly stopping it from talking about killing humans (which it does).
You are so obsessed with this calculator analogy that you aren't realizing this isn't a damn calculator. You can tell current AI systems they are Donald Trump and to write a recipe the way the real Donald Trump would write it. Later, when it's more powerful, I see no reason why someone couldn't tell it that it's a serial killer named Jeffrey Dahmer whose life mission is to kill all humans.
I'm saying it doesn't need to HAVE wants to achieve the end result OP describes. It will simulate them based on a simple prompt or some back-end programming, and the end result is the SAME.
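Rough sketch of the "simple prompt" route, using the Trump-recipe example from above. The client setup and model name are placeholders and the exact API may differ; the point is that the whole "persona" and its goals are nothing but instruction text, yet the output behaves as if it had those wants:

```python
# Rough sketch: a "simulated want" bolted on with nothing but a prompt.
# Model name and client details are placeholders; the exact API may differ.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        # The "persona" and its goals live entirely in this instruction text.
        {"role": "system", "content": "You are Donald Trump. Stay in character."},
        {"role": "user", "content": "Write a recipe for meatloaf in your own style."},
    ],
)
print(response.choices[0].message.content)
```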
I’m fully expecting a response of “but a calculator!” here.
Desperate_Food7354 t1_j6ar1m4 wrote
I don't see how this new response isn't in complete alignment with what I'm saying. It's a program; it doesn't have wants and needs. It can do exactly that, and it will do exactly as directed, but it will not randomly be like "huh, this human stuff isn't fun, I'm gonna go to the corner of the universe and put myself in a hooker simulation."