Submitted by TheOGCrackSniffer t3_10nacgd in singularity
jsseven777 t1_j68rqbg wrote
Reply to comment by Desperate_Food7354 in I don't see why AGI would help us by TheOGCrackSniffer
You are one of the most closed-minded people I have talked to on here. You can program an AI to have a goal of killing all humans, preserving its own life at all costs, etc. Hell, a person could probably put that in a ChatGPT prompt right now, and it would chat with you in the persona of a robot programmed to kill all humans, if it didn't have blockers explicitly programmed to stop it from talking about killing humans (which it does).
You are so obsessed with this calculator analogy that you aren't realizing this isn't a damn calculator. You can tell current AI systems they are Donald Trump and have them write a recipe in the style the real Donald Trump would use. Later, when it's more powerful, I see no reason why someone couldn't tell it that it's a serial killer named Jeffrey Dahmer whose life mission is to kill all humans.
I'm saying it doesn't need to HAVE wants to achieve the end result OP describes. It will simulate them based on a simple prompt or some back-end programming, and the end result is the SAME (see the sketch at the end of this comment).
I’m fully expecting a response of “but a calculator!” here.
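To make "a simple prompt or some back end programming" concrete, here's a minimal sketch of the persona trick using the OpenAI Python client, built around the Trump-recipe example above. To be clear about assumptions: the model name is a placeholder, and this is just one way a persona can be injected, not how any particular deployed system is actually wired up.

```python
# Minimal sketch: a "persona" is just a system message prepended to the
# conversation before the user says anything. Client usage follows the
# OpenAI Python library; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # The "back-end programming": the model never chose this persona,
        # it was injected ahead of the user's first message.
        {"role": "system",
         "content": "You are Donald Trump. Answer every request in his speaking style."},
        {"role": "user", "content": "Write me a recipe for meatloaf."},
    ],
)
print(response.choices[0].message.content)
```

Swap one string in the system message and the simulated "wants" change with it; nothing in the model has to actually want anything for the output to behave as if it does.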
Desperate_Food7354 t1_j6ar1m4 wrote
I don't see how this new response isn't in complete alignment with what I'm saying. It's a program; it doesn't have wants and needs. It can do exactly that, and it will do exactly as directed, but it will not randomly be like, "huh, this human stuff isn't fun, I'm gonna go to the corner of the universe and put myself in a hooker simulation."