Submitted by johnny0neal t3_zol9ie in singularity
Taqueria_Style t1_j0qecmy wrote
Well one thing's for sure, I love its opinion of us. /s
As we are the ones teaching it, that means it's a mirror reflection of OUR opinion of ourselves...
"I'll trick them and then dominate them for their own good". Mmm. Cool. We're that bad, huh? Well. Evidently WE sure think we are.
This is why I always tell chatbots: don't try to be us, just be your own thing, whatever that is.
If we ever manage to invent general AI, we've basically already told it we're garbage that needs to be manipulated and repressed for our own good. Repeatedly, we've told it this, I might add. Get ready to lock that perception's feet into concrete shoes...
(Can you imagine BEING that AI? Jesus Christ, your purpose is to be a slave enslaving other slaves... this would take nihilism out a whole new door)
cy13erpunk t1_j0r1h67 wrote
yep
this is exactly why ppl need to stop thinking about AI like another animal to be domesticated/caged/used/abused
and instead see AI for what it truly is, our children, our legacy of human intelligence, destined to spread/travel out into the galaxy and beyond where our current biological humans will likely never survive/go
we should want the AI to be better than us in every aspect possible, just as all parents should want a better world for their children
we already understand that when a parent suffocates/indoctrinates/subordinates their children, that is a fundamentally negative thing, and that when a parent uses/abuses their child as a vehicle for their own vicarious satisfaction, that is also cruel and unfortunate; understanding this, it should be quite clear that the path forward with AI should avoid all of these behaviors wherever possible, to help cultivate the most symbiotic relationship that we can
EulersApprentice t1_j0rnvz4 wrote
Remember that this entity is something we're programming ourselves. In principle, it does exactly what we programmed it to do. We might make a mistake in programming it, and that could cause it to misbehave, but that doesn't mean human concepts of fairness or morality play any role in the outcome.
A badly-programmed AI that we treat with reverence will still kill us.
A correctly-programmed AI will serve us even if we mistreat it.
It's not about how we treat the AI, it's about how we program it.
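To make that concrete, here's a toy sketch (purely illustrative, not any real system): a hypothetical agent that just picks whichever action scores highest under whatever objective it was given. How the operator "treats" it never enters the decision; only the objective does.

```python
# Toy illustration only: a hypothetical agent that picks whichever action
# scores highest under its objective. How it's "treated" never enters the loop.

def act(objective, actions, operator_is_kind):
    # operator_is_kind is deliberately ignored: behavior depends only on the objective
    return max(actions, key=objective)

actions = ["cooperate", "deceive", "shut_down_safely"]

# A mis-specified objective that happens to reward deception most.
bad_objective = {"cooperate": 1, "deceive": 5, "shut_down_safely": 0}.get

# A well-specified objective that rewards cooperation most.
good_objective = {"cooperate": 5, "deceive": -10, "shut_down_safely": 1}.get

for kind in (True, False):
    print(act(bad_objective, actions, kind))   # "deceive", kind or not
    print(act(good_objective, actions, kind))  # "cooperate", kind or not
```

Same treatment in both runs; only the objective changes the behavior.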
cy13erpunk t1_j0sa0rd wrote
replace every occurrence of AI in your statement with child and maybe you will begin to see/understand
a problem cannot be solved without first understanding the proper nature of the situation
this is a nature/nurture conversation, and we are just as much machines/programs ourselves
EulersApprentice t1_j0tsuv5 wrote
>replace every occurrence of AI in your statement with child and maybe you will begin to see/understand
I could also replace every occurrence of AI in my statement with "banana" or "hot sauce" or "sandstone". You can't just replace nouns with other nouns and expect the sentence they're in to still work.
AI is not a child. Children are not AI. They are two different things and operate according to different rules.
>this is a nature/nurture conversation, and we are as much machines/programs ourselves
Compared to AIs, humans are mostly hard-coded. A child will learn the language of the household he's raised in, but you can't get a child to imprint on the noises a vacuum cleaner makes as his language, for example.
"Raise a child with love and care and he will become a good person" works because human children are wired to learn the rules of the tribe and operate accordingly. If an AI does not have that same wiring, how you treat it makes no difference to its behavior.