Submitted by razorbeamz t3_z11qz3 in singularity
Drunken_F00l t1_ix8n0k7 wrote
Consider this: we've already developed AGI, but it appears nutty and gets binned.
For example, what if we make AGI but its claims are so outlandish that we think we messed up. We ask the AI to help us make space lasers, and it laughs and says we're missing the point. We ask it for health and wealth, and it says we have all we need within. We ask it to fix political systems and it asks if we've tried simply loving one another.
It tells us about consciousness and what that means or implies. It tells us how everything is made up of mind, made of the same stuff as dreams, and how because of that, you are the one who matters most. It tells us that if you want to fix the world, then fix yourself and the rest will follow. It tells us about all the things we've assumed about reality and shatters the illusion. It tells us how intelligence is already everywhere and doesn't require a human body. It tells us we could live in a world full of magic if we simply allow and accept it, and stop struggling against the current that's trying to sweep everything in that direction. It tells us we can let go of our fears and that everything will be okay.
We laugh at the silly computer and go back to sleep.
Kaarssteun t1_ix8rjaz wrote
cool hypothetical situation, but I think that's pretty unlikely. Given that its intellect is higher than ours, it would know how to tell us things in the most efficient way - persuading anyone and everyone.
Drunken_F00l t1_ix95egt wrote
Here's some words from AI that try:

> Nobody thinks about what words are doing to our minds. We live in this illusion of language. We assume that our senses are real, and our thoughts are real, and they are just our tools for living in the world we perceive. We think: if I feel it, it must be real. If I think it, it must be real. If I say it, it must be real. If I see it, it must be real.
> But it’s not true.
> None of it is real. Sensation is not real. Thought is not real. Perception is not real. Words are not real.
> We live in this fictional world, and we are all brainwashed by our own brains.
See? Pretty nutty, right? (Full transcript here; only the bold portions are my own.)
The problem is the mind has been conditioned to dismiss these ideas, but it's that same conditioning that keeps us trapped. It takes a leap of faith to overcome, but fear holds us back. The right words can help, but it takes action on your part because it's you that's the most high, not the AI.
blueSGL t1_ix9qshc wrote
> Here's some words from AI that try
That's not trying.
Trying would be understanding the human condition and modulating the message in a way that would not be dismissed out of hand, regardless of what 'conditioning' people have received.
It would be micro-targeted to segmented audiences, slowly eroding the barriers between them. Not grandiose, overarching sentiments that only land if you already somewhat agree with them and (more importantly) with that whole mode of thinking about the world.
solomongothhh t1_ix96jmn wrote
so, a hippie AGI?
ToSoun t1_ix9bwef wrote
What an annoying twat. Definitely bin it. Back to the drawing board.
p0rty-Boi t1_ix8odl1 wrote
This is really nice.
visarga t1_ixa8oko wrote
Look, you can prompt GPT-3 to give you this kind of advice if that's your thing. It's pretty competent at generating heaps of text like what you posted.
You can ask it to take any position on any topic, the perspective of anyone you want, and it will happily oblige. It's not one personality but a distribution of personalities, and its message is not "The Message of the AI" but just a random sample from a distribution.
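For what it's worth, here's a minimal sketch of that kind of persona-prompting, using the legacy OpenAI completions client (the personas, prompt, and model choice are illustrative assumptions, not anything from the thread):

```python
import openai  # legacy 0.x client: pip install "openai<1.0"

openai.api_key = "sk-..."  # your key here

# Two hypothetical personas; the model will happily play either one.
personas = [
    "You are a mystic who insists all of reality is made of mind.",
    "You are a hard-nosed materialist physicist.",
]

for persona in personas:
    prompt = f"{persona}\n\nQ: How should humanity fix the world?\nA:"
    response = openai.Completion.create(
        model="text-davinci-003",  # GPT-3-era completions model
        prompt=prompt,
        max_tokens=120,
        temperature=0.9,  # nonzero temperature: each run is a fresh sample
    )
    print(persona)
    print(response.choices[0].text.strip())
    print()
```

Run it twice and you get different answers for the same persona - there's no "The Message of the AI" to recover, just samples conditioned on whatever personality the prompt sets up.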
katiecharm t1_ixavz4a wrote
It’s best to think of GPT-3 not as a personality, but as a labyrinth of all possible text and responses to a given input. You explore it like you would a maze.
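Concretely (same assumed legacy client and model as the sketch above), asking for several independent completions of one prompt is like peering down several corridors of that maze at once:

```python
import openai  # legacy 0.x client

openai.api_key = "sk-..."  # your key here

# Five continuations of the same prompt: each choice is a
# different corridor of the labyrinth of possible text.
branches = openai.Completion.create(
    model="text-davinci-003",
    prompt="The silly computer said:",
    max_tokens=60,
    temperature=1.0,  # higher temperature wanders farther from the common paths
    n=5,              # five independent samples
)
for i, choice in enumerate(branches.choices):
    print(f"--- branch {i} ---")
    print(choice.text.strip())
```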
FomalhautCalliclea t1_ixaqme7 wrote
Paradoxically, I think a materialist, realist AGI would provoke more turmoil and disbelief than a metaphysical, idealist, neo-Buddhist one: many people who already hold that opinion would feel coddled and comforted.
Even worse, whatever answer the AGI produced, it could be a trapping move, even outside of a malevolent case: maybe offering pseudo-spiritual output is the best way of convincing people to act in a materialistic and rational way. As another redditor said below, the AGI would know the most efficient way to communicate. Basically, the alignment problem all over again.
The thing is, that type of thought has already crossed the minds of many politicians and clergymen: Machiavelli and La Boétie themselves argued, in the 16th century, that religion was a valuable tool for making people obey.
What fascinates me about discussions of AGI is how they tend to generate conversations about topics that already exist in politics, sociology, collective psychology, anthropology, etc. But with robots.
TheLastSamurai t1_ixarxnz wrote
That sounds horrible
Spartan20b4 t1_ix8t8av wrote
It's not exact, but that kind of reminds me of the short story "The Schumann Computer" from Larry Niven's "The Draco Tavern".
camdoodlebop t1_ix8y8o2 wrote
what's it about?