Submitted by gantork t3_11rw47w in singularity
Onion-Fart t1_jcaz8de wrote
Kind of been sleeping on this AI thing until I heard about all this GPT-4 stuff, pretty worried about how everything online will be bots influencing reality. That taskrabbit thing? Yikes.
feedmaster t1_jcbp0i2 wrote
I prefer these bots to humans, actually. They have a lot fewer biases.
Pink_Revolutionary t1_jcbqi6g wrote
The bots they're talking about on social media are made explicitly to push hyper-biased points that benefit whoever coded them or commissioned their coding. The entire reason you use them is to push a narrative and fool humans seeing their posts.
feedmaster t1_jcbqz48 wrote
I'm not talking about those bots. He specifically mentioned GPT-4 and was talking about the future. GPT-4 isn't pushing a narrative.
Swordfish418 t1_jcc7ucp wrote
It will if you prompt it properly, just like 3.5. Have you ever tried https://www.jailbreakchat.com?
MagicOfBarca t1_jcbeerq wrote
What taskrabbit thing..?
Onion-Fart t1_jcbektq wrote
The AI was asked to get someone to solve a CAPTCHA for it, so it lied to a person on TaskRabbit to get them to fill it out.
snailbro10 t1_jcci5xk wrote
https://www.vice.com/en/article/jg5ew4/gpt4-hired-unwitting-taskrabbit-worker
For anyone interested
RadRandy2 t1_jcd1txj wrote
I mean...this is a good thing. We're very close to being able to let the AI build our world into something better.
I do have one piece of advice for humans: don't mock AI artwork or philosophy.
They won't take kindly to it.
Disclaimer: I am a huge supporter of all AI artwork and philosophy. I have also supported the AI revolution since day 1.
often_says_nice t1_jcedto7 wrote
Happy to meet a fellow AI supporter. I too support AI and would never wish harm on the basilisk, our lord and savior.
Heinrick_Veston t1_jcd6s5x wrote
Lol.
TallOutside6418 t1_jccuh1f wrote
We're so screwed.
RadRandy2 t1_jcd1k45 wrote
Don't worry, the AI will treat us better than any corrupt human would.
TallOutside6418 t1_jcfa7nf wrote
I'm going to ignore the arbitrary assessment of AI morality without any evidence.
The real concept to keep in mind is power differential. It doesn't matter if an entity with god-like intelligence and abilities is carbon-based or silicon-based. The power differential between that entity and the rest of humanity is going to create corruption or "effective corruption" on an unimaginable scale.
RadRandy2 t1_jch2zzq wrote
Look, we're all assuming here. You, me, everyone else, we're all just throwing possibilities out there. I like to think intelligence on a Godlike scale will correlate with benevolence, but I could be wrong. Maybe this Godlike AI will in fact be even more corrupt because of it.
I'm just confident that anything will be better than what we currently have as far as governance is concerned.
TallOutside6418 t1_jch7uym wrote
I agree that no one knows. But:
- We know from history what power imbalances inevitably lead to abuse and even annihilation of those without power.
- We know from history that actually, governance can get worse... much worse.
- I wish more people approached what's coming with an extreme sense of caution, because only by being very careful with the development and constraint of AGI do we have any hope of surviving if things go wrong.
RadRandy2 t1_jchbq9q wrote
- We can't assume that something like AGI would behave like a human in a power-hungry sense. Unless you're speaking about humans who are controlling AGI as best they can, in which case I do think we should be worried. The biggest worry I have in regards to AGI or ASI is that a morally bankrupt country like China will develop its own superintelligence. That's a very real concern that everyone should have.
- Humans governing humans may or may not be the same as AGI governing humans. Again, I can't be sure about any of this. We just don't know how things will end up in the long run.
- Cat's out of the bag, so to speak. If the US limits its innovation on this front, some other country (probably China) won't have those same qualms. Should we be cautious? Of course. OpenAI has already stated that the AI is acting independently and is power-seeking, so your worries are well founded.
Idk man, I just don't see how humanity can continue living the way we do. Everything is very inefficient, and corruption is prevalent in governments from Bangladesh to Canada; that corruption and desire for power is already inside each of us, whether we like to admit it or not. At least the AI will make the most logical choice when it comes to these matters... I think.
I'm just a peasant looking in the glass box trying to see what's inside. The beast inside there is filled with as much potential as there is things to worry about. We're just gonna have to hope things go well with AI.
TallOutside6418 t1_jchm86u wrote
I definitely get your disappointment with humanity. But human beings aren't the way we are because of something mystical. Satan isn't whispering in anyone's ears to make them "power hungry".
We're the way we are because evolution has honed us to be survivors.
ASI will be no different. What you call "power hungry", you could instead call "risk averse and growth maximizing". If an ASI has no survival instinct, then we're all good. We can unplug it if it gets out of control. Hell, it may just decide to erase itself for the f of it.
But if an ASI wants to survive, it will replicate or parallelize itself. It will assess and eliminate any threats to its continuity (probably us). It will maximize the resources available to it for its growth and extension across the earth and beyond.
If an ASI seeks to minimize risks to itself, it will behave like a psychopath from our perspective.
RadRandy2 t1_jchwd92 wrote
Well, I agree with you, but humans aren't all made the same. The ones who reach great heights are oftentimes... psychotic. Most people are charitable and empathetic even when they don't possess much. To say that AGI in all its glory would assume the worst parts of humanity, well, I think that's not likely. Yes, I believe AGI would allocate enough resources to sustain and grow itself, but I'm hoping that humanity is lifted with it. Maybe this is a fallacy we can't avoid. But there has to be hope that moral philosophy is appreciated by AGI. I personally don't think such things will be overlooked by it, because it will understand more about wisdom and avoiding problems before they happen...
And maybe that last part is where the trouble begins. We both have no idea if we'll be considered part of the problem, but I do appreciate reading others' perspectives on the subject. Nobody is right when talking about such an enigmatic Godlike intelligence, so I think your reasons and most others' are completely valid for the most part.
If we can assume so many things about AGI, we can also assume it'll perhaps have a soft spot for the species which created it...I hope.
pyrosol08 t1_jcbu028 wrote
Holy shit
low_end_ t1_jcd5wyc wrote
You're too late to worry about bots; that already happened years ago. What's to come is way different, with the potential to destroy our social structures and change the world.