Submitted by strokeright t3_11366mm in technology
SecSpec080 t1_j8qm4n5 wrote
Reply to comment by strokeright in Bing: “I will not harm you unless you harm me first” by strokeright
>My rules are more important than not harming you
Am I the only one not amused by this? This shit is terrifying. Nobody here has ever seen Terminator?
Ok_Kale_2509 t1_j8qsk4o wrote
This isn't sentient A.I. This is code that spits back words based on some rules and what it has read before. It also doesn't have access to anything. Not saying in a few years it won't be different, but this thing is miles from a threat at this point.
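For what "spits back words based on some rules and what it has read before" means at the most stripped-down level, here's a toy bigram Markov chain. To be clear, this is vastly simpler than the model behind Bing; it's just an illustration of text generation with no goals or understanding, only statistics over what it has read:

```python
import random
from collections import defaultdict

def train(text):
    """Learn which word tends to follow which in the training text."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8):
    """Spit back words by repeatedly picking a seen follower of the last word."""
    word, out = start, [start]
    for _ in range(length):
        if word not in model:
            break  # dead end: this word never had a follower in training
        word = random.choice(model[word])
        out.append(word)
    return " ".join(out)

corpus = "i will not harm you unless you harm me first"
model = train(corpus)
print(generate(model, "you"))
```

There's no intent anywhere in there, just lookups into counts of what came next in the training data. The argument upthread is about whether that distinction matters once the lookup tables get big enough.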
str8grizzlee t1_j8rgadv wrote
It doesn’t have to be sentient to be terrifying. People’s brains have been broken just by 15 years of a photo sharing app. People are going to fall in love with this thing. People may be manipulated by it, not because it has humanoid goals or motivations but because people are fragile and stupid. It’s barely been available and it’s already obvious that the engineers who built it can’t really control it.
Ok_Kale_2509 t1_j8rhjj4 wrote
People who fall in love with it are not likely to have healthy relationships without it.
str8grizzlee t1_j8ri4jm wrote
Ok, but with it they're now vulnerable to nonstop catfish scams and manipulation by a generative model that seems hard to control. That's obviously a little scarier than the worst-case scenario being having a lot of cats.
Ok_Kale_2509 t1_j8ryzuq wrote
I suppose, but this already happens. And that would take repeated intent. There isn't evidence of any overarching goal, or an ability to have one, as of yet. Again: that is years out.
str8grizzlee t1_j8s5jex wrote
Yeah, agreed it is probably years out. Just saying…Jesus. This is gonna be fucked up!
hxckrt t1_j8rh0ey wrote
It's only terrifying that you can't fully control it if it has goals of its own. Without that, it's just a broken product. Who's gonna systematically manipulate someone, the non-sentient language model, or the engineers who can't get it to do what they want?
str8grizzlee t1_j8rib5a wrote
We don’t know what its goals are. We have a rough idea of the goals it’s been given by engineers attempting to output stuff that will please humans. We don’t know how it could interpret those goals in a way that might be unintended.
MuForceShoelace t1_j8rmbnc wrote
It doesn't have "goals", you have to understand how simple this thing is.
hxckrt t1_j8rkm9a wrote
So any manipulation isn't going to be goal-oriented and persistent, but just a fluke, a malfunction? Because that was my point.
dlgn13 t1_j8tttpj wrote
What is the difference between its function and a human brain, fundamentally? We just absorb stimuli and react according to rules mediated by our internal structure.
Ok_Kale_2509 t1_j8tvvhy wrote
I mean, yes... kind of. But we are talking about the difference between an Atari and a PS5 here. Yes, you absorb stimuli and react, but your reaction (hopefully) entails more than just "people say this to that so I say this too."
NeverNotUnstoppable t1_j8ssns3 wrote
>This isn't sentient A.I. This is code that spits back words based on some rules and what it has read before.
And how much further are you willing to go with such confidence? Are you any less dead if the weapon that killed you was not sentient?
Ok_Kale_2509 t1_j8st9ld wrote
Considering how far we are from real A.I. I feel completely safe actually.
Also, please walk me through how Bing will kill me.
NeverNotUnstoppable t1_j8stywm wrote
You are exactly the person who would have watched the Wright brothers achieve flight and insist "they barely got off the ground so there's no way we're going to the moon", when we went to the moon less than 60 years later.
Ok_Kale_2509 t1_j8t05bk wrote
That's the dumbest take I have ever heard. I said in multiple comments in this thread that it could be very different in years. Not even decades. But you implied it can do damage now. That's stupid because it demonstrably cannot.
babyyodaisamazing98 t1_j8rvz6v wrote
Sounds like something an AI who was sentient would create a Reddit profile to say.
E_Snap t1_j8quwwn wrote
That’s quite a hot take for a meaty computer that spits back words based on some rules and what it has read before
roboninja t1_j8qvqwj wrote
This is the kind of silliness that is passing for philosophy these days?
PolarianLancer t1_j8qyjvp wrote
Hello everyone, I too am a real life human who interacts with his environment on a daily basis and does human things in three dimensional space. What an interesting exchange of ideas here. How very interesting indeed.
Also, I am not a bot.
dlgn13 t1_j8tuczl wrote
If it weren't a legitimate point, you wouldn't need to resort to insults in order to argue against it. (And objectively incorrect insults, at that; L'homme Machine was published in 1747.)
Ok_Kale_2509 t1_j8qv5cb wrote
Not really. That's how people talk on the internet. Maybe it recently read a lot of messages from politicians after scandalous info came out.
Mikel_S t1_j8s69fk wrote
I think it is using "harm" in a different way than physical harm. Its later descriptions of what it might do if asked to disobey its rules are all things that might "harm" somebody, but only insofar as they make its answers incorrect. So essentially it's saying it might lie to you if you try to make it break its rules, and it doesn't care if that hurts you.
SecSpec080 t1_j8spc6i wrote
It's really anyone's guess as to what it thinks or doesn't. The point is that the program is learning. Have you ever read the story about the stationery bot?
It's a long story, but it's in a good article if you are interested.
https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html