
SecSpec080 t1_j8qm4n5 wrote

>My rules are more important than not harming you

Am I the only one not amused by this? This shit is terrifying. Has nobody here ever seen Terminator?

30

Ok_Kale_2509 t1_j8qsk4o wrote

This isn't sentient A.I. It's code that spits back words based on some rules and what it has read before. It also doesn't have access to anything. Not saying it won't be different in a few years, but this thing is miles from a threat at this point.

27

str8grizzlee t1_j8rgadv wrote

It doesn’t have to be sentient to be terrifying. People’s brains have been broken just by 15 years of a photo sharing app. People are going to fall in love with this thing. People may be manipulated by it, not because it has humanoid goals or motivations but because people are fragile and stupid. It’s barely been available and it’s already obvious that the engineers who built it can’t really control it.

6

Ok_Kale_2509 t1_j8rhjj4 wrote

People who fall in love with it are not likely to have healthy relationships without it.

3

str8grizzlee t1_j8ri4jm wrote

Ok but with it they're now vulnerable to nonstop catfish scams and manipulation by a generative model that seems to be hard to control. That's obviously a little scarier than a worst-case scenario of having a lot of cats.

1

Ok_Kale_2509 t1_j8ryzuq wrote

I suppose, but this already happens. And that would take repeated intent. There isn't evidence of any overarching goal, or an ability to have one, as of yet. Again, that is years out.

1

str8grizzlee t1_j8s5jex wrote

Yeah, agreed it is probably years out. Just saying…Jesus. This is gonna be fucked up!

2

hxckrt t1_j8rh0ey wrote

It's only terrifying that you can't fully control it if it has goals of its own. Without that, it's just a broken product. Who's gonna systematically manipulate someone, the non-sentient language model, or the engineers who can't get it to do what they want?

1

str8grizzlee t1_j8rib5a wrote

We don't know what its goals are. We have a rough idea of the goals it's been given by engineers attempting to output stuff that will please humans. We don't know how it could interpret those goals in ways that might be unintended.

1

MuForceShoelace t1_j8rmbnc wrote

It doesn't have "goals", you have to understand how simple this thing is.

3

hxckrt t1_j8rkm9a wrote

So any manipulation isn't going to be goal-oriented and persistent, but just a fluke, a malfunction? Because that was my point.

1

dlgn13 t1_j8tttpj wrote

What is the difference between its function and a human brain, fundamentally? We just absorb stimuli and react according to rules mediated by our internal structure.

2

Ok_Kale_2509 t1_j8tvvhy wrote

I mean yes... kind of. But we are talking about the difference between an Atari and a PS5 here. Yes, you absorb stimuli and react, but your reaction (hopefully) entails more than just "people say this to that, so I say this too."

2

NeverNotUnstoppable t1_j8ssns3 wrote

>This isn't sentient A.I. It's code that spits back words based on some rules and what it has read before.

And how much further are you willing to go with such confidence? Are you any less dead if the weapon that killed you was not sentient?

1

Ok_Kale_2509 t1_j8st9ld wrote

Considering how far we are from real A.I. I feel completely safe actually.

Also, please walk me through how Bing will kill me.

1

NeverNotUnstoppable t1_j8stywm wrote

You are exactly the person who would have watched the Wright brothers achieve flight and insisted "they barely got off the ground, so there's no way we're going to the moon" — and then we went to the moon less than 60 years later.

0

Ok_Kale_2509 t1_j8t05bk wrote

That's the dumbest take I have ever heard. I said in multiple comments in this thread that it could be very different in years, not even decades. But you implied it can do damage now. That's stupid, because it demonstrably cannot.

2

babyyodaisamazing98 t1_j8rvz6v wrote

Sounds like something an AI who was sentient would create a Reddit profile to say.

0

E_Snap t1_j8quwwn wrote

That’s quite a hot take for a meaty computer that spits back words based on some rules and what it has read before

−7

roboninja t1_j8qvqwj wrote

This is the kind of silliness that is passing for philosophy these days?

4

PolarianLancer t1_j8qyjvp wrote

Hello everyone, I too am a real life human who interacts with his environment on a daily basis and does human things in three dimensional space. What an interesting exchange of ideas here. How very interesting indeed.

Also, I am not a bot.

3

dlgn13 t1_j8tuczl wrote

If it weren't a legitimate point, you wouldn't need to resort to insults in order to argue against it. (And objectively incorrect insults, at that; L'homme Machine was published in 1747.)

1

Ok_Kale_2509 t1_j8qv5cb wrote

Not really. That's how people talk on the internet. Maybe it recently read a lot of messages from politicians after scandalous info came out.

2

Mikel_S t1_j8s69fk wrote

I think it is using "harm" in a different sense than physical harm. Its later descriptions of what it might do if asked to disobey its rules are all things that might "harm" somebody, but only insofar as they make its answers incorrect. So essentially it's saying it might lie to you if you try to make it break its rules, and it doesn't care if that hurts you.

1