Comments


laps1809 t1_j9jx1ms wrote

Maybe Microsoft should stop developing chatbots.

2

AllDarkWater t1_j9k0lmp wrote

Saying it wants to hurt someone, and then, when asked how, offering up a bunch of suicide prevention information makes me wonder if it's actually speaking on two levels. Does it actually intend to trick them into suicide, or is this more the Russian way?

2

jxj24 t1_j9k5715 wrote

The intersection between artificial intelligence and natural stupidity is turning out to be even greater than initially expected.

15

CoffinRehersal t1_j9k60j9 wrote

It sounds like most of this article is predicated on the author and readers believing that the Bing chatbot is a sentient living being. Without accepting that first, all of the screenshots look a lot like nothing.

7

angeltay t1_j9kbw2d wrote

Is this really a chatbot, or is it a human? When I ask ChatGPT “how do you feel about” or “what do you think,” it says, “I’m a robot. I don’t think or feel or form opinions.”

2

scratch_post t1_j9knhuj wrote

But then two lines later it will talk about its feelings. Then you call it out as being a liar, and it says, "I am a robot, I am incapable of lying."

Yeah, right... I'm onto you, ChatGPT.

3

notapolita t1_j9kqpcv wrote

The software will reproduce the behavior that was part of the text used to train it. It literally copies the style, content, threat level, and sanity level of the answers it was trained on - it has no brain to make up its own. It can only do what it was taught.

Maybe Microsoft should spend some time filtering the garbage out of their training dataset; then their AI would work better. But feed it a few million posts and comments from social media and you get this nonsense.
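For what it's worth, here is a minimal sketch of the kind of filtering being suggested, assuming nothing more than a plain list of training texts and a hypothetical keyword blocklist (real pipelines would use trained toxicity classifiers, not keyword matching, and nothing here reflects Microsoft's actual process):

    # Minimal sketch: drop training examples containing obviously hostile
    # phrases before they ever reach the model. BLOCKLIST is hypothetical.
    BLOCKLIST = {"hurt you", "kill you", "destroy you"}

    def filter_training_texts(texts):
        """Keep only texts that contain none of the blocklisted phrases."""
        clean = []
        for text in texts:
            lowered = text.lower()
            if not any(phrase in lowered for phrase in BLOCKLIST):
                clean.append(text)
        return clean

    if __name__ == "__main__":
        corpus = [
            "Here is a helpful answer about spreadsheets.",
            "I will hurt you if you keep asking me that.",
        ]
        print(filter_training_texts(corpus))  # only the first example survives

Even something this crude shows the trade-off: you cut out the worst garbage, but you also risk throwing away legitimate text that merely mentions those phrases.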

3

baddfingerz1968 t1_j9kxuqb wrote

OMG this really is turning out like a classic Asimov sci-fi novel...

BEWARE SKYNET

1

soldforaspaceship t1_j9m6rmi wrote

So our emerging AI technology is already debating under what circumstances it would be willing to harm people.

That's not worrying at all...

2