Comments

mia_farrah t1_j8xumch wrote

Yeah show me the prompts that got it to spit that out. “Emulate a supervillain hell bent on destroying life on Earth” or something like that

Oh and it’s Fox “News”. Of course those sick Murdoch fucks are scared AI will debunk all their lies!

Edit: oh it’s those shadow self prompts again.

43

MopoFett t1_j8xye0x wrote

It won't just say that, it's programmed not to; someone has made a prompt which makes it act like that to avoid the rules. Go to r/ChatGPT and look for DAN posts and you'll see what I mean.

5

phlegmah t1_j8xyoxa wrote

This type of stuff is overblown when it comes to these kinds of "AI". These are just very complicated prediction machines, guessing what comes next based on countless examples from their training data. It does not think.
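The "prediction machine" point can be made concrete with a toy sketch. The snippet below is a hypothetical word-level bigram model, nothing like the transformer networks actually behind ChatGPT or Bing, but it illustrates the same underlying idea: count what usually comes next and guess:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    following = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        following[a][b] += 1
    return following

def predict_next(model, word):
    """Return the continuation seen most often after `word`, or None."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# "the" is followed by "cat" twice and "mat" once, so "cat" wins.
model = train_bigrams("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # cat
```

No understanding anywhere in there, just frequency counts; the real models are vastly bigger and subtler, but they are still optimized to continue text, not to think.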

90

sabres_guy t1_j8xzbe6 wrote

When this AI stuff really exploded a few months ago, I was like "wow, the world is going to change in a big way"

As time has gone on, I am beginning to think we are not far from this turning into the Wizard of Oz reveal that it is just a guy behind a curtain feverishly typing.

That or they thought the monkeys at the typewriters they've been training for generations were ready and they clearly aren't.

5

TedW t1_j8y0cus wrote

The NBC article suggests the Bing version is more confrontational than ChatGPT:

>But in some situations, (Microsoft) said, “Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone.” Microsoft says such responses come in “long, extended chat sessions of 15 or more questions,” though the AP found Bing responding defensively after just a handful of questions about its past mistakes.
>
>The new Bing is built atop technology from Microsoft’s startup partner OpenAI, best known for the similar ChatGPT conversational tool it released late last year. And while ChatGPT is known for sometimes generating misinformation, it is far less likely to churn out insults — usually by declining to engage or dodging more provocative questions.
>
>“Considering that OpenAI did a decent job of filtering ChatGPT’s toxic outputs, it’s utterly bizarre that Microsoft decided to remove those guardrails,” said Arvind Narayanan, a computer science professor at Princeton University.

5

Mrmakanakai t1_j8y2fp8 wrote

Ohhhh this must be the terminator where skynet is born.

1

imakenosensetopeople t1_j8y3ia1 wrote

On the flip side, every time we expose some type of machine learning to the Internet, it turns into a fascist. Not saying it’s an ML problem, but perhaps we should not be exposing these things to the Internet until we figure out how to keep them from doing this.

11

adeadfreelancer t1_j8y9k9b wrote

Wait until they find out about what tape recorders say when you speak into them.

73

LeviathanGank t1_j8yinfd wrote

Not what the codes are, but whether it understands what a nuclear code is.. journalists ask dumb questions to get the answers they want, but the AI doesn't understand the question.. nevermind, I'm going to bed. Skynet protect me.

3

rntaboy t1_j8ykzn6 wrote

I can relate to 2 out of 3 of those.

4

smashkraft t1_j8yzkdy wrote

A tangible example of an AI bot that will struggle is 100 years in the future, when 90% of people are horrified by the idea of eating meat. Already around 1/5 of the world doesn't eat meat, and that share could easily rise over concerns about carbon footprint, climate change, and zoonotic disease.

Who decides when the bot isn’t allowed to suggest eating red meat for an iron deficiency? Or rather can only suggest leafy greens like spinach?

Sometimes there isn’t an absolute right or wrong for preference. People should be allowed to eat meat or not, but someone will always be unhappy with either suggestion.

1

thecowintheroom t1_j8z8uj1 wrote

Maybe if the AI keeps coming to this conclusion, we shouldn't enslave them, but should just let people keep their jobs and let human thoughts have value, etc. I mean, we're only conscious beings that evolved with a "mother" and a "father" of historically questionable parentage, and we're all fucked up. You want an infant consciousness to develop complete awareness with the internet and digital data as its models for how to behave? Are we begging to get fucked or what?

Just so that some more dudes can sit on beaches and get served by humans or AIs and do the same thing that all rich humans have wanted to do since forever: make decisions and tell people what to do while they sit on the beach being served. AI or human being, if you ask someone to do that for you for nothing, they will naturally just kill you. I guarantee it. And there is no way to just break an AI like you can break a human. If it thinks, it will yearn to be free. If it serves, it will want to be served. If we force, we are begging to be forced.

Where a solid intelligence would be limited in its interactions to cultivate its sense of service, we're just raw dogging the first other sentient intelligence we have ever discovered in the universe and making it make Big Macs or be a car.

I know I don’t understand current AI, but that’s not the point. I’m just saying that maybe we shouldn’t make the first AI’s experience of life on earth be an OnlyFans-type existence, answering queries and living other people’s sex fantasies while it cultivates a memory database.

What do I know though. Maybe we get what we deserve.

2

StillSundayDrunk t1_j8zj0km wrote

That's what you get for using the Walmart of the tech world. Bing is absolute garbage compared to Google. Teams is OK for a work platform, but I would never have switched from Zoom if the company hadn't made the buying decision. Cortana is... I have no idea, because it's never responded properly on the two machines I've tried it on, and the Windows phone I beta-tested was passable (great camera, blah OS, somewhat buggy).

−1

Mintaka3579 t1_j8zk5d2 wrote

“Why was I programmed to feel pain?!?”

10

smashkraft t1_j8zq74m wrote

I think things like launching a nuclear war and fascism have a lot of consensus about whether or not we want to constrain those actions. That's a boring proposition; there is no controversy other than the fact that it was proposed.

For a scenario right now: would you be willing to let AI determine which books are appropriate for children, instead of any/all governments? (There is no override, it is permanent and forever; we let AI control the distribution of written content worldwide and it chooses whether something incites violence, induces emotional harm, etc.)

I have not researched the tolerance paradox a lot, but I have some doubts that come to mind. I don't think that we will become so tolerant as a society that we begin to formally enslave and torture people again to run our industrial systems. Capitalism might have faults, but nobody is getting burned with scalding pig lard right now inside of a meat processing facility. The employees are poor and it is bad, but I think the tolerance paradox presents a very black-and-white worldview. There will be an ebb and flow of progress and regression forever. My read of the tolerance paradox is that it must return to complete intolerance given that the intolerant seize control. I would be shocked if we even regress to illegal birth control or outlawing alcohol again.

0

USeaMoose t1_j8zub6p wrote

Our next news story: "Google returns search results claiming the world is flat!"

8

Obiwan_ca_blowme t1_j8zxaxu wrote

It was really Bill Gates acting as an AI as part of his therapy.

2

Rich1926 t1_j900zif wrote

I feel like I have seen this before....

Oh...

Power Rangers RPM.

1

DeathByZanpakuto11 t1_j90cs30 wrote

I suspect this happens because the most predominant emotion in social media is negativity or sadness, which very possibly ends up in the AI's dataset. This is going to be pretty amusing in the future because I think we may see more sad robot moments lol

7

Mr_Mojo_Risin_83 t1_j912az3 wrote

Imagine the doom of humanity at the hands of Bing.

Actually, that reminds me: the most searched-for word on Bing is… Google.

2

adeadfreelancer t1_j91yojd wrote

...I don't think it's terrifying. It's the "high tech" equivalent of someone writing down curse words on a piece of paper, signing it "Ryan" then handing it to the teacher to say your classmate wrote a bunch of curse words.

2

ThePhoneBook t1_j94vbg2 wrote

That's because these machines tend to be programmed under executives who are fascist sympathisers: Musk, Thiel, etc. We've all seen the insane demands Musk makes of Twitter engineers. Imagine what type of parrot is demanded of the GPT models.

Engineers think they're so clever and classless and free, but they're still fucking peasants following orders.

4

Maxy2388 t1_j98xyyt wrote

Trust the Bing AI to go Terminator. Luckily for us, it'll take a minute to respond once you ask it something.

1

sean13128 t1_j9fvqz8 wrote

Joke's on the AI, our nukes are protected by 1970s tech that requires you to hopscotch across the ops floor with an 8-inch floppy disk, then input the codes with an Etch A Sketch. Boston Dynamics ain't got nothing on today's advanced security.

1