Submitted by Ssider69 t3_11apphs in technology
Effective-Avocado470 t1_j9tljlq wrote
Reply to comment by Ssider69 in Microsoft Bing AI ends chat when prompted about 'feelings' by Ssider69
Not just the devs, but the input datasets. You're right that the devs curate those, but if they aren't careful the model can go bad without anyone even trying to be malicious
PacmanIncarnate t1_j9tnhx9 wrote
When you start asking an AI about feelings, it falls back on the training data that talked about feelings; probably a lot of that is text about AI and feelings, which is almost completely negative ("AI will destroy the world"), so that's what you get.
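A toy sketch of that effect, nothing like Bing's actual pipeline; the corpus, labels, and sampling below are entirely made up:

```python
# Toy illustration: a model's "opinion" on a topic just reflects the
# training snippets that mention it. If texts pairing "AI" with "feelings"
# are mostly doom-y, the sampled answer is mostly doom-y too.
import random
from collections import Counter

corpus = [
    ("AI will destroy the world", "negative"),
    ("the robot uprising is coming", "negative"),
    ("machines cannot be trusted with emotions", "negative"),
    ("AI assistants can be genuinely helpful", "positive"),
]

def answer_about(topic: str) -> str:
    """Sample a 'response' weighted by how often each sentiment appears."""
    sentiments = Counter(label for _, label in corpus)
    # A real model learns a smooth distribution over continuations;
    # weighted random choice is the crudest possible stand-in for that.
    label = random.choices(list(sentiments), weights=sentiments.values())[0]
    snippets = [text for text, lbl in corpus if lbl == label]
    return random.choice(snippets)

print(answer_about("AI feelings"))  # usually prints one of the doom-y snippets
```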
It would be cool if the media could just try to use the technology for what it is instead of trying to find gotcha questions for it. I didn’t see anyone trying to use the original iPhone as a Star Trek style tricorder and complaining about how it didn’t diagnose cancer.
Effective-Avocado470 t1_j9tnss5 wrote
But that's inevitable with technology. People will use it however they can, not however it was designed to be used
The printing press and the internet both had a similarly insane impact on society when they first came around
PacmanIncarnate t1_j9tsnf5 wrote
There's just so much clickbait garbage misinforming people about this tech, and it wasn't always like this. Every cool new technology gets piled on, not for what it is, but for what will anger people. This sub alone seems to get at least one article a day asking whether chatGPT wants to kill you/your partner/everyone. I'm all for exploring the crazy things you can make AI say, but it's being presented as a danger to society when it's just saying the words it thinks you want. And that fear-mongering has actual downsides, as this article attests: companies are afraid to release their models; they're wasting resources censoring output; and companies that want to use the new tech are reluctant to because of the irrational public backlash.
Effective-Avocado470 t1_j9u2mbj wrote
That's not what I'm worried about; you're right about how people are jumping on the wrong things right now.
The danger is the potential for a malicious propaganda machine to be constructed with these tools and deployed by anyone
PacmanIncarnate t1_j9ug9wn wrote
But we already have malicious propaganda machines, and they aren't even that expensive to use. That's ignoring the fact that propaganda doesn't need to be sophisticated in any way to be believed by a bunch of people; we live in a world where anti-vaxxers and flat-earthers regularly twist information to support their irrational beliefs. Marjorie Taylor Greene recently posted a tweet in which she used three made-up numbers to support her argument. There isn't anything chatGPT or Stable Diffusion or any other AI can do to our society that isn't already being done at large scale with regular existing technology.
Effective-Avocado470 t1_j9ujxpo wrote
It's scale; that's what makes AI so scary. You can use exactly the same propaganda techniques, but put out 1000x more content, all auto-generated. Entire fake comment threads online.
Then they can make deepfaked content that says whatever they want. They could convince the world that the president has started a nuclear war, for example: deepfake an address, etc. And that's just one example
Our entire view of reality and truth will change
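A hedged sketch of the scale point: even without a real LLM (you'd swap one in where the stub below sits), a few templates and a loop can flood a comment section. All names and phrases here are made up.

```python
# Template-based comment flooding: the crudest version of the scale problem.
import random

openers = ["Honestly,", "As someone who works in this field,", "Not gonna lie,"]
claims  = ["this policy is a disaster", "the media is hiding the real story",
           "everyone I know agrees with this"]
closers = ["Wake up, people.", "Do your own research.", "Just my two cents."]

def fake_comment() -> str:
    # Swap this body for an LLM call and the output becomes far harder
    # to recognize as templated.
    return " ".join(random.choice(part) for part in (openers, claims, closers))

thread = [fake_comment() for _ in range(1000)]  # an entire fake thread, instantly
print(len(thread), "comments;", len(set(thread)), "distinct")
```

The distinct-count line is the tell: pure templates repeat themselves, which is exactly the fingerprint that generative models erase.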
PacmanIncarnate t1_j9utde0 wrote
We've had publicly available deepfake tech for several years now and it has largely been ignored, other than the occasional news story about deepfake porn. The VFX industry was able to make a video of Forrest Gump talking to Nixon decades ago. Since then, few people have taken the time to use that tech for harm. It's just unnecessary: if you want someone to believe something, you generally don't have to convince them, you just have to say it and get someone else to back you up. Even better if it confirms someone's beliefs.
I guess I just think our view of reality and truth is already pretty broken and it didn’t take falsified data.
Effective-Avocado470 t1_j9uu68n wrote
It's still new. The tech isn't quite perfect yet; you can still tell it's fake, so it's mostly jokes for now. The harm will come when you really can't tell the difference. It'll be here sooner than you think, and you may not even notice it happening until it's too late
I agree that many people's grasp on reality is already slipping; I'm agreeing with you on what's happened so far. I'm saying it'll get even worse with these new tools
Even rational and intelligent people will no longer be able to discern the truth
Justin__D t1_j9viizo wrote
> trying to find gotcha questions for it.
That's QA's job.
> Microsoft
Oh.
drawkbox t1_j9tmxme wrote
Yeah, devs aren't really in control when they feed in the datasets. Over time there will be manipulation/pollution of datasets, whether deliberate or unwitting, and it can have unexpected results. Any system that really needs to be logical should think hard about whether it wants that attack vector. For things like idea generation this may be fine; for standard data retrieval or decision trees that carry liability, probably not.
The Unity game engine has an ad network this happened to: one quarter their ad targeting was way out of whack, and it was traced back to bad datasets. AI can be a business risk; it caused real revenue issues. We're going to be hearing more and more of these stories.
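A minimal sketch of one mitigation for that attack vector: validate each incoming training batch against a trusted baseline before the model ever sees it. The thresholds, feature, and numbers below are illustrative, not Unity's.

```python
# Crude dataset-pollution guard: quarantine batches that drift too far
# from historical statistics instead of training on them blindly.
from statistics import mean, stdev

def batch_looks_polluted(baseline: list[float], batch: list[float],
                         max_shift_sigmas: float = 3.0) -> bool:
    """Flag a batch whose mean drifts too far from the historical baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(mean(batch) - mu) > max_shift_sigmas * sigma

baseline_ctr = [0.031, 0.029, 0.033, 0.030, 0.032]  # historical click-through rates
suspect_ctr  = [0.090, 0.110, 0.095]                # today's feed, suspiciously high

if batch_looks_polluted(baseline_ctr, suspect_ctr):
    print("Quarantine this batch for human review; don't train on it.")
```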
The Curious Case of Unity: Where ML & Wall Street Meet
> One of the biggest game developers in the world sees close to $5 billion in market cap wiped out due to a fault in their ML models