Submitted by Ssider69 t3_11apphs in technology
PacmanIncarnate t1_j9tnhx9 wrote
Reply to comment by Effective-Avocado470 in Microsoft Bing AI ends chat when prompted about 'feelings' by Ssider69
When you start asking an AI about feelings, it falls back on the training data that talked about feelings, and for AI and feelings that's probably mostly negative “AI will destroy the world” material, so that's what you get.
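You can see this for yourself with a minimal sketch, assuming the Hugging Face transformers library and GPT-2 as a small stand-in for Bing's much larger chat model; the "feelings" in the output are just continuations drawn from patterns in the training text:

```python
# Illustrative only: GPT-2 standing in for a much larger chat model.
# The completions reflect whatever the training data associated with the
# prompt, not any actual feeling on the model's part.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)

for result in generator(
    "When you ask an AI how it feels,",
    max_new_tokens=40,
    do_sample=True,
    num_return_sequences=3,
):
    print(result["generated_text"])
```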
It would be cool if the media could just try to use the technology for what it is instead of trying to find gotcha questions for it. I didn’t see anyone trying to use the original iPhone as a Star Trek-style tricorder and complaining about how it didn’t diagnose cancer.
Effective-Avocado470 t1_j9tnss5 wrote
But that's inevitable with technology. People will use it however they can, not however it was designed to be used
The printing press and the internet both had a similarly insane impact on society when they first came around
PacmanIncarnate t1_j9tsnf5 wrote
There’s just so much clickbait garbage misinforming people about this tech, and it wasn’t always like this. Every cool new technology gets piled on, not for what it is, but for whatever will anger people. This sub alone seems to get at least one article a day asking whether chatGPT wants to kill you/your partner/everyone. I’m all for exploring the crazy things you can make AI say, but it’s being presented as a danger to society when it’s just saying the words it thinks you want. And that fear-mongering has real downsides, as this article attests: companies are afraid to release their models; they’re wasting resources censoring output; and companies that want to use the new tech are reluctant to because of the irrational public backlash.
Effective-Avocado470 t1_j9u2mbj wrote
That's not what I'm worried about; you're right that people are jumping on the wrong things rn.
The danger is the potential for a malicious propaganda machine to be constructed with these tools and deployed by anyone
PacmanIncarnate t1_j9ug9wn wrote
But we already have malicious propaganda machines and they aren’t even that expensive to use. That’s ignoring the fact that propaganda doesn’t need to be sophisticated in any way to be believed by a bunch of people; we live in a world where anti-vaxxers and flat earthers regularly twist information to support their irrational beliefs. Marjorie Taylor Greene recently posted a tweet in which she used three made-up numbers to support her argument. There isn’t anything chatGPT or Stable Diffusion or any other AI can do to our society that isn’t already being done on a large scale with regular existing technology.
Effective-Avocado470 t1_j9ujxpo wrote
It’s the scale that makes AI so scary. You can use exactly the same propaganda techniques, but put out 1000x more content, all auto-generated: entire fake comment threads online.
Then they can make deepfaked content that says whatever they want. They could convince the world that the president has started a nuclear war, for example; deepfake an address, etc. And that’s just one example.
Our entire view of reality and truth will change
PacmanIncarnate t1_j9utde0 wrote
We’ve had publicly available deepfake tech for several years now and it has largely been ignored, other than the occasional news story about deepfake porn. The VFX industry was able to make a video of Forrest Gump talking to Nixon decades ago. Since then, few people have taken the time to use that tech for harm. It’s just unnecessary: if you want someone to believe something, you generally don’t have to convince them; you just have to say it and get someone else to back you up. Even better if it confirms someone’s beliefs.
I guess I just think our view of reality and truth is already pretty broken and it didn’t take falsified data.
Effective-Avocado470 t1_j9uu68n wrote
It's still new. The tech isn't quite perfect yet; you can still tell it's fake. So it's mostly jokes for now. The harm will come when you really can't tell the difference. It'll be here sooner than you think, and you may not even notice it happening until it's too late.
I agree that many people's grasp on reality is already slipping; I'm agreeing with you on what's happened so far. I'm just saying it'll get even worse with these new tools.
Even rational and intelligent people will no longer be able to discern the truth
Justin__D t1_j9viizo wrote
> trying to find gotcha questions for it.
That's QA's job.
> Microsoft
Oh.