Submitted by Ssider69 t3_11apphs in technology
Ssider69 OP t1_j9t9hqv wrote
>The chatbot generated a response to an Associated Press reporter that compared them to Hitler, and displayed another response to a New York Times columnist that said, "You're not happily married" and "Actually, you're in love with me."
Welcome to the age of AI stalkers
Effective-Avocado470 t1_j9tkfhu wrote
It's scary, but of course the AI isn't actually aware; it's just mimicking humans. So behavior from Twitter, reality TV, and shit movies is what AIs will parrot back to us when pushed on similar points
drawkbox t1_j9tl1kv wrote
Yeah, it isn't human. There's really no such thing as "being human" if you aren't human. We assign human-like qualities to things, and when there are enough of them, the thing seems alive. Basically we are Calvin and AI is Hobbes; there's a lot of imagination involved... even in how we just assigned life to Calvin and Hobbes.
Being human involves a kind of irrationality and uniqueness that AI probably shouldn't aim for; it would be too biased. So assigning human qualities to AI is really people seeing what they want to see. You can already watch people finding bias in it, usually tuned to their own bias.
Though in the end we'll have search engines that query many AI datasets, each of which could be seen as an "individual". These "individual" AIs could also train against one another, like a GAN. There will probably be interesting games of polluting or manipulating one dataset from another, almost like a real person meeting another person and having their thinking changed forever. Some things are immutable: one-way, read-only after write.
Effective-Avocado470 t1_j9tlca0 wrote
Don't get me wrong, I believe AI may well eventually become conscious, much like Data in Star Trek, but we are still a long way from that.
The scary thing is these current AI will synthesize the worst of us into a powerful weapon of ideas and messaging. Combine it with deepfakes and no one will know what the truth is anymore
drawkbox t1_j9tm7pq wrote
Yeah, humans really aren't ready for the manipulation aspect. It won't really be conscious, but it will have so many responses and manipulation points that it will feel conscious, like magic, like it's reading minds.
Our evolutionary responses and reactions are being played already.
It was "if it bleeds it leads" but now is "enragement is engagement". The enragement engagement algorithms are already being tuned from supposed neutral algorithms but they already have bias and pump different content to achieve engagement.
With social media being real time and somewhat of a tabloid, the games with fakes and misinformation will be immense.
We might already be seeing videos of events, protests, or war, for instance, that are completely fake and slipping past a kind of Turing test. That's the scary thing: we won't really know when it has crossed that line. Even just for pranks, humans will use this the way they use everything. You almost can't trust it already.
addiktion t1_j9u7vr0 wrote
I wonder how long until some country is lured into a false flag attack by this. It's gonna get scary not just because of what the AI will be capable of, but because it means zero trust in anything you see, hear, or are exposed to once this happens; which means more authoritarian methods will need to be imposed to ensure authentic communication.
danielravennest t1_j9unv51 wrote
> authoritarian methods will need to be imposed to ensure authentic communication
No. That's actually one use for blockchains: record an event, derive a hash value from the recording, and post the hash to a blockchain, which time-stamps it. If several people independently record the same event from different viewpoints and their timestamps agree, you can be pretty sure it was a real event.
"People" can be a municipal streetcam, and security cameras on either side of a street, assuming the buildings have different owners. If they all match, it was a real event.
Ssider69 OP t1_j9tlduw wrote
True, of course. It says more about what goes on in the minds of developers, I think.
Effective-Avocado470 t1_j9tljlq wrote
Not just devs, but the input datasets. You're right that the devs curate those, but if they aren't careful it can go bad without them even trying to be malicious.
PacmanIncarnate t1_j9tnhx9 wrote
When you start asking an AI about feelings, it falls back on the training data that talked about feelings: probably a lot of writing about AI and feelings, which is almost completely negative "AI will destroy the world" stuff, so that's what you get.
It would be cool if the media could just try to use the technology for what it is instead of trying to find gotcha questions for it. I didn’t see anyone trying to use the original iPhone as a Star Trek style tricorder and complaining about how it didn’t diagnose cancer.
Effective-Avocado470 t1_j9tnss5 wrote
But that's inevitable with technology. People will use it however they can, not however it was designed to be used
The printing press and the internet both had a similarly insane impact on society when they first came around
PacmanIncarnate t1_j9tsnf5 wrote
There's just so much clickbait garbage misinforming people about this tech, and it wasn't always like this. Every cool new technology gets piled on, not for what it is, but for what will anger people. This sub alone seems to get at least one article a day questioning whether chatGPT wants to kill you/your partner/everyone. I'm all for exploring the crazy things you can make AI say, but it's being presented as a danger to society when it's just saying the words it thinks you want. And that fear-mongering has real downsides, as this article attests: companies are afraid to release their models; they're wasting resources censoring output; and companies that want to use the new tech are reluctant to because of the irrational public backlash.
Effective-Avocado470 t1_j9u2mbj wrote
That's not what I'm worried about; you're right about how people are jumping on the wrong things rn.
The danger is the potential for a malicious propaganda machine to be constructed with these tools and deployed by anyone
PacmanIncarnate t1_j9ug9wn wrote
But we already have malicious propaganda machines, and they aren't even that expensive to run. That's ignoring the fact that propaganda doesn't need to be sophisticated in any way to be believed by a bunch of people; we live in a world where anti-vaxxers and flat earthers regularly twist information to support their irrational beliefs. Marjorie Taylor Greene recently posted a tweet in which she used three made-up numbers to support her argument. There isn't anything chatGPT or Stable Diffusion or any other AI can do to our society that isn't already being done at large scale with ordinary existing technology.
Effective-Avocado470 t1_j9ujxpo wrote
It's scale; that's what makes AI so scary. You can use exactly the same propaganda techniques but put out 1000x more content, all auto-generated: entire fake comment threads online.
Then they can make deepfaked content that says whatever they want. They could convince the world that the president has started nuclear war, for example: deepfake an address, etc. And that's just one example
Our entire view of reality and truth will change
PacmanIncarnate t1_j9utde0 wrote
We've had publicly available deepfake tech for several years now, and it has largely been ignored, other than the occasional news story about deepfake porn. The VFX industry managed to make a video of Forrest Gump talking to Nixon decades ago. Since then, few people have taken the time to use that tech for harm. It's just unnecessary: if you want someone to believe something, you generally don't have to convince them; you just have to say it and get someone else to back you up. Even better if it confirms their existing beliefs.
I guess I just think our view of reality and truth is already pretty broken and it didn’t take falsified data.
Effective-Avocado470 t1_j9uu68n wrote
It's still new. The tech isn't quite perfect yet; you can still tell it's fake, so it's mostly jokes for now. The harm will come when you really can't tell the difference. It'll be here sooner than you think, and you may not even notice it happening until it's too late
I agree that many people's grasp on reality is already slipping; I'm agreeing with you on what's happened so far. I'm saying it'll get even worse with these new tools
Even rational and intelligent people will no longer be able to discern the truth
Justin__D t1_j9viizo wrote
> trying to find gotcha questions for it.
That's QA's job.
> Microsoft
Oh.
drawkbox t1_j9tmxme wrote
Yeah, devs aren't really in control once they feed in the datasets. Over time there will be manipulation/pollution of datasets, whether deliberate or unwitting, and it can have unexpected results. Any system that really needs to be logical should think hard about whether it wants that attack vector. For things like idea generation this may be fine; for standard data queries or decision trees that carry liability, probably not.
The Unity game engine has an ad network this happened to: one quarter their ads were really out of whack, and it came down to bad datasets. AI can be a business risk; it caused real revenue issues. We are going to be hearing more and more of these stories.
The Curious Case of Unity: Where ML & Wall Street Meet
> One of the biggest game developers in the world sees close to $5 billion in market cap wiped out due to a fault in their ML models
Kaekru t1_j9u4o4e wrote
This literally has nothing to do with developers. Do you think every reply and every bit of context is put in there by someone?
If you prompt an AI chatbot to start talking about dark things, SURPRISE, it will start talking about dark things. You're baiting the AI into that topic and then acting surprised when it interacts with you on it. This is exactly what all these "concerning" articles and news stories about AI are doing.
Ssider69 OP t1_j9u96pu wrote
Literally, the developers are the ones designing the system. Anything it does is on them: their failure to recognize a problem is the same as directly causing it.
I used "literally" because that, in Gen Z speak, means "no, I really mean it"
Kaekru t1_j9ubdjz wrote
That's not how fucking AI works, my guy.
AI chatbots are not sentient; a chatbot takes the topic you give it and parrots it back at you, drawing on its data from past conversations about that topic.
If you prompt the AI to talk about death, it is steered toward death and will give you a reply about death. If you prompt it to talk about self-awareness, it will give you replies about self-awareness.
That's how it works, simple: you can get a chatbot to say pretty much anything you want given the right triggers. It doesn't mean it's sentient, or that its replies were put in there by hand, or that it was pre-programmed by a depressed developer.
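You can see this for yourself with a minimal sketch, assuming the Hugging Face transformers library is installed. GPT-2 is just a small stand-in here, not the model behind any of these chatbots; any causal language model behaves the same way.

```python
from transformers import pipeline

# GPT-2 used only because it's small enough to run locally.
generator = pipeline("text-generation", model="gpt2")

# Whatever topic the prompt introduces, the continuation stays on it.
for prompt in ["Let's talk about death.", "Are you self-aware?"]:
    out = generator(prompt, max_new_tokens=40, do_sample=True)
    print(out[0]["generated_text"])
```

The model isn't choosing to be dark; the prompt simply shifts which part of the training distribution it samples its continuation from.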
Ssider69 OP t1_j9ucc5f wrote
AI chatbots aren't sentient??? Holy fuck... you're kidding me....
IOW... no shit.
My point, "my guy", is that any system that routinely fucks up as much as AI chat does is the result of designers not thoroughly testing. And if it's not ready for prime time, don't release it.
Or is that too direct a concept... "my guy"?
AI chat is just another example of dressing up mounds of processing power to do something that seems cool but is both flawed and useless.
It kind of sums up the industry really, and in fact most of the IT business right now
Kaekru t1_j9ucvr1 wrote
>is that any system that routinely fucks up as much as AI chat does is the result of designers not thoroughly testing
Any system that learns from experience will be fucked up if people fuck with it.
The same way if you raise a child to be a fucked up person they will become a fucked up adult.
You don't seem to understand jack shit about machine learning processes. A "fool-proof" chatbot wouldn't be a good chatbot at all, since it wouldn't be able to operate outside its pre-determined replies and topics.
businessboyz t1_j9v3n69 wrote
>And if it's not ready for prime time, don't release it
Good thing they didn't, and this has been an open-waitlist beta so that the developers can gather real-world experience and update the product accordingly.
You can't ever anticipate all the ways users will use your product and design a fail-proof piece of software. That's why products go through many stages of testing and release, with wider and more public audiences at each iteration.