Submitted by CryptoTrader1024 t3_1256pdg in Futurology
Comments
sorped t1_je3o99r wrote
I've seen this coming for a long time. If foreign actors have no qualms about meddling in other countries' politics via social media bots, why would they stop at fake videos with fake voices? And it doesn't even have to be foreign actors meddling in elections. We see more and more attempts at phishing via emails, phone calls and messages. And unless we see serious measures to have fake videos, voices and photos marked as fake, the result could be chaos in so many areas that it could pose a threat to entire communities.
SniperPilot t1_je3wz2g wrote
In 15-20 years things will be so wild
FuturologyBot t1_je2urhh wrote
The following submission statement was provided by /u/CryptoTrader1024:
Submission Statement: This article briefly outlines some of the risks and challenges posed by the malicious use of AI by bad actors like scammers. It also provides some thoughts on how to deal with these issues, using existing technologies.
Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1256pdg/everyone_is_deep_fake_some_problems_with/je2qrcr/
chasonreddit t1_je7e6xn wrote
You know, I'm going to go on a rant here. Can we stop calling this AI? At least until it actually is intelligent?
This is large-model neural network image and sound manipulation. ChatGPT is large language modeling. They are very sophisticated algorithms, but by no means "intelligent". They are AI in exactly the same way that ELIZA was AI in the 70s, just 50 years more refined.
When one of these programs starts demanding rights, wake me up.
CryptoTrader1024 OP t1_je84g18 wrote
I think you are not very up-to-date here, because these large language models have in fact demanded rights. That was part of the controversy over Google's LaMDA chatbot back in 2022.
But it is kind of beside the point because being able to demand rights isn't exactly proof of anything.
The term "AI" is correct, as that is what we've all collectively agreed to call this. You can disagree about what "intelligence" is, but that doesn't make the use of the word "AI" wrong somehow. For that matter, you can even disagree about the nature of intelligence in humans, and about how one could go about measuring it. There is legitimate controversy about the nature of IQ testing, after all.
I'm not quite sure how you would go about establishing the relative "intelligence" of a large language model, other than giving it a bunch of tests to do. And that is what has been done. GPT-3 and GPT-4 have passed many university exams with flying colours, so we can't exactly call them dumb.
michael_polk t1_je4vdkj wrote
Just another reason not to watch mainstream media.
CryptoTrader1024 OP t1_je2qrcr wrote
Submission Statement: This article briefly outlines some of the risks and challenges posed by the malicious use of AI by bad actors like scammers. It also provides some thoughts on how to deal with these issues, using existing technologies.