elehman839 t1_jd8gnav wrote
To me, "AI turns evil" scenarios seem a ways out. The more-probable scenario in the near term that concerns me is nasty PEOPLE repurposing AI to nasty ends. There are vile people who make malware and ransomware. There are people who've lived wretched lives, are angry about that, and just want to inflict damage wherever they can. These folks may make up 0.001% of the population, but that's still a lot of people worldwide.
So how are these folks going to use AI to cause as much damage as possible? If they had control of an AI, they could give it the darkest possible intentions. Maybe something like, "befriend people online over a period of months, then gradually start undermining their sense of self-worth and encourage them to commit suicide". Or "relentlessly make calm, rational-sounding arguments in many online forums under many identities that <some population group> is evil and should be killed".
As long as AI is super compute-intensive, there will be a check on this behavior: if you're running on BigCorp's cloud service, they can terminate your account. But when decent AI can run on personally-owned hardware, I think we're almost certain to see horrific stuff like this. It may not end the world, but it will be quite unpleasant.
Traveshamockery t1_jd8s4zo wrote
>But when decent AI can run on personally-owned hardware, I think we're almost certain to see horrific stuff like this.
On March 13th, a Stanford team announced Alpaca 7B, a ChatGPT-3.5-esque model they claim runs on a $600 home computer.
https://crfm.stanford.edu/2023/03/13/alpaca.html
Then on March 18th, someone claimed to have gotten Alpaca 7B running on their Google Pixel phone.
https://twitter.com/rupeshsreeraman/status/1637124688290742276
blueSGL t1_jdafkx8 wrote
https://github.com/ggerganov/llama.cpp [CPU inference with comparatively low memory requirements (LLaMA 7B running on phones and a Raspberry Pi 4) - no fancy front end yet; see the sketch below for how little code this takes]
https://github.com/oobabooga/text-generation-webui [GPU inference with a nice front end and multiple chat and memory options]
/r/LocalLLaMA
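To give a sense of how low the bar already is, here's a minimal sketch of local inference. It assumes the llama-cpp-python bindings for llama.cpp rather than the raw C++ CLI, and that you've already downloaded quantized 7B weights yourself; the model path is hypothetical:

```python
# Minimal sketch: local CPU inference via the llama-cpp-python bindings
# (pip install llama-cpp-python). Assumes quantized LLaMA 7B weights are
# already on disk; the model path below is hypothetical.
from llama_cpp import Llama

# Load the 4-bit quantized model. llama.cpp runs inference on the CPU,
# so no dedicated GPU is needed.
llm = Llama(model_path="./models/llama-7b-q4_0.bin")

# Generate a short completion. The stop sequences cut generation off
# before the model starts writing its own follow-up "Q:" line.
output = llm(
    "Q: Can a 7B-parameter model run on a laptop? A:",
    max_tokens=64,
    stop=["Q:", "\n\n"],
)
print(output["choices"][0]["text"])
```

The 4-bit quantization is what shrinks the memory footprint enough (roughly 4 GB for 7B parameters) to fit on a phone or a Raspberry Pi.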
Wh00pty t1_jd8pysq wrote
The flip side is that we could get much better, AI-driven auto-moderation. The good guys will get AI too.
zerobeat t1_jd90zdq wrote
> nasty PEOPLE repurposing AI to nasty ends
So...the owners, then?
Interesting_Mouse730 t1_jd9bprp wrote
Agreed. The imminent, direct danger of AI is bad actors, setting aside whatever chaos widespread adoption will cause in the economy and labor market.
That said, I don't like how quick so much of the media and the tech industry are to dismiss the spookier sci-fi apocalypse scenarios. They may be a ways out, but we don't know what is or isn't possible. The most damaging consequences may come from something initially benign or seemingly harmless. We just don't know yet, but that doesn't mean we should stick our heads in the sand.
Mercurionio t1_jdbvsz1 wrote
We know exactly what can and will happen. There are two scenarios:

- A single gestalt AI consciousness, once it starts creating its own tasks. At that point, tech will either stop advancing, because the AI will see no usefulness in existence, or it will pursue its tasks without stopping. Humans will be an obstacle, either to be ignored completely or to be gotten rid of.

- Before any gestalt, people will use AI as a tool to seize power over others: through propaganda, a fake world, fake artists, and so on. This scenario is already happening in China.

In both cases, the freaks working on this are responsible for the chaos that follows, because they should have understood it before starting the work. Also, just look at ClosedAI. They are the embodiment of everything bad that could happen with AI development.