Submitted by ethereal3xp t3_1237gfw in technology
Marchello_E t1_jdtpfq2 wrote
An AI is just an algorithm. It needs a tool to attack. With the US's (I think failed) logic of "guns don't kill people but people do" then who would be doing the attacking part?
fitzroy95 t1_jdtqa1l wrote
Attacks don't have to be physical.
An AI could destroy your reputation by publishing deepfakes online, or propaganda/slander against you. It could attack your credit rating via online transactions, or wreck your life by taking over your work email and sending abusive emails to the boss...
Lots of ways that a malicious system could attack a person, or groups. Doesn't need to be particularly "intelligent" either.
And that's without even looking at it taking control of your smart car and driving it off a cliff, etc.
Marchello_E t1_jdw3ujt wrote
Yes, but the real question is: who is to blame, who gets thrown in jail, who gets fined?
Is Elon Musk to blame for the update? Is the bank to blame for allowing these online transactions?
The question is: is the person who instructed the AI to blame, since the AI doesn't actually understand the implications of what it's doing? Or is the AI itself to blame, and if so, what happens to those who instructed it? And what if the AI is just "glitching"?
__-___--- t1_jdv5aao wrote
People are the tool to attack.
A crazy robot that kills people isn't much of a threat because it's obvious and can be stopped.
But using people through religion and other political beliefs, that's extremely dangerous and impossible to stop.
We're going to need to teach people to personally double-check everything the AI is doing, but we already know some people will lie and blindly apply something they don't understand.
Critical thinking is our weakness.
Marchello_E t1_jdw4dqm wrote
Many people are already proxies; look at the storming of the US Capitol, or Brexit, Russian propaganda, or advertising in general. Are those who get manipulated to blame, or the ones doing the manipulating?
__-___--- t1_jdx2d3f wrote
True, but the major difference is that you can tell who benefits from it.
That doesn't prevent some people from being in denial about it, but if you want to know, you'll find out.
An AI doesn't have human motivations or plans on a human scale. So what do you look for? How do you know if it's advising against your own interests?