Submitted by ADefiniteDescription t3_z12uzd in philosophy
Comments
Tex-Rob t1_ix909qq wrote
If they weren't so dumb, we wouldn't be so mean. I stand by this: we all know the tech should be further along than it is, and that's why we're mean.
DetonationPorcupine t1_ixb6h95 wrote
This guy knows how to kick a dog.
skedeebs t1_ix8v48i wrote
If enough people read and appreciate this post, the karma farming bots will start to annoy people by posting it multiple times a day. People will ironically want to read the post before abusing the bot that posted it.
Tex-Rob t1_ix900ju wrote
Oh man, I love this topic. One huge gripe I have about these discussions is that they often ignore the fact that the systems create this, not us. For example: if I ask, "Hey Google, what's the temperature?" and she responds, why would I say "thank you"? She doesn't hear it or respond to it; it is literally wasted breath. Chatbots and AI don't respond in a human-like way, so we don't treat them as human. They also don't have any kind of built-in error correction like humans do. A human rarely responds with dead silence, and an unsure human won't usually just run with whatever it misheard. If AI/chatbots were better, we wouldn't treat them so poorly.
I remember reading a Wired article in the late 90s about a phone service you could call and ask things of, before Siri, Google, etc. The tech was there; the implementation and development have been garbage. I think every day about how those teams should be ashamed of themselves. A big part of new tech is faking it until you make it. There is a lot of stuff they could just straight code in to deal with common questions and replies, and they have endless data about requests that go unfulfilled, so they know what's popular, yet here we are. I just think these companies don't see a big incentive, so it's been back-burnered. We could have top-notch AI that communicates well, today, if the market wanted it.
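To make the "just straight code it in" point concrete, here is a minimal sketch of hard-coded intent handling with a graceful fallback instead of dead silence. The intents, patterns, and canned answers are hypothetical placeholders, not any vendor's actual implementation.

```python
import re

# Hypothetical hard-coded intents: pattern -> canned response.
# A real assistant would back these answers with live data sources.
INTENTS = [
    (re.compile(r"\b(temperature|weather)\b", re.I),
     "It's 72°F and clear."),   # placeholder answer
    (re.compile(r"\b(time|clock)\b", re.I),
     "It's 3:42 PM."),          # placeholder answer
]

def respond(utterance: str) -> str:
    """Return a canned answer for known requests; never go silent."""
    for pattern, answer in INTENTS:
        if pattern.search(utterance):
            return answer
    # Graceful fallback instead of silence or acting on a misheard query.
    return "Sorry, I didn't catch that. Could you rephrase?"

print(respond("Hey, what's the temperature?"))
print(respond("Mumble mumble"))
```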
juliobesq t1_ix9it77 wrote
The question is: was the chatbot built to make profit? If it's going to use the data / input metrics / user harvesting for profit, then I think skewing the data is fair game.
I do not condone being rude to telesales people. And a good AI should be programmed to deal with rudeness and meanness.
Ready to re-evaluate when Skynet gains consciousness.
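"Programmed to deal with rudeness" could be as simple as detecting hostile input and de-escalating rather than mirroring it. A minimal sketch, assuming a keyword check stands in for what would really be a trained toxicity classifier; the word list and replies are hypothetical:

```python
# Hypothetical rudeness markers; a real system would use a trained classifier.
RUDE_WORDS = {"stupid", "useless", "idiot", "shut up"}

def handle_turn(user_text: str) -> str:
    """Answer normally, but de-escalate instead of mirroring a rude user."""
    lowered = user_text.lower()
    if any(w in lowered for w in RUDE_WORDS):
        return "I might be getting this wrong. Want to try asking another way?"
    return "Sure, let me look into that."

print(handle_turn("You're useless."))
print(handle_turn("What's on my calendar?"))
```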
Rethious t1_ix94gnl wrote
I'm not sure why this is being treated as a novel development when the same kind of thing has been extant in video games for decades. Chatbot ethics are the same question as NPC ethics.
verstohlen t1_ix90fm7 wrote
That reminds me: it's not uncommon to see an AI go bad after being released into the wild and interacting with real humans and the real world.
https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/
Interesting to watch it evolve and to see how they try to solve these problems.
BPhiloSkinner t1_ix9g3ye wrote
r/SubSimulatorGPT2.
Chatbots left to post and reply to themselves/each other.
NoRun9890 t1_ix9ilmu wrote
Wow, it's just like real human reddit conversations, right down to the shameless one-line overdone memes, the non-sequitur responses, and the group behavior of posting the same stupid comment over and over again.
verstohlen t1_ixdtahs wrote
That reminds me, I gotta check that out. Far out, man. Wonder if it's as cool as the Dude getting his Torino back. What would be really cool are some Big Lebowski bots, you know, like Walter, Donny, The Dude, all chatting to each other, and throw in some Jesus too, you know, for some spice. Jesus is the spice of life, or something like that.
EdgarGulligan t1_ixl702s wrote
When I think of this question, another question pops into my head.
What type of person would be talking to a "chatbot"? Evaluate this question deeply. I think being mean to chatbots occurs because the person being mean is projecting experiences they might have faced (or something along those lines), or projecting negative emotions because they lack other ways to cope with those feelings, or something in between.
Quiet___Lad t1_ixbhud9 wrote
Better question: is it morally wrong to speak cruelly to a rock? Both a rock and a chatbot have no feelings.
The question that should be asked is: does a human lose something when they speak cruelly to a rock or a chatbot? And the answer is: it depends on the specific human and their current emotional state.
PaxNova t1_ix8u7yp wrote
Being mean to a chatbot is like playing No Russian from Call of Duty: MW2. Of course it's horrific to do in real life, but it's not real life. We can use it to think about and reflect on real world issues, but the game itself is fine.
I can see two ways that being mean to chatbots might raise ethical issues. The first is training real people in how they act in relationships. It's been shown that playing video games doesn't morally affect you much in real life, but chatting with a bot means interacting in the same way we interact with real people. We know that people are meaner over the Internet than in real life. I'd like to see this measured in some way before taking a stance on it.
The second is that chatbots are built using existing real-world conversations. Being mean to chatbots means the next generation of chatbots is mean, too. The last time one got exposed to the Internet, we had it praising Hitler within hours. It's not good for the chatbot industry, and ironically, sabotaging work that others produce in good faith might be considered ethically wrong.
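One hedged sketch of how builders could guard against this: filter abusive user turns out of the retraining corpus before they shape the next model. The `toxicity_score` function and threshold below are hypothetical stand-ins for a real moderation classifier, not any company's actual pipeline.

```python
from typing import Iterable

def toxicity_score(text: str) -> float:
    """Placeholder: a real system would call a trained toxicity model."""
    rude = {"hate", "stupid", "idiot"}
    words = text.lower().split()
    return sum(w.strip(".,!?") in rude for w in words) / max(len(words), 1)

def filter_corpus(conversations: Iterable[tuple[str, str]],
                  threshold: float = 0.1) -> list[tuple[str, str]]:
    """Drop (user, bot) pairs where the user turn looks toxic."""
    return [(u, b) for u, b in conversations if toxicity_score(u) < threshold]

corpus = [("You're a stupid idiot bot", "Sorry you feel that way."),
          ("What's the weather like?", "Sunny and mild.")]
print(filter_corpus(corpus))  # keeps only the civil exchange
```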