Submitted by ADefiniteDescription t3_z12uzd in philosophy
Tex-Rob t1_ix900ju wrote
Oh man, I love this topic. One huge gripe I have about these discussions is that they often ignore the fact that the systems create this behavior, not us. Example: if I ask, "Hey Google, what's the temperature?" and she responds, why would I say "thank you"? She doesn't hear it or respond to it; it is literally wasted breath. Chatbots and AI don't respond in a human-like way, so we don't treat them as human. They also don't have any kind of built-in error correction like humans do. Dead silence isn't something a human usually responds with, and when a human is unsure, they don't usually just answer with whatever they misheard. If AI/chatbots were better, we wouldn't treat them so poorly.
I remember reading a Wired article in the late 90s about a phone service you could call and ask things of, before Siri, Google, etc. The tech was there; the implementation and development have been garbage. I think every day about how those teams should be ashamed of themselves. A big part of new tech is faking it until you make it. There is a lot of stuff they could just straight-up code in to deal with common questions and replies, and they have endless data about requests that don't get fulfilled to know what is popular, yet here we are. I just think these companies don't see a big incentive in it, so it's back-burnered. We could have top-notch AI that communicates well, today, if the market wanted it.
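As a minimal sketch of what "straight coding in" common requests might look like, here's a hypothetical rule-based handler that falls back to a clarifying question instead of dead silence or a misheard guess. All the names and canned replies are made up for illustration, not anyone's actual assistant code:

```python
# Hypothetical sketch: a hard-coded table of common requests, plus a
# clarifying fallback instead of dead silence or answering a mishearing.

CANNED_REPLIES = {
    "temperature": "It's 72 degrees outside right now.",  # stand-in value
    "time": "It's 3:05 PM.",                              # stand-in value
    "thank": "You're welcome!",  # politeness actually gets acknowledged
}

def respond(utterance: str) -> str:
    text = utterance.lower()
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in text:
            return reply
    # Built-in error correction: ask the user to rephrase rather than
    # guessing from a likely mishearing or saying nothing at all.
    return "Sorry, I didn't catch that. Could you rephrase?"

if __name__ == "__main__":
    print(respond("Hey Google, what's the temperature?"))
    print(respond("thanks!"))
    print(respond("mumble mumble"))
```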
juliobesq t1_ix9it77 wrote
The question is "was the chatbot built to make profit?" If it's going to harvest data, input metrics, and user behavior for profit, then I think skewing the data is fair game.
I do not condone being rude to telesales people. And a good AI should be programmed to deal with rudeness and meanness.
Ready to re-evaluate when Skynet gains consciousness.