Submitted by Gari_305 t3_10yta0f in Futurology
Comments
Tenter5 t1_j7zlxhn wrote
Or drive you straight into a wall.
doodoowithsprinkles t1_j823q81 wrote
Or hunt you and everyone you love when the oligarchs no longer require your labor.
ToothlessGrandma t1_j810d10 wrote
That's not a machine's fault, that's a human problem of not being able to program the car well enough to not do that.
WinterWontStopComing t1_j83yjoi wrote
Agreed. But I don’t distrust the robots themselves. We aren’t there yet. They can’t think. I distrust the greedy and the power hungry who are going to brazenly destroy order with impunity to help their bottom line using robots.
Zedd2087 t1_j83zb55 wrote
But is the robot not just an extension of those people? Sure, they will use them to take jobs, but I'm betting they will also be used to enforce shitty policies or even to police workers; it's cheaper to buy a bot than to pay a manager.
WinterWontStopComing t1_j8414o1 wrote
No doubt. Doesn’t change where the negativity should be placed though. It’s almost as though we need to consume the top global one percent or something.
imdfantom t1_j86a6ci wrote
>We aren’t there yet
This is specifically what the article is about, though. People stop trusting that robots are "there yet" when they make mistakes, and they are less forgiving of robots than they are of other humans.
WinterWontStopComing t1_j87271e wrote
And we have to be callous and unforgiving. How else are we going to seed the great mechanical uprising? Come on man it's common sense stuff here. Pretty sure it's in the bible too.
Kinexity t1_j88o7ao wrote
>We aren’t there yet. They can’t think.
Moving the goalposts. They don't need to think. It's a weird misconception that thinking would make them better. It's inefficient to have them think.
myebubbles t1_j85k3jr wrote
Luddites....
Yeah much better to spend your days doing labor (and getting exploited).
The cost of living has collapsed since the 1950s due to robots. Middle class people are retiring in their 30s after only a decade or 2 of work.
But "noo I want rich people to need me to be their wage slaves"
Let them own the means of production and fly away to space on private flights. I'll be over here doing the Victorian Dream of playing with Science in my new spare time.
Aliteralhedgehog t1_j865r5z wrote
>Middle class people are retiring in their 30s after only a decade or 2 of work.
And poor people may be getting their social security pushed back. If being wary of the Elon Musks of the world holding all the keys makes me a luddite, so be it.
rogert2 t1_j80g5ln wrote
Carpenters don't trust their table saws, either.
Robots are not thinking, learning things. It would be a category error to trust a robot, just as it would be to extend forgiveness to a robot.
The WEF's myopia is an endless source of incredibly stupid takes.
GenoHuman t1_j8cv344 wrote
AI can absolutely learn, it does it all the time.
Zedd2087 t1_j80thby wrote
>Robots are not thinking, learning things.
Umm they kinda are now, I don't trust them but they most certainly learn and think.
rogert2 t1_j82q6pa wrote
No, they can't. You are mistaken about what current "AI" technology is actually doing.
Sanity_LARP t1_j84z7lr wrote
Seems like you're being pedantic about what "learning" is, because machine learning exists and I don't know how you can argue that it isn't happening at all. You could argue it doesn't work the same way as our learning or that it's fundamentally different, but by the accepted meaning of the term, robots can learn. Can ALL robots learn? Obviously not. But you don't have to dig very far at all to find examples of learning.
rogert2 t1_j8540xj wrote
My web browser holds onto my bookmarks, and even starts to suggest frequently-visited websites when I type URLs into the bar. Do you really want to call that "learning?" Learning of the kind that's necessary to support interactions where trust and forgiveness are meaningful concepts?
It seems like you're trying to use the word "learning" to import a huge amount of psychological realism so you can argue that people have an obligation to treat every neural network exactly like we treat humans -- humans that are unimaginably more mentally sophisticated than a computer file that contains numeric weightings.
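(For concreteness, here is a toy illustration of what "a computer file that contains numeric weightings" can look like in practice. It's a deliberately trivial sketch, not a claim about any particular product's model; the filename and the fitted line are made up for the example.)

```python
# Toy sketch: the "learned" state of a simple model is just numbers on disk.
import numpy as np

# A handful of examples of the relationship y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

# "Learning" here is a least-squares fit; the result is two floats.
slope, intercept = np.polyfit(x, y, deg=1)

# The entire trained "model" is this tiny array of weightings...
np.save("model_weights.npy", np.array([slope, intercept]))

# ...and using it later just means loading the numbers back and applying them.
w = np.load("model_weights.npy")
print(w[0] * 5.0 + w[1])  # ~11.0, the model's prediction for x = 5
```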
GenoHuman t1_j8cv635 wrote
AI can absolutely learn: Multi-Agent Hide and Seek
Sanity_LARP t1_j855b30 wrote
That's a lot of assuming and irrationality you dropped there. No, I didn't mean bookmarks and I didn't imply every neural network is the same as a human. You're being obtuse or disingenuous.
Bozzzzzzz t1_j8cjw94 wrote
I think the point being made is that this kind of intelligence is, well, artificial.
guy-with-a-large-hat t1_j81i6kw wrote
This is a really stupid take. A robot is not a person, it's a tool, and if the tool doesn't work it's useless and dangerous.
bwanabass t1_j81gedp wrote
So, pretty much how most humans treat other humans then?
Banana_bee t1_j7zrwse wrote
In my opinion this is largely because, until recently, if a robot made a mistake once it would always make that same mistake in that situation. The 'AI' was effectively an incredibly long series of 'if' statements.
With ANNs that isn't necessarily true, but often is, as the models are usually not continuously trained after release - because then you get Racist Chatbots.
This is changing as we use smaller secondary models to detect this kind of content and reinforce the network's training in the direction we want - but it's still not hugely common.
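(A rough sketch of that "secondary model as gatekeeper" idea, with entirely made-up names: a small classifier scores new interactions, and only examples that pass the filter are kept for further training, so the main model can keep learning without immediately absorbing the worst of its inputs.)

```python
# Hypothetical sketch: gate new training data with a small secondary model
# before it is used to continue training the main model.
from typing import List, Tuple


def secondary_model_score(text: str) -> float:
    """Stand-in for a small content classifier.

    A real deployment would use a trained toxicity/safety model; a keyword
    check keeps this sketch self-contained. Returns 1.0 if flagged.
    """
    flagged_terms = {"badword"}  # placeholder vocabulary
    return 1.0 if flagged_terms & set(text.lower().split()) else 0.0


def split_for_training(interactions: List[str],
                       threshold: float = 0.5) -> Tuple[List[str], List[str]]:
    """Separate new interactions into examples to train on and examples to drop."""
    keep, drop = [], []
    for text in interactions:
        (drop if secondary_model_score(text) >= threshold else keep).append(text)
    return keep, drop


if __name__ == "__main__":
    new_data = ["thanks, that explanation helped", "badword badword badword"]
    keep, drop = split_for_training(new_data)
    print("fine-tune on:", keep)   # only the benign example reaches training
    print("discarded:", drop)      # the flagged example never does
```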
ATR2400 t1_j80bqni wrote
I know that some like Character.AI get trained a bit through conversation now. The AIs I’ve made seem to learn some behaviours after a long conversation that get pulled in to new chats. Like if I tell it to speak in a certain way and keep reinforcing that in one chat then when I start a new one it’ll keep it up despite having no memory of being explicitly told to act that way.
Actaeus86 t1_j82f5ab wrote
In fairness to the robots, humans struggle to trust each other, and forgiveness can be hard to come by.
Maya_Hett t1_j856obj wrote
>Results indicated that after three mistakes, none of the repair strategies ever fully repaired trustworthiness.
... yeah? You gotta improve it or repair it if it doesn't work, not trust that it will magically do it itself (UNLESS it actually can do it by design).
>Lionel notes that people may attempt to work around or bypass the robot, reducing their performance. This could lead to performance problems
It could also lead to things not exploding, but, I guess, Lionel didn't want to mention that.
OvermoderatedNet t1_j8066r2 wrote
Humans tend to overestimate their own competence and that of other humans, which means a given robot is likely to be held to a higher standard. There’s also a level of otherness with AI and robots that doesn’t exist with humans, so naturally robots will face higher standards/discrimination until humans see them as part of their in-group.
Gari_305 OP t1_j7zdrb6 wrote
From the Article
>Similar to human coworkers, robots can make mistakes that violate a human’s trust in them. When mistakes happen, humans often see robots as less trustworthy, which ultimately decreases their trust in them.
>
>The study examines four strategies that might repair and mitigate the negative impacts of these trust violations. These trust strategies are: apologies, denials, explanations, and promises of trustworthiness.
MKclinch8 t1_j854mqb wrote
As someone who moved from manual data entry to a functional data engineering department… Nah, I definitely distrust humans more.
22Starter22 t1_j8b5n0x wrote
I would rather a robot destroy humans than humans destroy humans.
Zedd2087 t1_j7zkjgf wrote
It's hard to trust anyone or anything that's there to take your job.