Submitted by ADefiniteDescription t3_z1wim3 in philosophy
phanta_rei t1_ixdowtx wrote
Reply to comment by d4em in The Ethics of Policing Algorithms by ADefiniteDescription
It reminds me of the algorithm that handed longer sentences to minorities. If I am not mistaken, it took factors like income and spat out a score predicting whether the defendant would recidivate or not. The result was that minorities were disproportionately affected by it…
d4em t1_ixdroz5 wrote
Oh yeah, this is a whole rabbit hole. There are also algorithms being trained by people to identify subjective values, such as "niceness." These are notoriously biased as well; as biased, in fact, as the people who train them. But unlike those people, the AI's opinion won't be changed by actually getting to know the person it's judging. It gives 100%-confident, biased results.
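The mechanism here is simple enough to sketch: if the training labels come from biased annotators, the bias is baked in before the model sees a single example, and the trained model then repeats it with total confidence. This is a minimal toy illustration with invented names and scores, not any real system:

```python
# Toy sketch: a "niceness" classifier is only as unbiased as the annotators
# who labeled its training data. All names and scores below are invented.
from statistics import mean

# Two annotators rate the same people; annotator 2 systematically
# rates the out-group person lower.
ratings = {
    "person_a": [4, 4],   # in-group: both annotators agree
    "person_b": [4, 1],   # out-group: annotator 2's bias drags the label down
}

# Training labels are the mean of annotator scores, so the bias is
# part of the "ground truth" from the start.
labels = {name: mean(rs) for name, rs in ratings.items()}

# A model trained on these labels reports the baked-in judgment with
# full confidence, and unlike a human it never revises it on acquaintance.
def predict_niceness(name):
    return labels[name]  # no uncertainty estimate, no updating

print(predict_niceness("person_b"))  # 2.5 -- lower, purely from annotator bias
```

No amount of clever modeling downstream can recover information the labels never contained; the biased score looks exactly like a legitimate one.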
Or the chatbots that interpret written language and earlier conversations to simulate conversation. One of them was unleashed on the internet and was praising Hitler within 3 hours. Another, a scientific model designed to skim research papers and summarize them for scientists, answered that vaccines both can and cannot cause autism.
These don't bother me though. They're so obviously broken that no one will think to genuinely rely on them. What bothers me is the idea of this type of tech becoming advanced enough to sound coherent and reliable, because the same issues disrupting the reliability of the AI tech we have today will still be present; they're just limitations of the technology. Yet even today we have people hailing the computer as our moral savior that's supposed to end untruth and uncertainty. If the tech gets a facelift, I believe many people will falsely place their trust in a machine that just cannot do what is being asked of it, but tries its damnedest to make it look like it can.
glass_superman t1_ixdua43 wrote
As an example:
Just a couple of days ago, Meta took its scientific-paper-generation model offline because it would happily provide you with a real-sounding scientific paper on the history of bears in space.
https://futurism.com/the-byte/facebook-takes-down-galactica-ai
killertreatdev t1_ixec5vt wrote
Few things have described me better.
elmonoenano t1_ixeltji wrote
In the US the big problem is that, b/c of the legacy of redlining and segregation, a lot of these algorithms use zip codes, which have turned out to be just a proxy for race. So the pretrial release tools were basically making the decision based on race and age, but b/c no one in the court system actually knew how they worked, no one challenged them.
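The proxy effect is easy to demonstrate on synthetic data: even if race never appears as an input and both groups behave identically, residential segregation plus biased historical records makes a zip-code-based score reproduce the racial disparity. This is a hypothetical sketch with made-up zip codes and rates, not a real model:

```python
# Hypothetical illustration: a risk model never sees race, but zip code
# acts as a proxy for it under residential segregation. All numbers invented.
import random

random.seed(0)

# Synthetic population: two zip codes, fully segregated by group.
# Both groups have the SAME true reoffense rate.
def make_person(zipcode, group):
    reoffended = random.random() < 0.3  # identical base rate for everyone
    return {"zip": zipcode, "group": group, "reoffended": reoffended}

people = ([make_person("97201", "A") for _ in range(500)]
          + [make_person("97202", "B") for _ in range(500)])

# Historical records are skewed: group B's offenses were recorded more
# often, so the *training labels* carry the enforcement bias.
def recorded_label(p):
    catch_rate = 0.5 if p["group"] == "A" else 0.9
    return p["reoffended"] and random.random() < catch_rate

train = [(p["zip"], recorded_label(p)) for p in people]

# "Model": average recorded recidivism per zip code. Race is never an input.
def fit(rows):
    totals = {}
    for z, y in rows:
        s, n = totals.get(z, (0, 0))
        totals[z] = (s + int(y), n + 1)
    return {z: s / n for z, (s, n) in totals.items()}

scores = fit(train)
print(scores)  # zip 97202 scores markedly higher despite identical true behavior
```

Dropping the race column doesn't remove the race signal; any feature correlated with it smuggles the signal back in, which is why auditing the inputs alone isn't enough.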
Cathy O'Neil's got a bunch of good work on it. She had a book a few years ago called Weapons of Math Destruction.