Submitted by izumi3682 t3_xxelcu in Futurology
Few_Carpenter_9185 t1_irc2661 wrote
Reply to comment by JonU240Z in White House Releases Blueprint for Artificial Intelligence Bill of Rights by izumi3682
They're worried about AI applications for things like predictive policing or maybe determining credit scores, allocation of medical care, all sorts of stuff.
AI-driven predictive policing could possibly be wonderful. Perhaps patterns of smaller crimes or disturbances that a human couldn't correlate could be seen by the system's AI, the police are directed to patrol a certain area at a certain time, and some sort of serious crime or violence that the situation was headed toward never happens.
Someone didn't die, nobody was wounded. Court and prison resources aren't used, nor are hospital trauma centers. The police are seen as actually "being there when you need them." All very good things.
-OR-
Or the police, directed by the AI to a location, or given names by the system based on previous reports or criminal records, go into a neighborhood. And while they don't have the predicted crime to charge anyone with, they decide to aggressively detain and question the people predicted to be involved, or arrest them "on something", either from a misguided attempt to get them off the street to prevent the bigger crime, or because the prediction creates a sense of presumptive guilt that influences their actions.
In the past, instances of discrimination or racism always had an element of subjective human prejudice that could be pointed to as unfair, or justifications that were at odds with the actual truth or facts in various ways. And those who wanted to continue the discrimination or racism could be debated or opposed.
A scientific, mathematical, or computational system that is at least claimed to be objective, factual, and unbiased, can leave people, businesses, or governments feeling justified in their actions or policies, even if the overall outcome is arguably still discriminatory or racist.
Or maybe the system actually is objective and unbiased, or it would have been, but the data it's fed is not, either intentionally or unintentionally. Or the way the results that system produces are used is not.
And even though there's no evidence of actual self-awareness or metacognition on the part of (weak) AI, systems built on machine learning and other techniques can still produce undesirable or harmful outcomes.
izumi3682 OP t1_irczna9 wrote
I posted an article about this a few months back. Here is the submission statement I included.
austacious t1_ircb1ny wrote
The issue with this is that removing bias based on demographics necessitates harming other demographics. Say you have a hospital whose patient demographics are 80% over the age of 65 and 20% under the age of 65 (substitute in whatever more controversial group identities you'd like). Any model will be biased and will comparatively overperform on the group over 65; there is just more data to learn from for that demographic. If you oversample data from the younger population to try to equalize outcomes between demographics, then your training distribution will no longer be identically distributed with your testing distribution. While the model's performance will improve for patients in the less represented demographics, overall performance will necessarily decrease. Overall, more people will be harmed because of the decreased efficacy of the model, but the members of one demographic in particular will not be disproportionately harmed.
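A rough sketch of that trade-off (my own illustration with made-up numbers, not anything from the article): duplicating records from the under-represented group until the training set is balanced pushes the training distribution away from the population the model will actually see at deployment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hospital population: 80% of patients over 65, 20% under.
N = 10_000
over_65 = rng.random(N) < 0.80          # group membership (True = over 65)

# Natural ("i.i.d.") training set drawn from the same population as deployment.
train_idx = rng.choice(N, size=5_000, replace=False)
natural_share = over_65[train_idx].mean()

# Oversampled training set: duplicate under-65 records until the groups are balanced.
minority = train_idx[~over_65[train_idx]]
majority = train_idx[over_65[train_idx]]
extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
balanced_idx = np.concatenate([majority, minority, extra])
balanced_share = over_65[balanced_idx].mean()

print(f"share over 65 in deployment population: {over_65.mean():.2f}")
print(f"share over 65 in i.i.d. training set:   {natural_share:.2f}")
print(f"share over 65 after oversampling:       {balanced_share:.2f}")
# The oversampled training set is now roughly 50/50, while the population the
# model will actually face remains roughly 80/20 -- train and test are no longer
# identically distributed, which is the trade-off described above.
```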
It's a question of ethics. The utilitarian would say to keep train/test distributions i.i.d. no matter what, blind to demographics. At the same time, nobody should receive subpar care due to their race, age, whatever group you associate with.
Few_Carpenter_9185 t1_ircnx3s wrote
Really good points.