wkmowgli t1_ixeeivk wrote

For this example, we could train an algorithm to estimate the probability of a crime in an area given the amount of patrolling in that area, so the patrolling effect could be normalized out if the algorithm is designed properly. These algorithms need to be designed with a great deal of care. I do know there is active research and development into catching these biases early (even before deployment), but it'll never be perfect. So it'll likely be a cycle of hurting people, getting called out, getting fixed, and then going back to step one.
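A minimal sketch of what that normalization might look like, on synthetic data with a made-up saturating detection model (everything here, from the variable names to the detection curve, is an assumption for illustration):

```python
# Toy sketch: reported crime counts are modeled as
# true_rate * detection_prob(patrol_hours), so dividing reported counts by the
# assumed detection probability "normalizes out" the patrolling effect.
import numpy as np

rng = np.random.default_rng(0)
n_areas = 500

# Hypothetical per-area data: patrol hours and reported incident counts.
patrol_hours = rng.uniform(10, 200, size=n_areas)
true_rate = rng.gamma(shape=2.0, scale=5.0, size=n_areas)  # unobservable in practice
detection_prob = 1 - np.exp(-patrol_hours / 100)           # assumed saturating curve
reported = rng.poisson(true_rate * detection_prob)

# Adjust the reports using the same assumed detection model.
adjusted = reported / detection_prob

# Raw reports correlate with patrolling (the bias); adjusted rates mostly don't.
print("corr(reported, patrol_hours):", np.corrcoef(reported, patrol_hours)[0, 1])
print("corr(adjusted, patrol_hours):", np.corrcoef(adjusted, patrol_hours)[0, 1])
```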

3

littlebitsofspider t1_ixfq8bn wrote

I wonder what would happen if they took the Abraham Wald approach and designed a counterintuitive algorithm. Like, make a heatmap of violent crimes (assault, robbery, rape, etc.), then sic the algo on non-violent crimes, such as larceny and wire fraud, in the inverted (low-heat) areas. Higher-income areas are home to wealthier people, and statistically wealthier people are better positioned to commit high-dollar white-collar crimes. You could also use the hottest areas on the violence heatmap to target social services support. Roughly, the mechanics could look like the toy sketch below (the grid, coordinates, and incident data are all invented for illustration):
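```python
# Toy sketch: build a violence heatmap on a grid, invert it, and use the
# inverted map to direct attention toward white-collar offenses, while the
# original hot spots get the social-services weight.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical (x, y) locations of violent incidents inside a unit square,
# clustered in one corner to stand in for a high-violence neighborhood.
violent_xy = rng.normal(loc=0.3, scale=0.1, size=(2000, 2)).clip(0, 1)

# The violence heatmap: a 20x20 2D histogram scaled to [0, 1].
heat, _, _ = np.histogram2d(
    violent_xy[:, 0], violent_xy[:, 1], bins=20, range=[[0, 1], [0, 1]]
)
heat /= heat.max()

# Inverted map: quiet (often higher-income) cells get fraud/larceny attention;
# the hottest cells get social-services weighting instead.
fraud_attention = 1.0 - heat
social_services_weight = heat

hot_cell = np.unravel_index(heat.argmax(), heat.shape)
print("violence hotspot cell:", hot_cell)
print("fraud attention there:", fraud_attention[hot_cell])          # near 0
print("fraud attention in a quiet cell:", fraud_attention[15, 15])  # 1.0 here
```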

1