Submitted by ADefiniteDescription t3_z1wim3 in philosophy
notkevinjohn t1_ixe8e3s wrote
Reply to comment by glass_superman in The Ethics of Policing Algorithms by ADefiniteDescription
I don't necessarily agree that we need to have what you call 'unexplainable AI', and what I would call 'AI using machine learning', to solve the kinds of problems facing police today. I think you can have systems that are extremely unbiased and extremely transparent, written so explicitly that pretty much everyone can understand them.
But I do agree with you that it's a biased and incomplete argument to point out that automated systems work in ways that are opaque to the communities they serve while ignoring the fact that it's no better to have humans making those same completely opaque decisions.
glass_superman t1_ixem81g wrote
>I don't necessarily agree that we need to have what you call 'unexplainable AI'
To be more precise, I'm not saying that we must have unexplainable AI. I'm just saying that limiting our AI to only the explainable increases our ability to reason about it (good) but also decreases the ability of the AI to help us (bad). It's not clear if the trade-off is worth it. Maybe in some fields yes and in others no.
Most deep learning is already unexplainable, and even so it's not yet useful enough. Increasing both the usefulness and the explainability will be hard. Personally, I think maximizing both will be impossible. I also think useful quantum computers will be impossible to build. I'm happy to be proven wrong!
notkevinjohn t1_ixex7vp wrote
Yes, and I am pushing back on the spectrum of utility vs. transparency that you are suggesting. I think that the usefulness of having a transparent process, especially when it comes to policing, vastly outweighs the usefulness of any opaque process with more predictive power. I think you need to update your definition of usefulness to account for how useful it is to have processes that people can completely understand and therefore trust.
glass_superman t1_ixfsc4n wrote
I agree with you except for the part where you seem very certain that understanding trumps all utility. I am thinking that we might find some balance between utility and explainability. Presumably there would be some utilitarian calculus that would weigh the importance of explainability against the utility of the AI's function.
Like for a chess-playing AI, explainability might be totally unimportant, but for policing it matters. And for other stuff it's somewhere in the middle.
But say you have the choice between an AI that drives cars that you don't understand and an explainable one that is shown to lead to 10 times the fatalities of the first. Surely there is some level of increased fatalities at which you'd be willing to accept the unexplainable one?
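To make the trade-off concrete, here's a toy sketch of that utilitarian calculus (the weights and numbers are entirely made up, and obviously real policy choices can't be reduced to two scalars):

    # Toy utilitarian calculus: overall score is a weighted mix of raw performance
    # and explainability, with the weight chosen per domain (hypothetical numbers).
    def score(performance, explainability, explainability_weight):
        return (1 - explainability_weight) * performance + explainability_weight * explainability

    # Chess engine: explainability barely matters.
    print(score(performance=0.99, explainability=0.10, explainability_weight=0.05))  # ~0.95

    # Policing algorithm: explainability dominates the score.
    print(score(performance=0.99, explainability=0.10, explainability_weight=0.90))  # ~0.19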
Here's a blog with similar ideas:
https://kozyrkov.medium.com/explainable-ai-wont-deliver-here-s-why-6738f54216be
notkevinjohn t1_ixg4zez wrote
Yeah, I do think I understand the point you are trying to make, but I still don't agree. And that's because the transparency of the process is inextricable from your ability to see if it's working. In order for a legal system to be useful, it needs to be trusted, and you can't trust a system if you can't break it open and examine every part of it as needed. Let me give a concrete example to illustrate.
Take the situation described in the OP where police are not distributed evenly along racial lines in a community. Let's say the police spend 25% more time in the community of racial group A than in that of racial group B. Group A is going to assert that there is bias in the algorithm that leads to them being targeted, and if you cannot DEMONSTRATE that this is not the case, then you'll have the kind of rejection of policing we've been seeing throughout the country in the last few years. You won't be able to get people to join the police force, you won't get communities to support the police force, and when that happens it won't matter how efficiently you can distribute them.
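To illustrate what demonstrating that could look like, here's a minimal sketch of a fully transparent allocation rule (the formula and the numbers are hypothetical, and whether the inputs themselves are unbiased is of course a separate question):

    # Hypothetical, fully transparent rule: patrol hours are proportional to
    # verified emergency calls, so anyone can re-run the arithmetic themselves.
    calls = {"community_A": 500, "community_B": 400}  # made-up inputs
    total_hours = 1000

    total_calls = sum(calls.values())
    allocation = {c: total_hours * n / total_calls for c, n in calls.items()}
    print(allocation)  # {'community_A': ~555.6, 'community_B': ~444.4}

    # The 25% gap in patrol time traces directly to the 25% gap in call volume;
    # every step of the reasoning is open to inspection.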
Just as not crashing might be the metric by which you measure the success of an AI that drives cars, trust would be one of the metrics by which you would measure the success of some kind of AI legal system.
glass_superman t1_ixga3jd wrote
>And that's because the transparency of the process is inextricable from your ability to see if it's working.
Would you let a surgeon operate on you even though you don't know how his brain works? I would because I can analyze results on a macro level. I don't know how to build a car but I can drive one. Dealing with things that we don't understand is a feature of our minds, not a drawback.
>Take the situation described in the OP where police are not distributed evenly along racial lines in a community. Let's say the police spend 25% more time in the community of racial group A than in that of racial group B. Group A is going to assert that there is bias in the algorithm that leads to them being targeted, and if you cannot DEMONSTRATE that this is not the case, then you'll have the kind of rejection of policing we've been seeing throughout the country in the last few years. You won't be able to get people to join the police force, you won't get communities to support the police force, and when that happens it won't matter how efficiently you can distribute them.
Good point and I agree that policing needs more explainability than a chess algorithm. Do we need 100%? Maybe.
>Just like not crashing might be the metric with which you measure the success of an AI that drives cars; trust would be one of the metrics with which you would measure the success of some kind of AI legal system.
Fair enough. So for policing we require a high level of explainability, let's say. We have the choice of an unexplainable AI that saves an extra 10,000 people per year, but we opt for the explainable one because, despite the miracle, we don't trust it! Okay.
Is it possible to make a practical, useful policing AI with a high level of explainability? I don't know. It might not be. There might be many such fields where we never use AIs because we can't make them both useful enough and explainable enough at the same time. Governance, for instance.