Submitted by ADefiniteDescription t3_z1wim3 in philosophy
glass_superman t1_ixe1b9e wrote
Reply to comment by Sherlockian12 in The Ethics of Policing Algorithms by ADefiniteDescription
I didn't mean that the AI should be able to explain itself. I meant that we should be able to dig in to the AI and find an explanation for how it worked.
I'm saying that requiring either would limit AI and decrease its usefulness.
Already we have models that are too difficult to dig into to figure out why a choice was made. You can step through the math of a deep learning system and follow along, but you can't pinpoint the decision in there any more than you can root around in someone's brain to find the neuron responsible for a behavior.
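A minimal sketch of that point, using a toy two-layer network with made-up weights (the numbers and names are mine, purely for illustration): every arithmetic step is fully visible, yet no single weight "contains" the decision.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical weights: hidden layer maps 2 inputs -> 3 units,
# output layer maps 3 units -> 1 score.
W1 = [[0.9, -1.2], [0.4, 0.8], [-0.7, 0.3]]
W2 = [1.5, -0.6, 0.9]

def forward(x):
    # Every multiplication and addition here is inspectable...
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)))

score = forward([1.0, 0.0])
# ...but asking "which weight made the decision?" has no answer:
# the score is distributed across all of them at once.
print(round(score, 3))
```

You can trace the whole computation by hand, and still the "reason" for the output isn't located anywhere in particular. Scale this up to billions of weights and the interrogation problem only gets worse.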
Sherlockian12 t1_ixe3rh4 wrote
And you're missing the point of the field if you're making the trivial observation that working out an explanation decreases the usefulness.
That is the point. We want to decrease its usefulness and increase its accuracy in fields where accuracy is paramount. This is akin to the relationship between physics and math. In physics, we routinely make unjustified steps to make our models work. Then in math, we try to find a reasonable framework in which the unjustified steps are justified. Saying "math reduces the usefulness by requiring an explanation for seemingly okay steps" is to miss the point of what mathematics is trying to do.
glass_superman t1_ixeoz42 wrote
>And you're missing the point of the field if you're making the trivial observation that working out an explanation decreases the usefulness.
That's not what I said! I'm saying that limiting AI to only the explainable may decrease usefulness.
This is trivially true. Imagine that you have many AI programs, some of which you can interrogate and some of which you can't. You need to pick one to use. If you throw out the unexplainable ones, you have fewer tools. That's not a more useful situation.
>That is the point. We want to decrease its usefulness and increase its accuracy in fields where accuracy is paramount.
But accuracy isn't the same as explainability! A more accurate AI might be a less explainable one. Like a star poker player with good hunches vs a mediocre one with good reasoning.
We might decide that policing is too important to be unexplainable so we decide to limit ourselves to explainable AI and we put up with decreased utility of the AI in exchange. That's a totally reasonable choice. But don't tell me that it'll necessarily be more accurate.
> Saying "math reduces the usefulness by requiring an explanation for seemingly okay steps" is to miss the point of what mathematics is trying to do.
To continue the analogy, there are statements in math that have always been observed to be true yet we cannot prove them, and we might never be able to (the Riemann hypothesis, for example, is routinely assumed in results that depend on it). Yet we proceed as if they are true. We utilize that for which we have no explanation, because doing so makes our lives better than waiting around for a proof that might never come.
Already math utilizes the unexplainable. Why not AI?
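To make the "observed but unproven" point concrete, here's a quick sketch (function names are my own, not from the thread) that empirically checks the Goldbach conjecture: every even number greater than 2 appears to be a sum of two primes, yet nobody can prove it.

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n):
    """Return a pair of primes summing to even n > 2, if one exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None  # a single counterexample would disprove the conjecture

# Every even number we try decomposes, but no proof explains why.
assert all(goldbach_pair(n) is not None for n in range(4, 10_000, 2))
```

Mathematicians have verified this far beyond any range a script can reach, and we happily build on such observations while the proof remains out of reach, which is exactly the posture being suggested for unexplainable AI.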