Sherlockian12 t1_ixe0hxe wrote

This misses the entire point of what explainable AI is. Asking humans to explain their intuition as a precondition for that intuition to be valid is certainly limiting for humans. But explainable AI isn't about asking the AI to explain itself. It's about being able to pinpoint, exactly or with high probability, the data on which the AI is basing its prediction. That is admittedly of little use, and needlessly limiting, for machine learning applications like predicting which food you might like best. It is immensely important, however, in areas like medical imaging, because we want to be sure that the input the AI is basing its decision on isn't some human-introduced artefact on the x-ray. A rough sketch of what that looks like in practice follows below.
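To make that concrete, here is a minimal sketch of one common way this is done for imaging models (occlusion sensitivity). It's illustrative only: `model.predict_proba` is a hypothetical stand-in for whatever classifier is being audited, not a real API.

```python
import numpy as np

def occlusion_map(model, image, patch=16):
    """Mask each patch of the image in turn and record how much the
    prediction drops. A large drop means the model was relying on that
    region -- which lets a human check whether the "evidence" is real
    anatomy or an artefact on the x-ray."""
    baseline = model.predict_proba(image)      # hypothetical: returns one probability
    h, w = image.shape
    heatmap = np.zeros((-(-h // patch), -(-w // patch)))  # ceil-divided grid
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0          # occlude one region
            heatmap[i // patch, j // patch] = baseline - model.predict_proba(masked)
    return heatmap
```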

It is for these fields that explainable AI is studied, where limiting the AI matters far less than being sure it isn't making a mistake. Suggesting that explainable AI is a dead end is therefore inaccurate, if not a mischaracterisation.

3

glass_superman t1_ixe1b9e wrote

I didn't mean that the AI should be able to explain itself. I meant that we should be able to dig into the AI and find an explanation for how it worked.

I'm saying that requiring either would limit AI and decrease its usefulness.

Already we have models that are too difficult to dig into to figure out why a choice was made. You can step through the math of a deep learning system and follow along, but you can't pinpoint the decision in there any more than you can root around in someone's brain to find the neuron responsible for a behavior.
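For instance (just a toy with random weights, nothing like a real model), you can print every intermediate value of a forward pass and still not find "the decision" anywhere in particular:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

x = rng.normal(size=8)                 # some input
h = np.maximum(0, W1 @ x + b1)         # hidden layer: 16 numbers you can inspect
y = 1 / (1 + np.exp(-(W2 @ h + b2)))   # output "probability"

print(h)   # every step of the math is visible...
print(y)   # ...but no single weight or activation, alone, explains the choice
```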

1

Sherlockian12 t1_ixe3rh4 wrote

And you're missing the point of the field if you're making the trivial observation that working out an explanation decreases the usefulness.

That is the point. We want to decrease its usefulness and increase its accuracy in fields where accuracy is paramount. This is akin to the relationship between physics and math. In physics, we routinely make unjustified steps to make our models work. Then in math, we try to find a reasonable framework in which those unjustified steps are justified. Saying "math reduces the usefulness by requiring an explanation for seemingly okay steps" is to miss the point of what mathematics is trying to do.

1

glass_superman t1_ixeoz42 wrote

>And you're missing the point of the field if you're making the trivial observation that working out an explanation decreases the usefulness.

That's not what I said! I'm saying that limiting AI to only the explainable may decrease usefulness.

This is trivially true. Imagine that you have many AI programs, some of which you can interrogate and some of which you can't, and you need to pick one to use. If you throw out the unexplainable ones, you have fewer tools to choose from. That's not a more useful situation.

>That is the point. We want to decrease its usefulness and increase its accuracy in fields where the accuracy is paramount.

But accuracy isn't the same as explainability! A more accurate AI might be a less explainable one. Like a star poker player with good hunches vs a mediocre one with good reasoning.

We might decide that policing is too important to be unexplainable, so we limit ourselves to explainable AI and put up with the decreased utility of the AI in exchange. That's a totally reasonable choice. But don't tell me that it will necessarily be more accurate.

> Saying "math reduces the usefulness by requiring an explanation for seemingly okay steps" is to miss the point of what mathematics is trying to do.

To continue the analogy, there are things in math that are always observed to be true yet that we cannot prove, and perhaps never will. Yet we proceed as if they are true. We utilize what we have no explanation for, because doing so makes our lives better than waiting around for a proof that might never come.
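Goldbach's conjecture is the classic example: every even number above 2 appears to be a sum of two primes, we can check it as far as we like, and it still has no proof. A quick sketch of that kind of checking:

```python
# Checking Goldbach's conjecture for small even numbers: observed to hold
# everywhere anyone has looked, yet still unproven.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_witness(n):
    """Return two primes summing to the even number n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

for n in range(4, 1001, 2):
    assert goldbach_witness(n) is not None  # no counterexample found (so far)
```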

Already math utilizes the unexplainable. Why not AI?

1