
glass_superman t1_ixeoz42 wrote

>And you're missing the point of the field if you're making the trivial observation that working out an explanation decreases the usefulness.

That's not what I said! I'm saying that limiting AI to only the explainable may decrease usefulness.

This is trivially true. Imagine you have many AI programs, some of which you can interrogate and some you can't, and you need to pick one to use. If you throw out the unexplainable ones, you have fewer tools to choose from. That's not a more useful situation.
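The selection argument above is just the fact that a maximum over a subset can never exceed the maximum over the full set. A minimal sketch, with purely hypothetical model names and utility scores:

```python
# Hypothetical candidate models with made-up utility scores.
# Restricting the pool to explainable models can only lower
# (never raise) the best achievable utility, because
# max over a subset <= max over the full set.
models = {
    "explainable_a": {"explainable": True,  "utility": 0.72},
    "explainable_b": {"explainable": True,  "utility": 0.68},
    "black_box_c":   {"explainable": False, "utility": 0.91},
}

best_overall = max(m["utility"] for m in models.values())
best_explainable = max(
    m["utility"] for m in models.values() if m["explainable"]
)

assert best_explainable <= best_overall
```

Whether the gap is worth paying is a policy choice, which is exactly the trade-off described here.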

>That is the point. We want to decrease its usefulness and increase its accuracy in fields where the accuracy is paramount.

But accuracy isn't the same as explainability! A more accurate AI might be a less explainable one, like a star poker player with good hunches versus a mediocre one with sound reasoning.

We might decide that policing is too important to be left unexplainable, so we limit ourselves to explainable AI and accept the decreased utility in exchange. That's a totally reasonable choice. But don't tell me it will necessarily be more accurate.

> Saying "math reduces the usefulness by requiring an explanation for seemingly okay steps" is to miss the point of what mathematics is trying to do.

To continue the analogy, there are statements in math that have always been observed to be true yet cannot be proved (the Riemann hypothesis, for instance, which many results simply assume), and we may never be able to prove them. Yet we proceed as if they are true. We utilize that for which we have no explanation, because doing so makes our lives better than waiting around for a proof that might never come.

Already math utilizes the unexplainable. Why not AI?
