LosTheRed t1_iwu1bq0 wrote

How can we be confident in the output of ML? Is there a way to trace the decision making, or is it entirely dependent on the training data?

2

should_go_work t1_ix06vav wrote

Depending on the model, "tracing" the output is certainly possible; decision trees are a classic example, since each prediction follows an explicit path of threshold tests you can read off directly. As far as confidence is concerned, you might find the recent work on conformal prediction interesting: it wraps essentially any model to produce prediction sets or intervals that are guaranteed to contain the true value at a specified confidence level (under an exchangeability assumption on the data). A really nice tutorial can be found here: https://people.eecs.berkeley.edu/~angelopoulos/publications/downloads/gentle_intro_conformal_dfuq.pdf
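To make the idea concrete, here's a minimal sketch of *split* conformal prediction for regression, using only NumPy and a toy linear model (the data, the model, and the 90% target level are all illustrative assumptions, not from the tutorial itself). The key steps are: fit on one half of the data, compute absolute residuals on a held-out calibration half, take a finite-sample-corrected quantile of those residuals, and use it as a symmetric interval width around new predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed for illustration): y = 2x + noise
x = rng.uniform(0, 10, 500)
y = 2 * x + rng.normal(0, 1, 500)

# Split into a fitting set and a calibration set
x_fit, y_fit = x[:250], y[:250]
x_cal, y_cal = x[250:], y[250:]

# A deliberately simple model: least-squares line
slope, intercept = np.polyfit(x_fit, y_fit, 1)

def predict(x):
    return slope * x + intercept

# Conformity scores: absolute residuals on held-out calibration data
scores = np.abs(y_cal - predict(x_cal))

# Quantile with the finite-sample correction used in split conformal:
# the ceil((n+1)(1-alpha))/n-th empirical quantile of the scores
alpha = 0.1  # target 90% coverage
n = len(scores)
level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
q = np.quantile(scores, level, method="higher")

# Prediction interval for a new point: [prediction - q, prediction + q]
x_new = 5.0
lo, hi = predict(x_new) - q, predict(x_new) + q
print(f"90% interval at x={x_new}: [{lo:.2f}, {hi:.2f}]")
```

Note the guarantee is *marginal* coverage over fresh exchangeable data, not a statement about any single prediction; the tutorial linked above covers the distinction in detail.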

1