Submitted by Visual-Arm-7375 t3_y1zg5r in MachineLearning
TenaciousDwight t1_is578zw wrote
Reply to comment by graphicteadatasci in [P] Understanding LIME | Explainable AI by Visual-Arm-7375
I think the paper is saying that LIME may explain a model's prediction using features that actually matter little to the model. I suspect this is tied to the instability problem: run LIME twice on the same point and you can get two significantly different explanations.
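To make the instability concrete, here's a minimal NumPy sketch of the LIME-style recipe (sample perturbations around a point, weight them by proximity, fit a weighted linear model, read off the coefficients). This is a hand-rolled illustration, not the actual `lime` package; the function name, kernel, and sampling scheme are my own simplifications. Re-running it with a different random seed gives different coefficients for the same point, which is exactly the instability described above.

```python
import numpy as np

def lime_like_explanation(x, predict_fn, n_samples=200, kernel_width=0.75, seed=None):
    # Sketch of LIME for tabular data: sample Gaussian perturbations
    # around x, query the black box, weight samples by proximity to x,
    # and fit a weighted linear surrogate whose coefficients serve as
    # the per-feature "explanation".
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    Z = x + rng.normal(scale=1.0, size=(n_samples, d))   # perturbed neighbors
    y = predict_fn(Z)                                    # black-box predictions
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-dist**2 / kernel_width**2)               # exponential proximity kernel
    Zb = np.hstack([Z, np.ones((n_samples, 1))])         # add intercept column
    WZ = Zb * w[:, None]
    # Weighted least squares: solve (Zb^T W Zb) beta = Zb^T W y
    beta, *_ = np.linalg.lstsq(WZ.T @ Zb, WZ.T @ y, rcond=None)
    return beta[:-1]                                     # drop the intercept

# Toy black box that truly depends only on feature 0.
f = lambda X: (X[:, 0] > 0).astype(float)

x0 = np.zeros(5)
e1 = lime_like_explanation(x0, f, seed=0)
e2 = lime_like_explanation(x0, f, seed=1)
# Two runs, same point, different sampling -> different explanations.
print(e1.round(3))
print(e2.round(3))
```

With only a few hundred samples, the coefficients on the four irrelevant features are pure sampling noise, so their values (and even their ranking) shift between runs; that's the mechanism behind getting two different explanations for the same prediction.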