Submitted by Overall-Importance54 t3_y5qdk8 in MachineLearning
111llI0__-__0Ill111 t1_isoa3uq wrote
Reply to comment by iqisoverrated in [D] What is the deal with breast cancer scans? by Overall-Importance54
The whole explainability thing is becoming ridiculous: all these fancy techniques may be "explainable" on paper, but the explanations still won't mean anything to someone without math knowledge.
And even simple regressions have interpretation problems, like the Table 2 fallacy. Completely overrated
iqisoverrated t1_isov3ek wrote
You can do some things that help people who aren't familiar with the math. E.g. you can color in the pixels that contributed most to the decision. If the 'relevant' pixels are nowhere near the lesion, that's a pretty good indication that the AI is telling BS.
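A minimal sketch of one way to do this is occlusion sensitivity: mask each pixel in turn and measure how much the model's score drops. Everything here is a stand-in (the `score` function is just a fixed linear scorer, not a real mammography model), but the shape of the technique is the same.

```python
import numpy as np

# Stand-in "model": a fixed linear scorer over an 8x8 image.
# A real classifier's confidence would go here instead.
rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8))

def score(img):
    return float((weights * img).sum())

def occlusion_map(img):
    """Heatmap of how much each pixel contributed to the score."""
    base = score(img)
    heat = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            occluded = img.copy()
            occluded[i, j] = 0.0                  # mask this one pixel
            heat[i, j] = base - score(occluded)   # drop in score = importance
    return heat

img = rng.random((8, 8))      # fake single-channel "scan"
heat = occlusion_map(img)     # bright spots = pixels the decision leaned on
```

Overlaying `heat` on the scan lets a radiologist check whether the highlighted region actually coincides with the lesion.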
Another idea being explored is to have the model select images from the training set that it thinks show a similar pathology (or not) and display those alongside.
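That retrieval step is usually a nearest-neighbor lookup in some embedding space. A hedged sketch, with random vectors standing in for the embeddings a real feature extractor would produce:

```python
import numpy as np

# Stand-in embeddings for 100 training cases; a real system would get
# these from the network's penultimate layer or similar.
rng = np.random.default_rng(1)
train_embeddings = rng.normal(size=(100, 16))

def nearest_cases(query_emb, k=3):
    """Indices of the k training cases closest to the query embedding."""
    dists = np.linalg.norm(train_embeddings - query_emb, axis=1)
    return np.argsort(dists)[:k]

# A query very close to training case 42 should retrieve case 42 first.
query = train_embeddings[42] + 0.01 * rng.normal(size=16)
similar = nearest_cases(query)
```

The retrieved training images (with their known ground-truth labels) then get shown next to the new scan as "cases the model thinks look like this one".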
The problem isn't so much that AI makes mistakes (anyone can forgive that if the overall result is a net positive). The main problem is that it makes different mistakes than humans, i.e. if you over-rely on AI diagnostics you run the risk of overlooking something a human would have easily spotted.