
iqisoverrated t1_isov3ek wrote

You can do some things that help people who aren't familiar with the math. E.g. you can color in the pixels that most prominently went into making the decision. If the 'relevant' pixels are nowhere near the lesion, that's a pretty good indication the AI is talking BS.
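For anyone curious what that looks like in practice, here's a minimal gradient-based saliency sketch. It assumes a PyTorch classifier; the tiny stand-in model and the 224x224 input size are placeholders, not anything from a real diagnostic system.

```python
# Gradient saliency sketch: which pixels most influenced the prediction?
# (PyTorch assumed; `model` is a placeholder for a real classifier.)
import torch
import torch.nn as nn

model = nn.Sequential(          # stand-in network; swap in the real one
    nn.Flatten(),
    nn.Linear(224 * 224, 2),    # two classes: lesion / no lesion
)
model.eval()

image = torch.rand(1, 1, 224, 224, requires_grad=True)  # grayscale scan
logits = model(image)
logits[0, logits.argmax()].backward()   # gradient of the predicted class w.r.t. pixels

# Pixels with a large |gradient| contributed most to the decision.
# Overlay this map on the scan and check whether it sits on the lesion.
saliency = image.grad.abs().squeeze()
```

If the bright regions of `saliency` land on background tissue or image artifacts rather than the lesion, that's the red flag the comment is describing.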

Another idea being explored is to have the model select some images from the training set that it thinks show a similar pathology (or not) and display those alongside its output.
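One common way to do that is nearest-neighbour retrieval in the model's embedding space. This is a rough sketch under that assumption; the random embeddings just stand in for features produced by whatever network is being used.

```python
# "Show similar training cases": nearest neighbours by cosine similarity.
# (Embeddings from some feature extractor are assumed; random data here.)
import numpy as np

rng = np.random.default_rng(0)
train_embeddings = rng.normal(size=(1000, 128))  # one 128-d vector per training image
query_embedding = rng.normal(size=(128,))        # embedding of the new scan

# Cosine similarity between the query and every training embedding.
norms = np.linalg.norm(train_embeddings, axis=1) * np.linalg.norm(query_embedding)
similarity = train_embeddings @ query_embedding / norms

# Indices of the five most similar training images to display alongside.
top5 = np.argsort(similarity)[::-1][:5]
```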

The problem isn't so much that the AI makes mistakes (anyone can forgive that if the overall result is a net positive). The main problem is that it makes different mistakes than humans, i.e. if you over-rely on AI diagnostics, you run the risk of overlooking something a human would have easily spotted.
