DogGetDownFromThere t1_j1mmy4n wrote
Reply to comment by poo2thegeek in Machine learning model reliably predicts risk of opioid use disorder for individual patients, that could aid in prevention by marketrent
> For example, I believe ML models looking at certain cancer scans have higher accuracy than experts looking at the same scans.
Technically true, but not practically meaningful. The claim holds because you can crank up the sensitivity on a lot of models so they flag any remotely suspicious shape, catching ALL known tumors in the test/validation set, including ones most humans would miss… at the expense of an absurd number of false positives. It's a pretty reasonable misunderstanding, though: paper authors routinely tout "better than human" results to make their work seem more important to a lay audience than it is. I've met extremely few clinicians who are genuinely bullish on the prospects of CAD (computer-aided detection).
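
To make the trade-off concrete, here's a toy sketch (made-up numbers and a fake score distribution, not taken from any real CAD model): push the decision threshold low enough and you hit 100% sensitivity, but only by flagging a huge fraction of healthy scans.

```python
import numpy as np

# Hypothetical illustration: 1,000 screening scans, ~2% contain a tumor.
rng = np.random.default_rng(0)
y_true = rng.random(1000) < 0.02

# Imperfect model: tumor scans score higher on average, but the
# distributions overlap, so no single threshold separates them cleanly.
scores = np.where(
    y_true,
    rng.normal(0.7, 0.15, 1000),  # scores for tumor scans
    rng.normal(0.3, 0.15, 1000),  # scores for healthy scans
)

# Lowering the threshold raises sensitivity (fraction of tumors caught)
# while the false-positive count explodes.
for threshold in (0.6, 0.4, 0.1):
    y_pred = scores >= threshold
    sensitivity = (y_pred & y_true).sum() / y_true.sum()
    false_positives = (y_pred & ~y_true).sum()
    print(f"threshold={threshold:.1f}  sensitivity={sensitivity:.0%}  "
          f"false positives={false_positives} of {(~y_true).sum()} healthy scans")
```

At the lowest threshold the model "finds every tumor," which is what the headline accuracy numbers advertise, but a radiologist would have to review hundreds of flagged healthy scans to get there.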
(I work in healthtech R&D; spent several years doing radiology research and prepping data for machine learning models in this vein.)