faen_du_sa t1_j1m3qts wrote
Reply to comment by poo2thegeek in Machine learning model reliably predicts risk of opioid use disorder for individual patients, that could aid in prevention by marketrent
Indeed. I'd imagine it would be extremely helpful in pointing to where to look in a lot of cases. Probably a while before we can rely on it exclusively, though; I'd also imagine that's responsibility-hell territory. Who gets the blame if someone dies because something wasn't discovered, the software team?
Pretty much all the same problems that arise with automated cars and insurance.
poo2thegeek t1_j1m457q wrote
Yeah, it’s certainly difficult and complicated. For example, I believe ML models looking at certain cancer scans have higher accuracy than experts looking at the same scans. In this situation, if someone is told by the scan that they have no cancer but it turns out they do, is the model really at fault?
I think what should be done in the meantime is twofold: models should have better uncertainty calibration (i.e., in the cancer-scan example, if the model says a person has an 80% chance of cancer, then of all scans that scored 80%, 80% should actually have cancer and 20% should not), and there should be a cutoff point above which an expert double-checks the scan (maybe anything over a 1% ML output).
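To make that concrete, here's a minimal sketch of that calibration check and referral cutoff. All names, the synthetic data, and the 1% cutoff are illustrative assumptions, not any particular deployed system:

```python
import numpy as np

def calibration_table(probs, labels, n_bins=10):
    """For each probability bin, compare mean predicted probability to the
    observed positive rate. A well-calibrated model has the two roughly equal
    in every bin, e.g. scans scored ~0.8 are positive ~80% of the time."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs >= lo) & (probs < hi)
        if mask.any():
            rows.append((lo, hi, probs[mask].mean(), labels[mask].mean(), mask.sum()))
    return rows

def needs_expert_review(probs, cutoff=0.01):
    """Flag every scan above the cutoff for a human double-check."""
    return probs > cutoff

# Synthetic stand-in for real scan scores, built to be perfectly calibrated.
rng = np.random.default_rng(0)
probs = rng.uniform(0, 1, 5000)
labels = (rng.uniform(0, 1, 5000) < probs).astype(int)

for lo, hi, pred, obs, n in calibration_table(probs, labels):
    print(f"[{lo:.1f}, {hi:.1f}): predicted {pred:.2f}, observed {obs:.2f} (n={n})")
print("sent to expert:", needs_expert_review(probs).sum(), "of", len(probs))
```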
DogGetDownFromThere t1_j1mmy4n wrote
> For example, I believe ML models looking at certain cancer scans have higher accuracy than experts looking at the same scans.
Technically true, but not practically. The truth of the statement comes from the fact that you can crank up the sensitivity on a lot of models to flag any remotely suspicious shapes, finding ALL known tumors in the testing/validation set, including those most humans wouldn’t find… at the expense of an absurd number of false positives. Pretty reasonable misunderstanding, because paper authors routinely write about “better than human” results to make their work seem more important than it is to a lay audience. I’ve met extremely few clinicians who are truly bullish on the prospects of CAD (computer-aided detection).
(I work in healthtech R&D; spent several years doing radiology research and prepping data for machine learning models in this vein.)
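To illustrate the tradeoff described above, here's a toy threshold sweep on synthetic scores (not real CAD output): as the threshold drops, sensitivity climbs toward 100% while false positives swamp the handful of real tumors.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
labels = (rng.uniform(size=n) < 0.02).astype(int)  # ~2% true tumors
# Positives score somewhat higher on average, with heavy overlap.
scores = np.where(labels == 1,
                  rng.normal(0.7, 0.2, n),
                  rng.normal(0.4, 0.2, n))

for threshold in (0.9, 0.7, 0.5, 0.3, 0.1):
    flagged = scores >= threshold
    tp = np.sum(flagged & (labels == 1))
    fp = np.sum(flagged & (labels == 0))
    print(f"threshold {threshold:.1f}: sensitivity {tp / labels.sum():.0%}, "
          f"false positives {fp} (vs {labels.sum()} real tumors)")
```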
UnkleRinkus t1_j1n3903 wrote
You didn't mention the other side, which is false negatives. Who gets sued if the model misses a cancer? Which it inevitably will.
Subjective-Suspect t1_j1pod2z wrote
Cancer and other serious conditions get missed and misdiagnosed all the time. No person or test is infallible. However, if you advocate properly for yourself, you’ll ask your doctor what other possible conditions you might have and how they arrived at their diagnosis.
Most doctors routinely tell you all this stuff anyway, but if they don’t, that’s a red flag to me. If that conversation isn’t happening, you won’t be prompted by their explanation to provide clarity or useful information you hadn’t previously thought important.
poo2thegeek t1_j1mqw02 wrote
Very interesting, thanks for the information! Goes to show that scientific papers don’t always mean usable results!
isleepinahammock t1_j1ndgzm wrote
I agree. It might be useful as an aid, but not as a final diagnosis. For example, maybe machine learning is able to discover some hitherto-unknown correlation between two seemingly unrelated conditions. That could be used as an aid in diagnosis and treatment.
For example, imagine a machine learning algorithm spat out a conclusion, "male patients of South Asian ancestry with a diagnosis of bipolar disorder have a 50% increased chance of later receiving a diagnosis of testicular cancer."
I chose these criteria off the top of my head, so they're meaningless. But bipolar disorder and testicular cancer are two diagnoses with seemingly very little connection, and it would be even more counterintuitive if the link only significantly affected South Asian men. So it's the kind of correlation that would be very unlikely to be found by any method other than big machine-learning studies. But biology is complicated, and sometimes very nonintuitive results do occur.
If this result were produced, and it was later confirmed by follow-up work, then it could be used as a diagnostic tool. Maybe South Asian men who have bipolar disorder need to be checked more often for testicular cancer. But you would be crazy to assume that just because a South Asian man is bipolar, he must automatically also have testicular cancer, or vice versa.
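To make the made-up numbers concrete, here's a toy sketch of how a follow-up study might quantify such a link as a relative risk; the cohort is synthetic and the 50% figure is baked in purely for illustration:

```python
import numpy as np

def relative_risk(has_a, has_b):
    """Risk of diagnosis B among patients with A, divided by the risk among
    patients without A. A value of 1.5 means a 50% increased chance."""
    return has_b[has_a].mean() / has_b[~has_a].mean()

# Synthetic cohort where the made-up "50% increased chance" holds by design.
rng = np.random.default_rng(2)
n = 1_000_000
has_a = rng.uniform(size=n) < 0.01                    # rare diagnosis A
base, boosted = 0.004, 0.006                          # 50% higher risk with A
has_b = rng.uniform(size=n) < np.where(has_a, boosted, base)

print(f"relative risk: {relative_risk(has_a, has_b):.2f}")  # roughly 1.5, up to sampling noise
```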