Lydiafae t1_j1lybua wrote
Reply to comment by fiveswords in Machine learning model reliably predicts risk of opioid use disorder for individual patients, that could aid in prevention by marketrent
Yeah, you'd want a model with at least 95% accuracy.
Hsinats t1_j1lzbpr wrote
You wouldn't evaluate the model based on accuracy. If 5% of people became addicts, you could always predict they wouldn't and get 95% accuracy.
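A quick sketch of that base-rate trap, with made-up numbers matching the 5% figure above: a model that never flags anyone scores 95% accuracy while catching zero actual cases.

```python
# Hypothetical example: 5% positive rate, trivial "predict nobody" model.
labels = [1] * 5 + [0] * 95   # 5 future addicts, 95 not
preds = [0] * 100             # always predict "not at risk"

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
sensitivity = sum(p == 1 and y == 1
                  for p, y in zip(preds, labels)) / labels.count(1)

print(accuracy)     # 0.95 -- looks great
print(sensitivity)  # 0.0  -- catches no one
```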
godset t1_j1mdxec wrote
Yeah, these models are evaluated on sensitivity and specificity, and ideally each would be above 90% for this type of application. (Making these types of models is my job.)
Edit: the question of adding things like gender into predictive models is really interesting. Do you withhold information that legitimately makes the model more accurate? Black women really do have more prenatal complications; is building that into your model building in bias, or just reflecting bias in the healthcare system accurately? It's a very interesting debate.
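For anyone unfamiliar with the metrics named above, here's a minimal sketch (counts are invented for illustration): sensitivity is the fraction of true positives caught, specificity the fraction of true negatives correctly cleared.

```python
# Hypothetical confusion-matrix counts, both metrics above the 90% bar.
tp, fn = 92, 8      # of 100 actual cases, 92 flagged, 8 missed
tn, fp = 930, 70    # of 1000 non-cases, 930 cleared, 70 falsely flagged

sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate

print(sensitivity)  # 0.92
print(specificity)  # 0.93
```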
Devil_May_Kare t1_j1pkczk wrote
And then no one gets denied medical care on the advice of your software. Which is a significant improvement over the state of the art.