pharmaway123 t1_j1ntu65 wrote
Reply to comment by wrathtarw in Machine learning model reliably predicts risk of opioid use disorder for individual patients, that could aid in prevention by marketrent
Can you elaborate a bit? How would biases in the nationally representative claims data (or the researchers) here make this model less useful?
wrathtarw t1_j1ocd3y wrote
The same bias that is present in the medical system gets programmed into the algorithm: the way machine learning works is that it essentially condenses patterns from its source data and then uses them to determine the output. Garbage in, garbage out.
If the source is flawed, so too will be the algorithm: https://developer.ibm.com/articles/machine-learning-and-bias/
And the source is flawed: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8344207/
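As a minimal, hypothetical sketch of the "garbage in, garbage out" point (not the study's actual model or data): if one patient group's opioid use disorder is systematically under-documented in the claims used as training labels, a model fit to those labels will assign that group lower risk even when the true underlying risk is identical. All names, numbers, and the under-coding rate below are illustrative assumptions.

    # Toy illustration of label bias propagating into a model.
    # Assumptions: two groups with identical true risk, but group 1's
    # disorder is recorded in claims only 50% of the time.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20_000

    group = rng.integers(0, 2, n)               # 0 or 1
    risk_score = rng.normal(0, 1, n)            # shared clinical risk factor
    true_oud = (rng.random(n) < 1 / (1 + np.exp(-risk_score))).astype(int)

    # Biased labels: half of group 1's true cases never get coded.
    recorded_oud = true_oud.copy()
    missed = (group == 1) & (rng.random(n) < 0.5)
    recorded_oud[missed] = 0

    X = np.column_stack([risk_score, group])
    model = LogisticRegression().fit(X, recorded_oud)

    for g in (0, 1):
        mask = group == g
        print(f"group {g}: true rate {true_oud[mask].mean():.2f}, "
              f"mean predicted risk {model.predict_proba(X[mask])[:, 1].mean():.2f}")
    # Despite identical true rates, the model predicts lower risk for group 1:
    # the bias in the recorded labels has been condensed into the model.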
pharmaway123 t1_j1on8ck wrote
Right, and I'm asking, in this specific instance, given the rank-ordered feature importance from the study, how bias would concretely impact the results from this model.
wrathtarw t1_j1onilg wrote
Sorry, Reddit karma doesn't pay enough to do that analysis for you.
pharmaway123 t1_j1oqzaz wrote
Yeah, I figured it was just a nice sound bite without any actual thought behind it. Thanks for confirming.