Submitted by Overall-Importance54 t3_y5qdk8 in MachineLearning
trnka t1_ispbueg wrote
Although we can produce good models, there's a huge gap between a model that can imitate a doctor reasonably well and a software feature that's clinically helpful. That's been my experience doing ML in primary care for years.
If you build software features that influence medical decision-making, one of the biggest challenges is making sure the doctor knows when to rely on the software and when not to. There are also significant legal-liability issues around medical errors.
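To make the "knows when to rely on it" point concrete, here's a minimal sketch of one common mitigation, selective prediction: the feature only surfaces a suggestion when the model is confident and explicitly defers to the clinician otherwise. Everything here (the synthetic data, the 0.9 cutoff, the `suggest` helper) is hypothetical for illustration, not a real clinical system:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical features; a real system would use real patient data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; in practice tuned against clinical risk


def suggest(features):
    """Surface a suggestion only when the model is confident; otherwise defer."""
    probs = model.predict_proba(features.reshape(1, -1))[0]
    if probs.max() >= CONFIDENCE_THRESHOLD:
        return {"suggestion": int(probs.argmax()), "confidence": float(probs.max())}
    return {"suggestion": None, "note": "low confidence -- defer to clinician"}


print(suggest(X_test[0]))
```

In a real deployment the cutoff would be set against clinical risk tolerances, and the deferral path (what the doctor sees when the model abstains) matters as much as the model itself.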
If you're interested in the regulation side, the FDA updated its criteria for AI clinical decision support devices last month. There's a summary version, and the full version has more detail.
It's not hard to build a highly accurate diagnosis model, but it is hard to build a fully compliant one that actually saves time and does no harm.
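To illustrate why "highly accurate" is the easy part: on imbalanced data, a model can post impressive accuracy while still missing most cases of a rare, dangerous condition. A toy sketch on synthetic data (the class balance and model choice are assumptions, just to show the metric gap):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# Imbalanced synthetic data: ~2% positives, standing in for a rare serious diagnosis.
X, y = make_classification(n_samples=10000, weights=[0.98], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = model.predict(X_test)

# Accuracy can look excellent here, but "does no harm" hinges on
# recall for the rare class -- the cases you can't afford to miss.
print(f"accuracy: {accuracy_score(y_test, pred):.3f}")
print(f"recall on rare class: {recall_score(y_test, pred):.3f}")
```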
Overall-Importance54 OP t1_isq0zjz wrote
Thank you for the thoughtful comment! It's interesting to get your perspective.