Aromatic_Highlight27 OP t1_jegvgen wrote
Reply to comment by Pallidus127 in How soon will people be comfortable with being treated only by machines, as opposed to AI-assisted human medical doctors? by Aromatic_Highlight27
Let's say an AI diagnosed you with cancer (hopefully not). Would you undergo chemo or surgery without consulting a human doctor? And if the doctor disagreed with the AI, who would you trust?
Pallidus127 t1_jegxed9 wrote
Personally, I’d want a biopsy to confirm the diagnosis, but after that I’d follow the AI-prescribed course of treatment.
If a human doctor disagreed, I’d want the two of them to talk it through and figure out why. The theoretical “medical model” is going to know A LOT more than the human doctor, but maybe the human doctor has made a creative leap to some conclusion. So let them talk and find out why they disagree.
Aromatic_Highlight27 OP t1_jegy36p wrote
Do you really have this kind of trust in CURRENT systems? I'm not thinking of knowledge here, but of reasoning capabilities. Current systems still have a lot of limitations and make mistakes, don't they? Of course a human expert can also get things wrong, but are we really at the point where a machine error is less likely, and less likely to be catastrophic? Keep in mind I'm comparing pure AI vs. AI-assisted doctors.
Also, since you say you'd already trust a medical AI, could you tell me which one is already powerful enough to have earned that trust from you?
Pallidus127 t1_jeh0ca0 wrote
Current systems? Maybe GPT-4. I don’t know how much medical data is in its training dataset, though. I’d rather have a version of ChatGPT fine-tuned on terabytes of medical data.
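For what it's worth, the open-weights version of that idea looks roughly like this. A minimal sketch with Hugging Face Transformers, using GPT-2 as a stand-in (GPT-4 itself can't be fine-tuned locally); the model name, file name, and hyperparameters are all illustrative assumptions, not a recipe for a clinical-grade system:

```python
# Sketch: continued pretraining of an open language model on medical text.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "gpt2"  # stand-in for a much larger model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Hypothetical corpus: one plain-text file of de-identified medical notes.
dataset = load_dataset("text", data_files={"train": "medical_notes.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="medical-lm",
        num_train_epochs=1,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    # mlm=False -> plain causal language modeling, no masking
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The hard part isn't the training loop, it's the "terabytes of medical data": records are siloed, protected, and messy, which is a big reason no such model exists off the shelf.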
I think it’s not so much a huge amount of trust in the AI doctor as it is distrust in the U.S. medical system. Doctors only seem to care about getting you in and out as fast as possible; I don’t think any doctor is giving real thought to my maladies. So why not have GPT-4 order some tests and interpret the results? I doubt it could do any worse than an overworked doctor.