Comments

Pallidus127 t1_jegu8dr wrote

I’d be pretty comfortable now, tbh. Actually, I’d be more comfortable with an AI general practitioner than a human one. It’s going to be more competent than the doctors I can afford.

When it comes to surgery, I’d prefer not to be in the alpha or beta waves though.

5

Aromatic_Highlight27 OP t1_jegvgen wrote

Let's say an AI diagnosed you with cancer (hopefully not). Would you take chemo or surgery without consulting a human doctor? And if the doctor disagrees, who would you trust?

0

Pallidus127 t1_jegxed9 wrote

Personally, I’d want a biopsy to confirm, but after that I’d follow the AI-prescribed course of treatment.

If a human doctor disagreed, I’d want them to chat and figure out why. The theoretical “medical model” is going to know A LOT more than the human doctor, but maybe the human doctor has made a creative leap to some conclusion. So let them talk and find out why they disagree.

5

Aromatic_Highlight27 OP t1_jegy36p wrote

Do you really have this kind of trust in the CURRENT systems? I'm not thinking of knowledge here, but of reasoning capabilities. Current systems do have a lot of limitations and make mistakes, don't they? Of course a human expert can also be wrong, but are we really at the point where a machine error is less likely, and less likely to be catastrophic? Keep in mind I'm comparing pure AI vs. AI-assisted doctors.

Also, since you say you'd already trust a medical AI, can you please tell me which one is already powerful enough to gain such trust from you?

1

Pallidus127 t1_jeh0ca0 wrote

Current systems? Maybe GPT-4. I don’t know how much medical data is in its training dataset though. I’d rather have a version of ChatGPT fine-tuned on terabytes of medical data.

I think it’s not so much a huge amount of trust in the AI doctor as it is distrust in the U.S. medical system. Doctors only seem to care about getting you in and out as fast as possible. I don’t think any doctor is giving any real thought to my maladies. So why not have ChatGPT-4 order some tests and interpret the results? I doubt it could do any worse than the overworked doctor.

2

Sashinii t1_jegrst8 wrote

June 24th, 2026, lunchtime.

Jokes aside, very soon, I think.

4

TemetN t1_jegw78p wrote

Define "people," I guess? A fifth? Half? Almost all? Like another commenter said, some people are already comfortable, and it's worth a reminder that in certain cases machine surgeons have been shown to outperform human ones. That said, even after this takes off, and setting aside how much of the population we mean, a huge amount of comfort will depend on soft factors such as early societal reactions and media coverage.

I do think people will at least start to be comfortable in significant numbers sooner rather than later. Mid-2020s, perhaps, for it to be relatively common (a fifth or so, enough not to be shocking), and by 2030 for general acceptance (a majority might consider one).

2

Aromatic_Highlight27 OP t1_jegwpz1 wrote

Let's put it another way. How long before it will be legal for a hospital (or a company, say) to make diagnoses and prescribe drugs without human doctors being involved in the process at any point?

3

TemetN t1_jegy8rk wrote

That's an interesting question, but I think it's probably even harder to answer, honestly, since that's largely a matter of social/cultural change. I'd particularly note how messy and incoherent America's drug laws are in this case.

In practice, I might actually expect something like a pill printer to render this obsolete rather than it happening in some other way.

1

Aromatic_Highlight27 OP t1_jegyip1 wrote

A pill printer, meaning people would be able to manufacture drugs at home? Even ignoring feasibility, do you think that kind of device would itself be legal? It seems even worse than an AI prescriber to me. Also, do you think this kind of capability will be available by the mid-2020s as well?

1

TemetN t1_jegzdms wrote

Basically two things here. The first is that different rules for various products, plus loopholes, mean they could likely pretty much just... sell it until the government did something. They could possibly even outright admit what it was doing, and the government might have trouble stopping it in the short term.

The second is that I think there'd probably be wholesale resistance to removing humans from the decision-making chain in the short/medium term. Don't get me wrong, I would actually generally favor both of these (presuming they were both mature technologies); I just don't think it's going to be technical progress that necessarily slows the AI prescription part (arguably, that might be doable now).

1

simmol t1_jegylbm wrote

I would be very comfortable if there were layers of safety in play, such that I am not getting an opinion from just a single machine. For example, multiple independent AIs that come to the same conclusion would be reassuring, and that can be done readily. A reflective module that checks those answers could be useful as well. Once you add multiple layers of protection and the system is proven to be very safe, then I no longer need a human doctor.
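
A rough sketch of what that consensus-plus-reflection layering could look like (the models and the reflective check here are hypothetical placeholders, not any real medical AI):

```python
# Hypothetical sketch: query several independent models, accept a diagnosis only
# if enough of them agree, then run a reflective check before returning it.
# None of these functions are real medical AIs; they are placeholders.
from collections import Counter
from typing import Callable, Optional

def consensus_diagnosis(
    models: list[Callable[[str], str]],      # independent diagnostic models
    reflector: Callable[[str, str], bool],   # reflective module: does the answer hold up for this case?
    case: str,
    min_agreement: float = 0.8,
) -> Optional[str]:
    """Return a diagnosis only if most models agree and the reflector signs off;
    otherwise return None, i.e. defer to a human."""
    answers = [model(case) for model in models]
    top_answer, votes = Counter(answers).most_common(1)[0]

    if votes / len(models) < min_agreement:
        return None  # the independent AIs disagree -> escalate
    if not reflector(case, top_answer):
        return None  # reflective check failed -> escalate
    return top_answer

# Toy usage with dummy stand-ins (illustration only):
if __name__ == "__main__":
    dummy_models = [lambda case: "benign"] * 5
    dummy_reflector = lambda case, answer: answer in {"benign", "malignant"}
    print(consensus_diagnosis(dummy_models, dummy_reflector, "example case"))  # -> benign
```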

2

Aromatic_Highlight27 OP t1_jegzp32 wrote

Yes, the question is: how soon do you think this will happen (the system being proven to be very safe) and become legally and sociologically accepted?

2