
MrEloi t1_jbe0c0x wrote

I have just retired from another medical domain.

TBH 95%+ of my job could have been automated.

A nurse or similar with basic training could have operated the equipment, and an AI could have instructed them on the required actions.

My main contribution was quizzing the patient to elicit what was really going on, rather than what they said was happening.

A personal AI avatar could do this work - or the nurse could be prompted to ask a series of targeted questions.

No doubt, many medical domains could be fully (or almost fully) automated.


WolfInAMonkeySuit t1_jbe5zit wrote

Everybody lies.

The AI tools we have now seem too trusting and take users' input at face value. I wonder what research would suggest about making AIs more skeptical of the humans who need their help.

Also, trusting an AI that doesn't trust its users sounds sketchy.


MrEloi t1_jbf97nv wrote

>Everybody lies.

In medicine, patients often say X but mean Y.

It's not really lying.

As a practitioner, it's your job to drag this info out of them.
