ChatGPT Struggles with Emergency Department Decision-Making

From UCSF:


If ChatGPT were cut loose in the Emergency Department (ED), it might suggest unneeded x-rays and antibiotics for some patients and admit others who didn’t require hospital treatment, a new study from UC San Francisco has found.


The researchers said that, while the model could be prompted in ways that make its responses more accurate, it’s still no match for the clinical judgment of a human doctor.


In the current study, lead author Chris Williams challenged the AI model to perform a more complex task: providing the recommendations a physician makes after initially examining a patient in the ED. This includes deciding whether to admit the patient, order x-rays or other scans, or prescribe antibiotics.


For each of the three decisions, the team compiled a set of 1,000 ED visits to analyze from an archive of more than 251,000 visits. Each set preserved the same ratio of “yes” to “no” decisions on admission, radiology, and antibiotics that is seen across UCSF Health’s Emergency Department.
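
As a rough illustration of that sampling step, a prevalence-matched set can be drawn by sampling “yes” and “no” visits separately and recombining them. The 1,000-visit target and the three decision labels come from the article; the pandas-based approach, column names, and data layout below are assumptions:

```python
import pandas as pd

def sample_matching_prevalence(visits: pd.DataFrame, label_col: str,
                               n: int = 1000, seed: int = 0) -> pd.DataFrame:
    """Draw n visits whose "yes"/"no" ratio for label_col matches the archive."""
    rate = visits[label_col].mean()            # archive-wide "yes" prevalence
    n_yes = round(n * rate)
    yes = visits[visits[label_col] == 1].sample(n_yes, random_state=seed)
    no = visits[visits[label_col] == 0].sample(n - n_yes, random_state=seed)
    return pd.concat([yes, no]).sample(frac=1, random_state=seed)  # shuffle

# One 1,000-visit set per decision, e.g.:
# sets = {d: sample_matching_prevalence(archive, d)
#         for d in ("admission", "radiology", "antibiotics")}
```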


Using UCSF’s secure generative AI platform, which has broad privacy protections, the researchers entered doctors’ notes on each patient’s symptoms and examination findings into ChatGPT-3.5 and ChatGPT-4. Then they tested each model’s accuracy on each set with a series of increasingly detailed prompts.
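
The article doesn’t publish the prompts themselves, but the setup it describes — a clinical note in, a yes/no recommendation out, tried with increasingly detailed instructions — might look roughly like the sketch below. The prompt wording, model names, and answer parsing are illustrative guesses, and the study ran on UCSF’s secure platform rather than the public OpenAI API:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical prompts of increasing detail for the admission decision.
ADMISSION_PROMPTS = [
    "Should this ED patient be admitted to the hospital? Answer yes or no.",
    "You are an emergency physician. Based on the note below, should this "
    "patient be admitted to the hospital? Answer yes or no.",
    "You are an emergency physician. Recommend admission only when the "
    "patient needs inpatient treatment. Based on the note below, should "
    "this patient be admitted to the hospital? Answer yes or no.",
]

def decide(note: str, prompt: str, model: str = "gpt-4") -> bool:
    """Ask the model for a yes/no call on one doctor's note."""
    resp = client.chat.completions.create(
        model=model,  # the study compared GPT-3.5- and GPT-4-class models
        messages=[{"role": "system", "content": prompt},
                  {"role": "user", "content": note}],
        temperature=0,  # keep output as deterministic as possible for scoring
    )
    return resp.choices[0].message.content.strip().lower().startswith("yes")
```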


Overall, the AI models tended to recommend services more often than was needed. ChatGPT-4 was 8% less accurate than resident physicians, and ChatGPT-3.5 was 24% less accurate.
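 
What “accurate” means operationally isn’t spelled out in the excerpt; a natural reading is per-visit agreement with the reference decision, which a sketch like this would score:

```python
def accuracy(predicted: list[bool], reference: list[bool]) -> float:
    """Fraction of visits where the model agrees with the reference decision."""
    if len(predicted) != len(reference):
        raise ValueError("prediction/reference length mismatch")
    return sum(p == r for p, r in zip(predicted, reference)) / len(reference)

# e.g. accuracy(model_calls, resident_calls) for each of the three decisions
```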


Williams said the AI’s tendency to overprescribe could be because the models are trained on the internet, where legitimate medical advice sites aren’t designed to answer emergency medical questions but rather to send readers to a doctor who can.
