OXFORD: A major new study has issued urgent warnings about the dangers of turning to artificial intelligence for medical advice, finding that popular AI chatbots can provide inaccurate, inconsistent and potentially harmful guidance.
Researchers from the Oxford Internet Institute and the Nuffield Department of Primary Care Health Sciences at the University of Oxford found that large language models, the technology behind many AI chatbots, frequently fail when real people ask them to interpret symptoms or recommend health-care actions. The results, published in Nature Medicine, show that users relying on these systems were no more likely to make safe or correct decisions than those using traditional methods such as internet searches or their own judgment.
In one of the largest real-world studies of AI health advice to date, participants were given detailed medical scenarios, ranging from severe headaches to persistent fatigue, and asked to consult AI tools. The researchers found that the responses often mixed good and poor advice, leaving users uncertain about what action to take.
“Despite all the hype, AI just isn’t ready to take on the role of the physician,” said Dr. Rebecca Payne, lead medical practitioner on the study. “Patients need to be aware that asking a large language model about their symptoms can be dangerous, giving wrong diagnoses and failing to recognise when urgent help is needed.”
Experts stress that while AI can assist with general information, it lacks the nuanced judgement and clinical context that trained health professionals provide. Independent research has also shown that people tend to over-trust AI responses, even when they are inaccurate.
As adoption continues to grow, public health authorities are calling for more rigorous testing, regulation and clear warnings to users about the limits of AI in high-stakes medical situations.
