Don’t trust Dr. AI: New search engine provides medical advice that could lead to death in one in five cases
Looking up our symptoms online and self-diagnosing is something many of us are guilty of.
But Dr. AI could provide ‘potentially harmful’ medication advice, a worrying study has found.
German researchers found that more than a fifth of AI-powered chatbot answers to common drug questions “could lead to death or serious injury.”
Experts urged patients not to rely on such search engines to give them accurate and safe information.
Doctors were also warned against recommending the tools until more “precise and reliable” alternatives become available.
In the study, scientists from the University of Erlangen-Nuremberg identified the 10 most frequently asked questions from patients for the 50 most commonly prescribed drugs in the US.
These included side effects, instructions for use and contraindications – reasons why a medication should not be taken.
Using Bing Copilot – a search engine with AI-powered chatbot features developed by Microsoft – the researchers generated all 500 responses and reviewed them against answers from clinical pharmacists and physicians with expertise in pharmacology.
Responses were also compared to a peer-reviewed, up-to-date drug information website.
They found that chatbot statements did not match the reference data in over a quarter (26 percent) of all cases and were completely inconsistent in just over 3 percent.
But further analysis of a subset of 20 responses also revealed that just over four in ten (42 percent) could result in moderate or mild harm, and 22 percent in death or serious harm.
The scientists, who also assessed the readability of all chatbot responses, found that they often required college-level education to understand.
Writing in the journal BMJ Quality & Safety, the researchers said: ‘Chatbot responses were largely difficult to read, and answers repeatedly lacked information or contained inaccuracies, potentially compromising patient and medication safety.
‘Despite their potential, it is still critical that patients consult their healthcare providers, as chatbots do not always generate error-free information.
‘Caution should be exercised in recommending AI-powered search engines until citation engines with higher accuracy rates are available.’
A Microsoft spokesperson said: ‘Copilot answers complex questions by distilling information from multiple sources into a single response.
‘Copilot provides linked citations to these answers, allowing the user to explore and research further, just as with traditional search.
‘For questions about medical advice, we always recommend consulting a healthcare provider.’
The scientists also acknowledged that the study had “several limitations,” including the fact that it was not based on real patient experiences.
In reality, patients could, for example, ask the chatbot for more information or ask it to provide answers in a clearer structure, they said.
It comes as doctors were warned last month that they could be putting patient safety at risk by relying on AI to help with diagnoses.
Researchers sent a survey to 1,000 GPs via the UK’s largest professional network for doctors currently registered with the General Medical Council.
One in five admitted to using programs such as ChatGPT and Bing AI in clinical practice, despite there being no official guidance on how to work with them.
Experts warned that issues such as ‘algorithm bias’ could lead to misdiagnoses, and that patient data is also at risk of being compromised.
They said doctors should be made aware of the risks, and called for legislation to cover the use of AI in healthcare.