The computer sees you now! One in five GPs uses AI to diagnose and take notes, despite the risk of errors

A study suggests GPs could be jeopardising patient safety by relying on AI to make diagnoses.

One in five GPs admit to using programs such as ChatGPT and Bing AI in their clinical practice, despite the lack of official guidelines for their use.

Experts warned that problems such as ‘algorithmic bias’ could lead to misdiagnoses, and that patient data is also at risk of being compromised. They said doctors should be made aware of the risks and called for legislation covering the use of AI in healthcare.

Researchers sent the survey to 1,000 GPs currently registered with the General Medical Council, via the UK’s largest professional network for doctors.

Clinicians were asked if they had ever used any of the following in any aspect of their clinical practice: ChatGPT, Bing AI, Google’s Bard, or ‘Other’. More than half of respondents (54 percent) were 46 years of age or older.

A study has found that one in five GPs use AI to make diagnoses (file image)

One in five (20 percent) reported using generative AI tools in their clinical practice.

Nearly one in three (29 percent) reported using these tools to generate documentation following patient appointments.

A similar number (28 percent) said they used the tools to suggest a differential diagnosis, the findings published in the BMJ showed.

One in four (25 percent) said they used the tools to suggest treatment options, such as possible medications or referrals.

The researchers – including scientists from Uppsala University in Sweden and the University of Zurich – said that while AI can be useful in supporting documentation, it is “prone to creating incorrect information”.

They write: “We caution that these tools have limitations, as they may contain subtle errors and biases.

“They also risk causing harm and undermining patient privacy, because it is unclear how the internet companies behind generative AI use the information they collect.”

While chatbots are increasingly the target of regulatory efforts, it remains “unclear” how legislation will apply to the use of these tools in clinical practice, they added.

Researchers say AI, while useful for supporting documentation, is ‘prone to creating misinformation’ (file photo)

Doctors and medical specialists must be fully informed about the pros and cons of AI, especially because of the ‘inherent risks’ it carries, they conclude.

Professor Kamila Hawthorne, chair of the Royal College of GPs, admitted the use of AI is “not without potential risks” and called for strict regulation of its implementation in general practice to ensure the safety of patients and the security of their data.

She said: ‘Technology will always need to work alongside and complement the work of doctors and other healthcare professionals. It can never be seen as a replacement for the expertise of a qualified medical professional.

‘There is clear potential for the use of generative AI in general practice, but it is crucial that this is implemented carefully and closely regulated in the interests of patient safety.’
