Tech & Gadgets

ChatGPT diagnoses diseases better than human doctors: study

ChatGPT was able to outperform human doctors in diagnosing diseases and medical conditions in a study. The study’s findings were published last month and highlighted that artificial intelligence (AI) chatbots could be more efficient at analyzing patients’ histories and conditions and could make more accurate diagnoses. Although the study aimed to understand whether AI chatbots could help doctors make better diagnoses, the results unexpectedly revealed that OpenAI’s GPT-4-powered chatbot performed much better on its own than when paired with a doctor.

ChatGPT outperforms doctors in diagnosing diseases

The study, published in the journal JAMA Network Open, was conducted by a group of researchers at Beth Israel Deaconess Medical Center in Boston. The experiment aimed to find out whether AI could help doctors diagnose diseases better than traditional methods.

According to a New York Times report, the experiment involved 50 physicians, a mix of residents and attending physicians, who were recruited through several major hospital systems in the US. Each was given six case histories from patients and reportedly asked to propose a diagnosis for each case and to explain why they favored or excluded certain diagnoses. The doctors were also judged on whether their final diagnosis was correct.

To evaluate the performance of each participant, medical experts were reportedly selected as evaluators. Although they were shown the answers, they were not told whether an answer came from a doctor with access to AI, a doctor alone, or ChatGPT alone.

To eliminate the possibility of unrealistic case studies, the researchers reportedly chose case histories of real patients that have been used in research for decades but never published, to avoid contamination. This point is important because it means ChatGPT could not have been trained on the material.

The study’s findings were surprising. Doctors who did not use an AI tool to diagnose the cases had an average score of 74 percent, while doctors who used the chatbot scored an average of 76 percent. However, when ChatGPT analyzed the case studies and made diagnoses on its own, it scored an average of 90 percent.

While several factors could have influenced the outcome of the study, from the doctors’ experience levels to individual biases toward certain diagnoses, the researchers believe the results show that the potential of AI systems in medical settings cannot be ignored.
