(CTN News) – Riley Lyons is a fourth-year ophthalmology resident at Emory University School of Medicine. He’s responsible for triaging patients with eye-related complaints.
Many of his patients turn to “Dr. Google” first. Online, Lyons said, they might find that “any number of terrible things can be going on.”
When his fellow Emory ophthalmologists suggested he evaluate ChatGPT’s accuracy in diagnosing eye complaints, Lyons jumped at the chance.
Lyons and his colleagues reported in medRxiv, an online publisher of health science preprints, that ChatGPT performed better than WebMD’s symptom checker and human doctors who reviewed the same symptoms.
Despite ChatGPT’s well-known “hallucination” problem – its tendency to sometimes make outright false statements – the Emory study found that when presented with a standard set of eye complaints, it made no “grossly inaccurate” statements.
Lyons and his coauthors were surprised by ChatGPT’s relative proficiency. Co-author Nieraj Jain, who specializes in vitreoretinal surgery and diseases at Emory Eye Center, said the artificial intelligence engine “is definitely better than Google.”
Artificial intelligence filling in the gaps
However, the results highlight a challenge for the health care industry as it assesses the promise and pitfalls of generative AI.
Even though chatbots may be more accurate than Dr. Google, there are still a lot of questions about how to integrate this new technology into the health care system with the same safeguards that have always been in place for drugs or medical devices.
Generative AI has drawn extraordinary attention from all sectors of society, with some comparing its future impact to the internet itself. Radiology and medical records are among the areas where companies are using generative AI feverishly.
Even though consumer chatbots are already widely available – and better than many alternatives – there’s still caution. Many doctors believe AI-based medical tools should go through an approval process like the one for drugs, but that would take years, and it’s unclear how such a regime might apply to chatbots.
“We have access issues, and whether or not it’s a good idea to deploy ChatGPT to cover the gaps in access, it’s going to happen and it’s happening already,” said Jain. “We need to understand its potential advantages and pitfalls. People have discovered its utility.”
Good bedside manners for bots
Emory’s study isn’t the only one confirming the relative accuracy of chatbots. Scientists at Google wrote in Nature in early July that Med-PaLM, an AI chatbot the company built for medical use, “compares favorably with answers from clinicians.”
AI might also have a better bedside manner. In a study published in April by researchers at UC San Diego and other institutions, health care professionals rated ChatGPT’s answers as more empathetic than those written by human doctors.
Companies are exploring how chatbots could be used for mental health therapy, and some investors believe that healthy people will enjoy chatting and even bonding with an AI “friend.”
One of the most advanced AI companions is Replika, which is marketed as “The AI companion who cares. Always here to listen and talk. Always by your side.”