"The groundwork of all happiness is health." - Leigh Hunt

Is ChatGPT in your doctor's inbox?

May 3, 2023 – What happens if a chatbot sneaks into your doctor's direct messages? Depending on who you ask, it could improve outcomes. On the other hand, it could also raise some red flags.

The consequences of the COVID-19 pandemic are far-reaching, especially the frustration of not being able to reach a physician for an appointment, let alone get answers to health questions. And with the arrival of telemedicine and a major increase in electronic patient messaging over the past three years, inboxes are filling up quickly while physician burnout is on the rise.

The old adage that timing is everything holds true, especially since technological advances in the field of artificial intelligence (AI) have gained rapid momentum over the past 12 months. The solution to overflowing inboxes and delayed responses may lie in AI-powered ChatGPT, which has been shown to significantly improve the quality and tone of responses to patient questions, according to the results of a study published in JAMA Internal Medicine.

“There are millions of people out there who aren't getting answers to their questions, so they post their questions on public social media forums like Reddit's AskDocs and hope that eventually, somewhere, an anonymous doctor will respond and give them the advice they're looking for,” said Dr. John Ayers, lead study author and a computational epidemiologist at the Qualcomm Institute at the University of California, San Diego.

“AI-powered messaging means doctors can spend less time worrying about verb conjugation and more time focusing on medicine,” he said.

r/AskDocs vs. Ask Your Doctor

Ayers is referring to the Reddit subforum r/AskDocs, a platform that provides patients with answers to their most pressing medical and health questions under guaranteed anonymity. The forum has 450,000 members, and at least 1,500 are actively online at any given time.

For the study, he and his colleagues randomly selected 195 Reddit exchanges (each consisting of an individual patient question and a doctor's response) from last October's forums, then fed each full-text question into a fresh chatbot session (meaning it was free of previous questions that could skew the results). The question, the doctor's response, and the chatbot's response were then stripped of any information that might indicate who (or what) was answering—and subsequently reviewed by a team of three licensed healthcare professionals.
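To see what a “fresh chatbot session” means in practice, here is a minimal sketch of how such a pipeline might be wired up, assuming the legacy OpenAI Python SDK; the model name, the questions list, and the API key are placeholder assumptions, not details taken from the study.

```python
# Hypothetical sketch: generate one chatbot answer per patient question,
# starting a brand-new session (no shared chat history) for each one.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

def answer_in_fresh_session(question: str) -> str:
    # A new messages list on every call means no earlier question can
    # bleed into this response -- the "fresh session" described above.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model choice
        messages=[{"role": "user", "content": question}],
    )
    return response["choices"][0]["message"]["content"]

# Each answer is generated independently and would then be blinded
# before reviewers ever see it.
questions = ["..."]  # the 195 Reddit questions (not reproduced here)
chatbot_answers = [answer_in_fresh_session(q) for q in questions]
```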

“Our initial study shows surprising results,” Ayers said, citing findings that showed healthcare professionals preferred chatbot-generated responses to doctor-generated answers by a ratio of 4:1.

The reasons for the preference were simple: better quantity, quality, and empathy. Not only were the chatbot's responses significantly longer (211 words versus 52 words on average) than the doctors', but the proportion of doctors' responses rated “less than acceptable” was more than ten times higher than the chatbot's (which were mostly rated “better than good”). And compared with the doctors' responses, the chatbot's responses were rated significantly higher on bedside manner, resulting in a 9.8 times higher prevalence of “empathetic” or “very empathetic” ratings.
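For readers curious how a figure like “9.8 times higher prevalence” is derived, the arithmetic is just a ratio of two proportions; the sketch below uses made-up counts chosen only so the output matches the reported figure, not the study's actual data.

```python
# Hypothetical sketch: a prevalence ratio compares how often a rating
# occurs in one group versus another. All counts here are placeholders.

def prevalence(count: int, total: int) -> float:
    return count / total

# Placeholder tallies of "empathetic or very empathetic" ratings.
chatbot_prev = prevalence(88, 195)
physician_prev = prevalence(9, 195)

print(f"Prevalence ratio: {chatbot_prev / physician_prev:.1f}x")  # 9.8x
```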

A world filled with possibilities

The last decade has shown that there’s a world of possibilities for AI applications, from the creation of everyday virtual assistants (like Apple’s Siri or Amazon’s Alexa) to correcting inaccuracies in the history of past civilizations.

In healthcare, AI and machine-learning models are being integrated into diagnostics and data evaluation, for instance to speed up the analysis of X-ray, computed tomography, and magnetic resonance imaging data, or to help researchers and clinicians combine and evaluate vast amounts of genetic and other data to learn more about the connections between diseases and to fuel discovery.

“The reason this topic is relevant now is that the release of ChatGPT has finally made AI accessible to millions of doctors,” said Dr. Bertalan Meskó, director of the Medical Futurist Institute. “What we need now is not better technologies, but preparing healthcare professionals to use such technologies.”

Meskó believes that AI plays a crucial role in automating data-based or repetitive tasks. He notes that “any technology that improves the doctor-patient relationship has its place in healthcare.” He also emphasizes the need for “AI-based solutions that improve the doctor-patient relationship by giving doctors and patients more time and attention to devote to each other.”

The “how” of integration will be crucial.

“I think AI definitely offers opportunities to alleviate the problem of physician burnout and give them more time to spend with their patients,” said Kelly Michelson, M.D., MPH, director of the Center for Bioethics and Medical Humanities at Northwestern University Feinberg School of Medicine and attending physician at Ann & Robert H. Lurie Children's Hospital of Chicago. “But there are many subtle nuances that physicians need to consider when interacting with patients that, at least currently, cannot be translated by algorithms and AI.”

Michelson argued that AI should only be a complement at this stage.

“We need to think carefully about how we integrate it, and not just adopt it for any one task, including responding to messages, before it's better tested,” she said.

Ayers agreed.

“It's actually just a phase zero study. And it shows that we should now move on to patient-centered trials with these technologies and not just arbitrarily flip the switch.”

The patient paradigm

When it comes to the patient side of ChatGPT messaging, several questions come to mind, including what it means for their relationships with their healthcare providers.

“Patients want the ease of use of Google, but they also want the reassurance that only their own provider can offer in answering their questions,” said Dr. Annette Ticoras, a board-certified patient advocate who works in the Columbus, Ohio, area.

“The goal is to ensure that doctors and patients share the highest quality information. The messages sent to patients are only as good as the data used to answer them,” she said.

This is particularly true with regard to bias.

“AI tends to be generated from existing data, so if the existing data contains biases, those biases will be carried over into the results developed by the AI,” Michelson said, referring to a concept known as the “black box.”

“The problem with more complex AI is that we often can't tell what causes it to make a particular decision,” she said. “You can't always figure out whether that decision is based on existing imbalances in the data or some other underlying problem.”
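To make the inherited-bias point concrete, here is a deliberately oversimplified sketch, with entirely made-up data, of how a skew in training records flows straight into a model's output.

```python
# Hypothetical sketch: a trivially simple "model" that recommends the
# majority outcome seen in its training data. All records are made up.
from collections import Counter

# Imbalanced training records: (patient_group, recorded_outcome).
training = [("A", "treat")] * 90 + [("B", "no_treat")] * 10

# The "model" just memorizes the most common outcome overall.
majority = Counter(outcome for _, outcome in training).most_common(1)[0][0]

# It now recommends "treat" for every patient, including group B,
# purely because group A dominated the data, not for clinical reasons.
print(f"Recommendation for any patient: {majority}")
```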

Nevertheless, Michelson is hopeful.

“We must be strong advocates for patients and ensure that whenever and however AI is integrated into healthcare, we do so in a thoughtful, evidence-based way that does not diminish the essential human component of medicine,” she said.