AI in patient communication: more empathetic and rated better than doctors

ChatGPT

For some time now, the evidence has been clear: doctors spend too much time on paperwork and bureaucracy, and something needs to change. This raises the question of whether the now-ubiquitous chatbots can somehow help with answering patients’ queries.

Why do we need research?

Lockdowns and contact restrictions at the start of the COVID-19 pandemic greatly accelerated the use of virtual solutions in healthcare. As a result, the volume of electronic messages sent to doctors rose sharply. In practice, each such message adds about 2.5 minutes to the workload per patient (according to research by Ayers et al. of the University of California, San Diego).

Long working hours and additional obligations have taken their toll on the medical profession: 62% of U.S. clinicians reported at least one symptom of burnout during the first two years of the pandemic.

Ayers and his team investigated whether an AI chatbot assistant, ChatGPT, could help respond to patient inquiries. They drew randomly selected patient questions from a public online health forum (AskDocs) that had been posted in October 2022 and answered there by verified doctors. The same questions were then submitted to the chatbot on 22 and 23 December 2022. A panel of medical experts rated both the doctors’ and the chatbot’s responses for quality and empathy on a scale of 1–5, and three independent ratings were averaged for each response.
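To make the rating procedure concrete, here is a minimal illustrative sketch, not the authors’ analysis code: it averages three hypothetical 1–5 ratings per response, which is the aggregation described above (all names and values are invented for illustration).

```python
# Illustrative sketch of the rating aggregation described above.
# The response labels and rating values are hypothetical, not data from the study.
from statistics import mean

# Each response receives three independent ratings on a 1-5 scale
# (for quality or empathy); the final score is their average.
ratings_per_response = {
    "chatbot_response_1": (4, 5, 4),  # hypothetical evaluator ratings
    "doctor_response_1": (2, 3, 2),   # hypothetical evaluator ratings
}

for response_id, triple in ratings_per_response.items():
    print(f"{response_id}: ratings={triple}, averaged score={mean(triple):.2f}")
```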

Results and conclusions

Artificial intelligence (AI) came out ahead in the study:

  • In 78.6% of cases, the evaluators preferred the chatbot’s responses to the doctors’. The doctors’ answers were also significantly shorter on average than the AI’s (52 words versus 211 words).
  • Overall, the quality of the machine-generated responses was rated significantly higher. For example, responses rated as good or very good were 3.6 times more prevalent for the chatbot than for the doctors.
  • In addition, the AI’s responses were judged significantly more empathetic than the doctors’. The share rated empathetic or very empathetic was 45.1% versus 4.6%, which corresponds to a 9.8-fold higher prevalence of empathetic or very empathetic responses for the chatbot (see the sketch after this list).
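The 9.8-fold figure follows directly from the two reported percentages; here is a minimal sketch of that arithmetic, using only the numbers quoted above:

```python
# Prevalence ratio of empathetic or very empathetic responses,
# computed from the percentages reported in the study.
chatbot_share = 45.1  # % of chatbot responses rated empathetic or very empathetic
doctor_share = 4.6    # % of doctor responses rated empathetic or very empathetic

prevalence_ratio = chatbot_share / doctor_share
print(f"Prevalence ratio: {prevalence_ratio:.1f}x")  # prints 9.8x
```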

Based on these results, the researchers conclude that further research on the technology in a clinical setting is essential. They advocate using chatbots in the future to generate draft answers to patient questions, which would serve as templates for doctors to review and edit before sending.

Doctors should learn to integrate these new tools into their daily practice, commented Dr. Teva Brender of the University of California, San Francisco. This approach is not without risks, of course, but he is “cautiously optimistic” that AI will improve the healthcare system, reduce burnout among doctors and, above all, allow them to spend more time with patients rather than at the computer.

Even taking such research into account, it remains unclear how to restore and improve neglected doctor-patient communication in the setting of conventional medicine, given the nonverbal cues and conclusions drawn during a classic in-person consultation (medical history, clinical examination, review of medical records, etc.).