Vol. 52 No. 7, 374–377 | 28 July 2023

Leveraging ChatGPT to aid patient education on coronary angiogram

ABSTRACT

Natural-language artificial intelligence (AI) is a promising technological advancement poised to revolutionise the delivery of healthcare. We aimed to explore the quality of ChatGPT in providing medical information on a common cardiology procedure, the coronary angiogram, and to evaluate the broader opportunities and challenges of patient education through this natural-language AI model. In a conversational manner, we asked ChatGPT common questions about undergoing a coronary angiogram across the areas of: description of the procedure, indications, contraindications, complications, alternatives, and follow-up. The strengths of the answers given by ChatGPT were that they were generally presented in a comprehensive and systematic fashion, covering most of the major information fields required. However, there were certain deficiencies in its responses: occasional factual inaccuracies, significant omissions, inaccurate assumptions, and a lack of flexibility in recommendations beyond the line of questioning, so that answers remained focused only on the topic asked. Given the accessibility and perceived reliability of these platforms, we expect an increasing number of patients to seek information about their health through them. Consequently, it is prudent for healthcare professionals to be cognisant of both the strengths and deficiencies of such models. While these models appear to be good adjuncts for patients to obtain information, they cannot replace the role of a healthcare provider in delivering personalised health advice and management.


Natural-language artificial intelligence (AI) is a promising technological advancement poised to revolutionise the delivery of healthcare.1 Traditionally, technology augmented healthcare communication through chatbots, which are limited to a predetermined set of queries and matched answers.2 Natural-language AI models prompt a paradigm shift, given that they can interpret colloquial inputs and generate new text based on the large datasets on which they were trained. Chat Generative Pre-trained Transformer (ChatGPT) is a recently developed example of an openly accessible natural-language AI conversational platform. While there are many other natural-language models, ChatGPT's free, intuitive, and user-friendly interface has attracted the largest user base and the most analysis of its output. It has not only made an impact outside medicine, with wide-ranging abilities including creative writing, essay writing, prompt writing, code writing, and answering questions,3 but has also made inroads in the medical field.4 Given its accessibility and reported medical proficiency,5 we aimed to explore the quality of ChatGPT in providing medical information on a common cardiology procedure, the coronary angiogram, and to evaluate the broader opportunities and challenges of patient education through this natural-language AI model.

In a conversational manner, we asked ChatGPT (https://openai.com/blog/chatgpt) common questions about undergoing a coronary angiogram across the following areas: description of the procedure, indications, contraindications, complications, alternatives, and follow-up.
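For readers who wish to reproduce this style of querying programmatically rather than through the web interface used in this study, a minimal sketch using the openai Python package (chat completions API) is shown below. The model name and prompt wording are our own illustrative assumptions, not those used in the study.

# Illustrative sketch only: this study used the public ChatGPT web interface.
# This shows how equivalent queries could be posed programmatically via the
# openai Python package (v1.x chat completions API). The model name and
# prompt wording are assumptions for demonstration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The six information areas examined in this study
AREAS = [
    "a description of the procedure",
    "its indications",
    "its contraindications",
    "its complications",
    "alternatives to the procedure",
    "follow-up after the procedure",
]

for area in AREAS:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model choice
        messages=[{
            "role": "user",
            "content": ("I am a patient scheduled for a coronary angiogram. "
                        f"Please explain {area} in plain language."),
        }],
    )
    print(f"--- {area} ---")
    print(response.choices[0].message.content)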

The types of questions asked and the evaluation of the outputs’ strengths and deficiencies are outlined in Table 1 and Fig. 1, with the exact replies made available in the Supplementary Appendix.

Table 1. Evaluation of outputs by ChatGPT on patient education on coronary angiogram.

Fig. 1. Overview of strengths and deficiencies of ChatGPT in patient education on coronary angiogram.

The strengths of the answers given by ChatGPT were that they were generally presented in a comprehensive and systematic fashion, covering most of the major information fields required. The language used was easy for a layperson to understand, and medical terminology unfamiliar to persons without clinical experience was avoided. Most responses also appropriately concluded that it was important to involve a healthcare professional in discussing the specific circumstances of the individual, and acknowledged the model's own limitations in providing personalised recommendations.

However, there were certain deficiencies in its responses. First, while infrequent, there were some factual inaccuracies. These included inadequate differentiation between antiplatelet and anticoagulant agents among blood thinners and the resultant decision to continue or discontinue them prior to the procedure; certain inaccurate indications for angiography (e.g. family history, prior stroke, monitoring); some inaccurate risks of angiography (e.g. stating that the catheter may dislodge blood clots rather than calcifications); and incorrect contraindications (e.g. severe heart failure). Second, there were some significant omissions. For example, it excluded the important indication of active acute coronary syndromes, for which coronary evaluation via angiography would be recommended. Third, there were some inaccurate assumptions. For example, it suggested that sedation is usually given, when practice in fact varies. Lastly, the model appeared inflexible in recommendations beyond the line of questioning, so that answers remained focused only on the topic asked. For example, the model was unable to consider non-cardiac causes of common clinical presentations such as chest pain and breathlessness when asked about symptoms and the need for coronary angiography.

The shortcomings of a natural-language AI platform like ChatGPT may be due to several issues.

First, given that the models are limited by their data inputs, the latest developments may not be fully captured in the datasets on which the software was trained. Further, because this information is drawn from sources across the internet, inaccurate information may be presented to the reader.

Second, the model required further probing and prompting for information that would normally be given by healthcare providers as part of the counselling process. Such omissions in the initial explanation may result in potentially important information not reaching patients or caregivers, which may subsequently affect their decision-making.
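To make this probing concrete, the following minimal sketch (again an illustrative assumption using the openai Python package, rather than the web interface used in this study) shows how a follow-up prompt, carrying the full conversation history, can elicit information omitted from the initial reply.

# Minimal sketch of multi-turn probing (illustrative assumption; this study
# used the ChatGPT web interface). The full message history is resent with
# each follow-up so the model can address details omitted initially.
from openai import OpenAI

client = OpenAI()
history = [{"role": "user",
            "content": "What are the indications for a coronary angiogram?"}]

def ask(history):
    # Send the running conversation and record the model's reply.
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model choice
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask(history))

# Probe for an omission noted in this study: acute coronary syndromes.
history.append({"role": "user",
                "content": "Is an acute coronary syndrome also an indication?"})
print(ask(history))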

Third, the model's inability to offer recommendations beyond the line of questioning may be due to the conversational nature and scoping of the topic. This may be acceptable if patients only require more knowledge about a particular topic, but it can be counter-productive when lateral thinking is needed, especially for differential diagnoses.

Overall, the performance of ChatGPT was thought-provoking. Given the accessibility and perceived reliability of these platforms, we expect an increasing number of patients to seek information about their health through them. Consequently, it is prudent for healthcare professionals to be cognisant of both the strengths and deficiencies of such models. Harnessing and incorporating these models into our healthcare systems may transform and improve healthcare delivery, bringing potential benefits to patients and physicians alike, particularly as the models continue to improve; newer iterations, including the paid ChatGPT Plus, which operate on ever larger datasets and more parameters, may further improve the accuracy and performance of their outputs.6 Nevertheless, while these models appear to be good adjuncts for patients to obtain information, they cannot replace the role of a healthcare provider in delivering personalised health advice and management.


Correspondence: Dr Samuel Ji Quan Koh and Dr Jonathan Jiunn-Liang Yap, Department of Cardiology, National Heart Centre Singapore, 5 Hospital Dr, Singapore 169609. Email: Dr Samuel Ji Quan Koh [email protected]; Dr Jonathan Jiunn-Liang Yap [email protected]


REFERENCES

1. Baumgartner C. The potential impact of ChatGPT in clinical and translational medicine. Clin Transl Med 2023;13:e1206.
2. Bibault JE, Chaix B, Guillemassé A, et al. A Chatbot Versus Physicians to Provide Information for Patients With Breast Cancer: Blind, Randomized Controlled Noninferiority Trial. J Med Internet Res 2019;21:e15787.
3. Taecharungroj V. “What Can ChatGPT Do?” Analyzing Early Reactions to the Innovative AI Chatbot on Twitter. Big Data Cogn Comput 2023;7:35.
4. Xue VW, Lei P, Cho WC. The potential impact of ChatGPT in clinical and translational medicine. Clin Transl Med 2023;13:e1216.
5. Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digit Health 2023;2:e0000198.
6. Ray PP. ChatGPT: A comprehensive review on background applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems 2023;3:121–54.