ABSTRACT
The rapid integration of artificial intelligence (AI) into clinical practice prompts a critical re-examination of the roles of physicians and how we educate them. While AI promises unparalleled gains in accuracy and speed, as well as better management decisions and health outcomes, doctors must be skilled in harnessing these new tools effectively and wisely to improve patient care. We build on this with a call for medical education to go beyond simply improving doctors' AI literacy, to encompass a comprehensive reform of medical education. This reform would aim to expand physician capabilities from the traditional cognitive knowledge of medicine to the seamless integration of AI competencies, with a renewed focus on the humanistic aspects of medicine. We propose the Humanistic Medicine - AI-Enabled Education (HuMe-AiNE) framework, which comprises 4 key components: (1) standardisation and individualisation of AI competencies; (2) integration of AI tools throughout the curriculum; (3) fostering critical thinking skills in integrating technological solutions with a humanistic approach to patient care; and (4) developing a professional identity that encompasses both technological and humanistic capabilities. The AI revolution provides an opportunity to advance medical education—to train doctors to be both tech-enabled physicians and true humanists.
We stand at the precipice of a new era in healthcare: the integration of artificial intelligence (AI) into clinical practice is progressing at an unprecedented pace. From AI algorithms detecting tumours with remarkable accuracy to predictive models forecasting patient outcomes, these technological marvels are not only changing how we practise medicine; they are redefining it. A landmark study by McKinney et al. demonstrated that an AI system can outperform human radiologists in breast cancer screening, reducing false positives by 5.7% and false negatives by 9.4%.1 This transformation echoes the seismic shifts that led to the Flexner Report and the Lancet Commission, once again forcing us to re-examine the role of physicians and how we educate them.
The impact of AI on healthcare is multifaceted. At the individual level, AI promises to improve the speed and accuracy of diagnosis, recommend optimal treatment courses and prognosticate outcomes. Rajkomar et al. showed that deep learning models can predict in-hospital mortality, 30-day unplanned readmission, prolonged length of stay and final discharge diagnoses with high accuracy.2 On a broader scale, it has the potential to enhance the efficiency and accessibility of healthcare services, and even influence population health by modifying behaviours. For instance, AI-powered chatbots have shown promise in delivering cognitive behavioural therapy for mental health conditions, potentially expanding access to care.3
These advancements compel reflection on how best to use them in medicine. Many voices in the medical community have called for a focus on AI literacy, that is, the ability to use these new tools effectively and to understand their limitations. This community envisions the doctor of tomorrow as a “tech-enabled physician” who is skilled in harnessing the power of AI to improve patient care. A scoping review of undergraduate medical education found that most AI curriculum objectives centre on the conceptual foundations needed to work with and manage AI systems, the ethical and legal implications of AI-dependent systems, and the critical appraisal of AI systems.9 However, the review also found no clear consensus on how to deliver such an AI curriculum. At the postgraduate level, a separate review found that AI-specific educational material was available predominantly in certain fields such as radiology, ophthalmology and cardiology.4 While there has been a burgeoning increase in use cases of AI in medical education in recent years, calls advocating for frameworks that incorporate the technical and ethical aspects of AI into medical education have also become more fervent.5 The time is ripe, then, to re-examine how best to address the following gaps in medical education: the lack of clarity and consistency about the types of AI skills and knowledge needed at both undergraduate and postgraduate levels; the current siloed approach of teaching AI competencies separately from clinical skills and knowledge; and the lack of understanding about the relationship between AI and the human elements of patient care. All these gaps are undoubtedly important to address, even if they are still only part of the picture.
Verghese et al. remind us that medicine is more than just diagnosing ailments and prescribing treatments; it is about understanding the human experience of illness and connecting with patients on a deeply personal level. In their seminal paper “What This Computer Needs Is a Physician: Humanism and Artificial Intelligence”, Verghese et al. argued that the rise of AI necessitates a renewed focus on the human aspects of care.6 Topol suggests that instead of dehumanising medicine, AI can actually free physicians to focus on these crucial humanistic aspects of care.7 In his book “Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again”, Topol envisions a future where AI handles time-consuming data analysis, allowing doctors to devote more energy to empathetic communication and to building meaningful relationships with their patients.7
A recent review of supportive and palliative care in oncology patients found that machine learning models helped oncologists predict clinical outcomes such as mortality and complications more effectively. This enables the prioritisation of patients requiring complex decision-making, serious illness communication and advance care planning, tasks that require both clinical knowledge and relational skills such as empathy, trust and compassion.8 In psychiatry, the documentation of medical records, synthesis of information for better diagnosis, treatment personalisation and prediction of treatment response have been suggested as tasks where AI can play a role, enabling psychiatrists to focus on building therapeutic relationships with their patients.9
Empathy, compassion and trust are foundational values for a patient-centred model of care, and broadly recognised as fundamental to good healthcare practice. However, these are not simply feel-good ideas—relational models of care have been shown to improve patient outcomes and boost the well-being of healthcare practitioners themselves. A meta-analysis by Kelley et al. found that empathic and patient-centred communication is associated with better adherence to treatment, improved patient satisfaction and better clinical outcomes.10
A review outlining recommendations for the UK’s National Health Service on the use of technology for patient care noted the limitations of AI in areas such as building trust and delivering care with empathy and compassion.11 However, it also noted that the introduction of such technologies bestows on the clinician “the gift of time…[which] will bring a new emphasis on the nurturing of the precious inter-human bond based on trust, clinical presence, empathy and communication.” In the examples above, the clinician is not just a tech-enabled physician—employing AI for more efficient and effective diagnosis and treatment prediction—but also a true humanist, delivering care that acknowledges and respects the patient’s values and choices.
So, how do we get there? Just as the Flexner Report and the Lancet Commission charted new courses for medical education in their time, we now need a reimagining of how we train the doctors of tomorrow. This calls for a comprehensive reform of medical education—one that expands physicians’ capabilities and cognitive knowledge, and integrates AI competencies seamlessly with a renewed focus on the humanistic aspects of medicine.
We propose a framework, which we term HuMe-AiNE (Humanistic Medicine – AI-Enabled Education). This approach aims to create a new paradigm of medical education that prepares future physicians and upskills current physicians to harness the full potential of AI while maintaining the core humanistic values that define our profession.
Key components of this framework include the following.

(1) Standardisation and individualisation of AI-related competencies. We need to establish a core set of AI competencies that all medical graduates should possess, while also allowing for specialisation based on individual interests and career paths. For all learners, these competencies could include a base level of understanding of areas such as machine learning algorithms, data science, the concepts underlying AI, and the critical appraisal of AI-generated outputs. For a select few, application, analysis and synthesis of these domains may be needed. Given the fast-moving nature of AI advancements, frequent reviews of AI-related competencies will be required.

(2) Integration of AI tools throughout the curriculum. AI is already influencing diagnosis and treatment decisions in healthcare, and learners need to be taught how to use AI tools effectively. Rather than treating AI as a separate subject, it should be woven into all aspects of medical education, from undergraduate medical education through to residency, where appropriate. For example, when learning about diagnostic imaging, learners should become familiar with both traditional interpretation methods and AI-assisted diagnostic tools. Gordon et al. outlined specific suggestions for the use of AI tools in medical education, comprising AI-assisted tutoring systems and learner assessments, robot-assisted surgery simulations, chatbots to assist with clinical management and patient communications, and enhanced anatomy education.5

(3) Fostering habits of critical inquiry together with the humanistic ability to deal with uncertainty. Complexity and ambiguity are hallmarks of medicine, and while AI may provide some assistance, doctors will need critical thinking skills to apply technological solutions in empathic and ethical ways, integrating them with socially constructed knowledge of the patient’s illness experience in the gestalt of medical expertise. Critical thinking, perspective taking, judgement and reasoning, together with the ability to cope with uncertainty, may be fostered through case-based scenarios, observation in clinical settings coupled with reflective practice, and discussions in multidisciplinary settings. These skills will become even more critical in preparing future and current physicians to grapple with the ethical challenges of AI in clinical decision-making.

(4) Reformulating professional identity formation in the AI era. As the role of physicians evolves, so must their professional identity. We need to help learners develop a sense of professional self that embraces technology while maintaining a strong commitment to delivering patient-centred and empathetic care, often in multidisciplinary teams. Physicians already in practice will need to renegotiate their own professional identity as AI makes ever-increasing inroads into their professional spaces. The medical community will thus need role models and exemplars of physicians who successfully adapt their sense of professional self in line with the evolving technological landscape.
Implementing this framework poses significant challenges. We will need to rethink everything from medical school admission criteria to the assessment methods used in residency training and continuing professional development. Wartman and Combs suggest that medical school admissions should consider not only academic achievement but also emotional intelligence and adaptability to change.12 We will have to upskill existing faculty, many of whom would not have been trained in these new paradigms. A multidisciplinary faculty, with experts in medicine, computer science, ethics and education, will be needed to design curricula that are both technically rigorous and humanistically grounded. Clinician competencies will need to be expanded to address critical issues such as trust, explainability and interpretability of AI systems in clinical practice.
Moreover, we must ensure that the integration of AI into healthcare does not exacerbate existing disparities in access to technology and healthcare. This is especially pertinent in regions such as Southeast Asia, where access to technology is often limited. While Singapore can lead the way in making AI-assisted healthcare technologies available and accessible to lower-income countries in the region, physicians must also be aware of the algorithmic biases inherent in AI. Obermeyer et al. found that a widely used algorithm for guiding health decisions exhibited significant racial bias, demonstrating the potential for AI to perpetuate or even amplify existing inequalities.13 Several papers have contributed important insights into the ethical challenges that must be addressed when incorporating AI into healthcare and medical education,5,14,15 including the need for data privacy and security regulation, automation bias and skill preservation, and transparency and informed consent. They also acknowledge the difficulty of equipping learners to grapple with these issues at a time when society has neither fully understood the ethical implications of AI nor resolved how it will respond to them. At least 1 paper has called for institutions to respond proactively by setting up Education Ethics Boards, similar in authority to Research Ethics Boards, to focus on how AI is used in medical education.14 This would also enable institutions to ensure that AI-integrated medical education is carried out in accordance with legal frameworks such as Singapore’s Personal Data Protection Act and those of regional bodies of governance. Given the far-reaching and transformative role of AI, our curriculum must emphasise the societal implications of AI in healthcare and equip future physicians to use these tools judiciously and ethically.
Programme evaluation is critical to document and identify the impact of integrating AI into medical education—the desired and undesired effects, as well as the unexpected ones. This remains a challenging area due to the fast-moving nature of AI-related initiatives, the intersection of AI with ethical challenges, and the implications of AI for society and for healthcare relationships and practices. While the literature on AI-related educational initiatives is abundant, evaluation of the educational outcomes of these programmes has so far remained limited.4 Most evaluations have described either positive immediate outcomes, in which participants were satisfied overall with the AI content learned, or short-term outcomes, in which learners demonstrated the acquisition of a variety of AI-related competencies and skills. Under the Kirkpatrick model of training evaluation, these would be categorised as Level 1 (reaction: learner satisfaction) and Level 2 (learning: skills acquisition) outcomes.16 There is a critical need to invest in comprehensive evaluations of the long-term impacts of AI—on both the overall medical education landscape and specific learning outcomes. Such evaluations would focus on outcomes categorised under the Kirkpatrick model as Level 3 (behaviour: the degree to which learners apply what they have learned when back at work) and Level 4 (results: the degree to which organisational practices and/or patient outcomes are changed).
The way forward is not without obstacles, but the potential rewards are immense. By embracing AI as a tool for enhancing rather than replacing human connection, we can create a healthcare system that is both technologically advanced and deeply compassionate. Lin et al. found that AI-assisted diagnosis, when combined with physician expertise, led to better outcomes than either AI or physicians alone,17 thus highlighting the synergistic potential of human-AI collaboration.
As we navigate this AI revolution, let us not lose sight of Peabody’s timeless wisdom: “the secret of the care of the patient is in caring for the patient.”18 Our challenge—and our opportunity—is to use AI not as a replacement for human care but as a tool to enhance it. The future of medicine lies not in choosing between technology and humanity, but in skilfully blending both. It is time for medical education to lead the way in nurturing a new generation of physicians who are as adept with algorithms as they are with empathy.
CONCLUSION
The AI revolution in healthcare presents both unprecedented challenges and extraordinary opportunities for medical education. By reimagining our approach to training and upskilling physicians, we can ensure that the doctors of today and tomorrow are equipped not only with cutting-edge technical skills but also with the timeless human qualities that lie at the heart of good medicine. As we embark on this journey of educational transformation, let us be guided by a vision of healthcare that harnesses the power of technology to amplify, rather than diminish, our humanity.
REFERENCES
1. McKinney SM, Sieniek M, Godbole V, et al. International evaluation of an AI system for breast cancer screening. Nature 2020;577:89-94.
2. Rajkomar A, Oren E, Chen K, et al. Scalable and accurate deep learning with electronic health records. NPJ Digit Med 2018;1:18.
3. Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Ment Health 2017;4:e19.
4. Tolentino R, Baradaran A, Gore G, et al. Curriculum frameworks and educational programs in AI for medical students, residents and practicing physicians: scoping review. JMIR Med Educ 2024;10:e54793.
5. Gordon M, Daniel M, Ajiboye A, et al. A scoping review of artificial intelligence in medical education: BEME Guide No. 84. Med Teach 2024;46:446-70.
6. Verghese A, Shah NH, Harrington RA. What this computer needs is a physician: Humanism and artificial intelligence. JAMA 2018;319:19-20.
7. Topol E. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. New York: Basic Books; 2019.
8. Reddy V, Nafees A, Raman S, et al. Recent advances in artificial intelligence applications for supportive and palliative care in cancer patients. Curr Opin Support Palliat Care 2023;17:125-34.
9. Lee J, Wu AS, Li D, et al. Artificial intelligence in undergraduate medical education: A scoping review. Acad Med 2021;96:S62-70.
10. Kelley JM, Kraft-Todd G, Schapira L, et al. The influence of the patient-clinician relationship on healthcare outcomes: A systematic review and meta-analysis of randomized controlled trials. PLoS One 2014;9:e94207.
11. Topol E. The Topol Review: Preparing the healthcare workforce to deliver the digital future. London: National Health Service; 2019.
12. Wartman SA, Combs CD. Medical education must move from the information age to the age of artificial intelligence. Acad Med 2018;93:1107-9.
13. Obermeyer Z, Powers B, Vogeli C, et al. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019;366:447-53.
14. Masters K. Ethical use of artificial intelligence in health professions education: AMEE Guide No. 158. Med Teach 2023;45:574-84.
15. Weidener L, Fischer M. Teaching AI ethics in medical education: a scoping review of current literature and practices. Perspect Med Educ 2023;12:399-410.
16. Kirkpatrick DL, Kirkpatrick JD. Evaluating Training Programs: The Four Levels. Oakland: Berrett-Koehler Publishers; 2006.
17. Lin H, Li R, Liu Z, et al. Diagnostic efficacy and therapeutic decision-making capacity of an artificial intelligence platform for childhood cataracts in eye clinics: A multicentre randomized controlled trial. EClinicalMedicine 2019;9:52-9.
18. Peabody FW. The care of the patient. JAMA 1927;88:877-82.
The author declares there are no affiliations with or involvement in any organisation or entity with any financial interest in the subject matter or materials discussed in this manuscript.
Dr Michelle Jong, Group Clinical Education, National Healthcare Group, Annex @ National Skin Centre, 1 Mandalay Road, Level 3, Singapore 308205. Email: [email protected]