EXPLAINABLE AI IN HEALTHCARE: BRIDGING THE GAP BETWEEN MODEL ACCURACY AND INTERPRETABILITY
Keywords:
Explainable AI, SHAP, LIME, Grad-CAM, Clinical Decision Support

Abstract
Artificial intelligence (AI) is transforming healthcare by enabling highly accurate diagnostics, personalised treatment planning, and efficient clinical operations. Yet the opacity of advanced machine-learning models remains a barrier to trust and widespread adoption. This paper provides a structured review of explainable AI (XAI) techniques that reconcile predictive strength with interpretability. We examine model-agnostic methods, including Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), and model-specific approaches such as attention mechanisms and Gradient-weighted Class Activation Mapping (Grad-CAM). Empirical evidence drawn from recent clinical studies demonstrates that XAI significantly enhances decision-making. In radiology, Grad-CAM visualisations increased clinician confidence by 30%, while SHAP explanations in electronic health record diagnostics improved trust by 25%. Large-scale chest X-ray experiments (10,000 images) showed that SHAP and LIME maintained high predictive accuracy of 90% and 89%, respectively, compared with 92% for a baseline deep neural network, while providing markedly higher interpretability scores. Patient-centred trials further revealed a 25% improvement in diabetes treatment adherence when AI recommendations were accompanied by high-quality explanations, with compliance rising 5% for every one-point increase in explanation quality. These results confirm that XAI can strengthen clinician trust and patient engagement with only minimal loss in accuracy. Remaining challenges include computational cost, the absence of standardised interpretability metrics, and evolving regulatory requirements. We recommend the development of hybrid models with intrinsic interpretability, co-designed evaluation frameworks, and educational initiatives to prepare clinicians and patients to act on XAI outputs.
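For illustration, the sketch below shows how per-patient SHAP attributions of the kind reviewed in the paper might be generated for a tabular EHR-style classifier. It is a minimal sketch, not code from the study: the feature names, synthetic data, and random-forest model are assumptions made purely for demonstration.

```python
# Minimal, hypothetical sketch (not from the paper): SHAP attributions for a
# tabular EHR-style classifier. Feature names, data, and model are assumed.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["age", "bmi", "hba1c", "systolic_bp"]  # hypothetical EHR features

# Synthetic cohort: outcome driven mainly by HbA1c and BMI
X = rng.normal(size=(500, 4))
y = (X[:, 2] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])

# Older shap versions return a list [class 0, class 1]; newer ones return a
# single array with a trailing class dimension. Handle both for class 1.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Per-feature contribution to the positive-class prediction for one patient
print(dict(zip(feature_names, np.round(vals[0], 3))))
```

In a clinical decision-support setting, such per-feature contributions are what would be surfaced alongside the model's prediction so a clinician can see which inputs drove it.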
Copyright (c) 2025 Ayanlowo Emmanuel A., Olawale Olalekan Onalaja, Obadina O. Gabiel

This work is licensed under a Creative Commons Attribution 4.0 International License.