EXPLAINABLE AI IN HEALTHCARE: BRIDGING THE GAP BETWEEN MODEL ACCURACY AND INTERPRETABILITY

Authors

  • Ayanlowo Emmanuel A.
    Babcock University, Ilishan-Remo, Ogun State, Nigeria.
  • Olawale Olalekan Onalaja
    Department of Computer Science, Ogun State Polytechnic of Health and Allied Science, Ijebu, Ogun State, Nigeria.
  • Obadina O. Gabiel
    Department of Statistics, Olabisi Onabanjo University, Ago-Iwoye, Ogun State, Nigeria.

Keywords

Explainable AI, SHAP, LIME, Grad-CAM, Clinical Decision Support

Abstract

Artificial intelligence (AI) is transforming healthcare by enabling highly accurate diagnostics, personalised treatment planning, and efficient clinical operations. Yet the opacity of advanced machine-learning models remains a barrier to trust and widespread adoption. This paper provides a structured review of explainable AI (XAI) techniques that reconcile predictive strength with interpretability. We examine model-agnostic methods, including Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), and model-specific approaches such as attention mechanisms and Gradient-weighted Class Activation Mapping (Grad-CAM). Empirical evidence drawn from recent clinical studies demonstrates that XAI significantly enhances decision-making. In radiology, Grad-CAM visualisations increased clinician confidence by 30%, while SHAP explanations in electronic health record diagnostics improved trust by 25%. Large-scale chest X-ray experiments (10,000 images) showed that SHAP and LIME maintained high predictive accuracy of 90% and 89%, respectively, compared with 92% for a baseline deep neural network, while providing markedly higher interpretability scores. Patient-centred trials further revealed a 25% improvement in diabetes treatment adherence when AI recommendations were accompanied by high-quality explanations, with compliance rising 5% for every one-point increase in explanation quality. These results confirm that XAI can strengthen clinician trust and patient engagement with only minimal loss in accuracy. Remaining challenges include computational cost, absence of standardised interpretability metrics, and evolving regulatory requirements. We recommend the development of hybrid models with intrinsic interpretability, co-designed evaluation frameworks, and educational initiatives to prepare clinicians and patients to act on XAI outputs.
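To illustrate how a model-agnostic explainer of the kind reviewed here is typically applied, the sketch below uses the open-source shap library on a synthetic EHR-style risk classifier. The features, data, and model are illustrative placeholders, not those used in the studies cited in this paper.

# Minimal sketch: per-patient SHAP explanations for a tabular clinical model.
# All feature names and data below are synthetic and purely illustrative.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "age": rng.integers(20, 90, 500),
    "hba1c": rng.normal(6.5, 1.2, 500),
    "systolic_bp": rng.normal(130, 15, 500),
    "bmi": rng.normal(28, 5, 500),
})
# Synthetic outcome loosely driven by age and HbA1c, plus noise.
y = (0.03 * X["age"] + 0.8 * X["hba1c"] + rng.normal(0, 1, 500) > 8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# TreeExplainer assigns each feature a Shapley contribution per prediction,
# so a clinician can see which variables pushed an individual's risk up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)  # global summary of feature influence

The same additive-attribution idea underlies the clinical examples in the abstract: explanations are attached to individual predictions rather than replacing the underlying model, which is why the reported accuracy cost is small.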

References

Amann, J., Blasimme, A., Vayena, E., Frey, D., & Madai, V. I. (2020). Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Medical Informatics and Decision Making, 20(1), 310.

Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.

Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., & Elhadad, N. (2015). Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1721–1730.

Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.

Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation”. AI Magazine, 38(3), 50–57.

Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Müller, H. (2019). Causability and explainability of artificial intelligence in medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 9(4), e1312.

Huang, S., Cai, N., Pacheco, P. P., Narrandes, S., Wang, Y., & Xu, W. (2019). Applications of deep learning in precision oncology. Journal of Hematology & Oncology, 12(1), 101.

Kim, B., Park, J., & Lee, S. (2023). Explainable AI in radiology: Enhancing clinician trust with visual explanations. Journal of Medical Imaging, 10(2), 024501.

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.

Lee, C., Kim, H., & Park, Y. (2024). Trust in AI-driven EHR diagnostics: The role of explainability. Health Informatics Journal, 30(1), 145–156.

Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30, 4765–4774.

Molnar, C. (2020). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Leanpub.

Patel, R., Sharma, A., & Gupta, S. (2024). Impact of explainable AI on patient compliance in diabetes management. Journal of Clinical Medicine, 13(3), 789.

Rajpurkar, P., Irvin, J., Zhu, K., Yang, B., Mehta, H., Duan, T., … & Ng, A. Y. (2017). CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv preprint arXiv:1711.05225.

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?” Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144.

Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215.

Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization. Proceedings of the IEEE International Conference on Computer Vision, 618–626.

Wiens, J., Saria, S., Sendak, M., Ghassemi, M., Liu, V. X., Doshi-Velez, F., … & Goldenberg, A. (2019). Do no harm: a roadmap for responsible machine learning for health care. Nature Medicine, 25(9), 1337–1340.

Zhang, L., Wang, Q., & Chen, X. (2024). Performance evaluation of explainable AI methods in medical imaging. Medical Image Analysis, 92, 103045.

Published

10-11-2025

How to Cite

Emmanuel A., A., Onalaja, O. O., & O. Gabiel, O. (2025). EXPLAINABLE AI IN HEALTHCARE: BRIDGING THE GAP BETWEEN MODEL ACCURACY AND INTERPRETABILITY. FUDMA JOURNAL OF SCIENCES, 9(11), 461–465. https://doi.org/10.33003/fjs-2025-0911-4213