Title LLMs and XAI for breast cancer transparency: a review
Authors Dastgeer, Sobia ; Treigys, Povilas
DOI 10.22364/bjmc.2025.13.2.12
Is Part of Baltic Journal of Modern Computing. Riga : University of Latvia. 2025, vol. 13, iss. 2, p. 528-550. ISSN 2255-8942. eISSN 2255-8950
Keywords [eng] Artificial intelligence ; deep learning ; breast cancer ; healthcare ; explainable AI ; Large Language Models
Abstract [eng] The rising global mortality rate of women due to breast cancer highlights the urgent need for advancements in its diagnosis and early detection. Early identification of breast cancer significantly improves patient prognosis and survival outcomes. Artificial intelligence (AI), particularly Deep Learning (DL) and Large Language Models (LLMs), shows transformative potential in enhancing diagnostic and prognostic capabilities in breast cancer detection. However, their clinical adoption remains limited by their "black-box" nature. In intelligent healthcare systems, understanding the reasoning behind AI decisions is as critical as ensuring their performance and accuracy, as well as patient safety and trust. Explainable AI (XAI) addresses this challenge by making AI reasoning transparent, allowing clinicians to interpret, validate, and trust model outputs. This paper reviews the application of XAI methods such as SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and Gradient-weighted Class Activation Mapping (Grad-CAM) in improving the transparency of DL models for breast cancer detection. It also explores advanced XAI strategies that balance accuracy with interpretability, including attention-based mechanisms and LLM-driven explanations. In particular, we discuss how LLMs embedded within XAI systems act as translational interfaces, decoding complex model outputs into clinician-friendly explanations. By adapting technical explanations to the end user's context and needs, LLMs enhance the accessibility and interpretability of complex model explanations. Collectively, these approaches help bridge the gap between AI behavior and human understanding, ultimately improving transparency, trust, and decision support, especially in the healthcare domain.
Published Riga : University of Latvia
Type Journal article
Language English
Publication date 2025