Abstract
The rising global mortality rate of women due to breast cancer highlights the urgent need for advancements in its diagnosis and early detection. Early identification of breast cancer significantly improves patient prognosis and survival outcomes. Artificial intelligence (AI), particularly Deep Learning (DL) and Large Language Models (LLMs), shows transformative potential for enhancing diagnostic and prognostic capabilities in breast cancer detection. However, clinical adoption remains limited by the "black-box" nature of these models. For intelligent systems in healthcare, understanding the reasoning behind AI decisions is as critical as ensuring their performance and accuracy, as well as patient safety and trust. Explainable AI (XAI) addresses this challenge by making AI reasoning transparent, allowing clinicians to interpret, validate, and trust model outputs. This paper reviews the application of XAI methods such as SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and Gradient-weighted Class Activation Mapping (Grad-CAM) in improving the transparency of DL models for breast cancer detection. It also explores advanced XAI strategies that balance accuracy with interpretability, including attention-based mechanisms and LLM-driven explanations. In particular, we discuss how LLMs embedded within XAI systems act as translational interfaces, decoding complex model outputs into clinician-friendly explanations. By adapting technical explanations to the end user's context and needs, LLMs enhance the accessibility and interpretability of complex model explanations. Collectively, these approaches help bridge the gap between AI behavior and human understanding, ultimately improving transparency, trust, and decision support in the healthcare domain.