Keywords [eng]
Interpretable methods, deep learning, digital histopathology, medical imaging, GradCAM, Integrated Gradients, LIME, NoiseTunnel.
Abstract [eng] |
Digital histopathology enables the investigation of whole slide images (WSI) in a digital environment and is currently employed in medical trials, research, telemedicine, and education. When combined with deep learning models, this technology has the potential to significantly advance diagnostic processes, leading to enhanced diagnostic precision and more accurate treatment plans. Despite its promise, the adoption of this methodology in real medical practice remains limited. Deep neural networks face several challenges, one of the most critical being the lack of model interpretability, the so-called "black box" problem. Typically, deep learning models do not provide insights into how their decisions are made, raising concerns about transparency, correctness, and potential biases. This research investigates interpretation methods to increase the transparency of convolutional neural networks for tissue classification in histopathology images. Four interpretation methods were examined: Integrated Gradients, Integrated Gradients with Noise Tunnel, LIME, and GradCAM. The obtained results were quantitatively evaluated. GradCAM outperformed the other methods, producing the most detailed and highest-quality interpretation maps. Experimental analysis further showed that interpretation map quality is not influenced by the classification outcome.