Abstract
Explainable AI (XAI) has become increasingly important in computer vision applications. While substantial progress has been made in explainable image classification, XAI for semantic segmentation remains underexplored despite its critical role in healthcare, autonomous systems, and other high-stakes domains. Given the widespread use of image segmentation, a systematic investigation of its explainability is needed. This dissertation bridges this gap by focusing on post-hoc interpretability for semantic segmentation and on adversarial attack scenarios. It proposes and investigates three extensions of explainability methods: occlusion-based, activation perturbation-based, and gradient-based approaches, all specifically designed for segmentation tasks. These methods are assessed with respect to their trade-offs between explanation noisiness and computational efficiency. Post-hoc techniques are further evaluated under adversarial attacks, demonstrating that semantic segmentation explainability techniques can be successfully attacked to produce arbitrary explanations. Key contributions also include the first survey of explainability techniques that is not limited to a particular type of explainability method or application domain, a comprehensive taxonomy of XAI methods in segmentation, and insights into the broader implications of explainability in high-stakes applications.