Abstract [eng]
Semantic image segmentation is among the most fine-grained tasks in computer vision, assigning a class label to every pixel. Its applications range from autonomous vehicles to medical imaging. Despite its deployment in such safety-critical domains, interpretable image segmentation remains an underexplored field, especially when compared to explainable AI (XAI) solutions for classification and object detection. Even less attention has been paid to the use of XAI in scenarios unrelated to explainability, where XAI methods are applied not for interpretability per se but for other instrumental purposes, such as improving a model's performance. Such use cases can potentially extend to AI safety, in particular robustness against adversarial attacks, as well as to self-supervised learning, neural architecture search (NAS), and continual learning (CL). Most of these areas have never been investigated in the context of interpretable segmentation. This work outlines key developments in the field of interpretable image segmentation, with a particular focus on XAI-driven model improvements. We also consider potential uses of interpretable image segmentation for model compression in the context of NAS, and for instance-based memory compression in the context of CL.