Recently, there has been growing interest in developing saliency methods that provide visual explanations of network predictions. However, the applicability of existing methods is limited to image classification models. To overcome this limitation, we extend the existing approaches to generate grid saliencies, which provide spatially coherent visual explanations for (pixel-level) dense prediction networks. As the proposed grid saliency allows us to spatially disentangle the object and its context, we specifically explore its potential to produce context explanations for semantic segmentation networks, discovering which context most influences the class predictions inside a target object area. We investigate the effectiveness of grid saliency on a synthetic dataset with an artificially induced bias between objects and their context, as well as on the real-world Cityscapes dataset, using state-of-the-art segmentation networks. Our results show that grid saliency can be successfully used to provide easily interpretable context explanations and, moreover, can be employed for detecting and localizing contextual biases present in the data.
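To make the idea concrete, below is a minimal sketch of a perturbation-based grid saliency, in the spirit of the approach described above but not necessarily the authors' exact formulation: a low-resolution mask is optimized so that keeping only the masked image content preserves the target-class prediction inside a requested object region, while a sparsity penalty keeps the retained area small. All names (`grid_saliency`, `request_mask`, the loss weights) and the specific loss terms are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def grid_saliency(model, x, request_mask, target_class,
                  grid_size=(16, 32), baseline=0.0,
                  lam=0.05, steps=100, lr=0.2):
    """Hypothetical sketch of a perturbation-based grid saliency.

    model        : segmentation network mapping (1, C, H, W) -> (1, K, H, W) logits
    x            : input image tensor of shape (1, C, H, W)
    request_mask : (1, 1, H, W) binary mask marking the target object area
    target_class : class index whose prediction should be preserved
    """
    # Low-resolution saliency grid, optimized in logit space so that a
    # sigmoid keeps the mask values in [0, 1].
    m_logits = torch.zeros(1, 1, *grid_size, requires_grad=True)
    optimizer = torch.optim.Adam([m_logits], lr=lr)

    for _ in range(steps):
        m = torch.sigmoid(m_logits)
        # Upsample the coarse grid to image resolution; the low resolution
        # is what enforces spatially coherent explanations.
        m_up = F.interpolate(m, size=x.shape[-2:], mode='bilinear',
                             align_corners=False)
        # Keep image content where the mask is on; replace the rest with a
        # baseline value (assumed here to be a constant).
        x_pert = m_up * x + (1.0 - m_up) * baseline
        logits = model(x_pert)
        probs = torch.softmax(logits, dim=1)[:, target_class:target_class + 1]
        # Preserve the target-class prediction inside the request region ...
        preserve = -(probs * request_mask).sum() / request_mask.sum()
        # ... while keeping the retained image area as small as possible,
        # so that only the most influential context survives.
        sparsity = lam * m.mean()
        loss = preserve + sparsity
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return torch.sigmoid(m_logits).detach()
```

Under these assumptions, the part of the returned mask lying outside the object region can be read as a context explanation: it marks which surrounding pixels the network relied on to predict the target class inside the requested area.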
Author Information
Lukas Hoyer (Bosch Center for Artificial Intelligence)
Mauricio Munoz (Bosch Center for Artificial Intelligence)
Prateek Katiyar (Bosch Center for Artificial Intelligence)
Anna Khoreva (Bosch Center for Artificial Intelligence)
Volker Fischer (Robert Bosch GmbH, Bosch Center for Artificial Intelligence)
More from the Same Authors
- 2020 Poster: SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks
  Fabian Fuchs · Daniel E Worrall · Volker Fischer · Max Welling
- 2019 Poster: Progressive Augmentation of GANs
  Dan Zhang · Anna Khoreva
- 2018 Poster: The streaming rollout of deep networks - towards fully model-parallel execution
  Volker Fischer · Jan Koehler · Thomas Pfeil