Recently, adversarial erasing for weakly-supervised object attention has been studied extensively due to its capability of localizing integral object regions. However, this strategy raises a key problem: attention regions gradually expand into non-object regions as training continues, which significantly degrades the quality of the produced attention maps. To tackle this issue and improve the quality of object attention, we introduce a simple yet effective Self-Erasing Network (SeeNet) that prohibits attention from spreading to unexpected background regions. In particular, SeeNet leverages two self-erasing strategies to encourage networks to use reliable object and background cues for attention learning. In this way, integral object regions can be effectively highlighted without including many background regions. To evaluate the quality of the generated attention maps, we employ the mined object regions as heuristic cues for learning semantic segmentation models. Experiments on Pascal VOC demonstrate the superiority of our SeeNet over other state-of-the-art methods.
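The core idea described above can be illustrated with a minimal sketch. The snippet below is a hypothetical NumPy illustration, not the authors' implementation: an attention map is partitioned into a confident object zone and a confident background zone by two thresholds (the values `hi` and `lo` are assumptions for illustration). Adversarial erasing removes the confident object regions so the network must discover new object evidence; the self-erasing constraint additionally suppresses the confident background zone so that attention cannot drift into it.

```python
import numpy as np

def self_erase(image, attention, hi=0.7, lo=0.05):
    """Sketch of a self-erasing step (thresholds are illustrative).

    image     : float array, the input (or feature map) to be erased
    attention : float array in [0, 1], same spatial shape as image
    """
    obj = attention >= hi   # confident object zone: erase (adversarial erasing)
    bg = attention <= lo    # confident background zone: also suppressed, so
                            # attention cannot expand into background regions
    erased = image.copy()
    erased[obj] = 0.0
    erased[bg] = 0.0
    # the remaining "unknown" zone is where new object evidence may be found
    return erased, obj, bg
```

In a training loop, the erased input would be fed to a second classification branch, whose new attention is merged with the original map; the background mask keeps that expansion confined to plausible object regions.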
Author Information
Qibin Hou (Nankai University)
PengTao Jiang (Nankai University)
Yunchao Wei (UIUC)
Ming-Ming Cheng (Nankai University)
More from the Same Authors
- 2022 Poster: SegNeXt: Rethinking Convolutional Attention Design for Semantic Segmentation
  Meng-Hao Guo · Cheng-Ze Lu · Qibin Hou · Zhengning Liu · Ming-Ming Cheng · Shi-min Hu
- 2022 Spotlight: Mask Matching Transformer for Few-Shot Segmentation
  Siyu Jiao · Gengwei Zhang · Shant Navasardyan · Ling Chen · Yao Zhao · Yunchao Wei · Humphrey Shi
- 2022 Spotlight: SegNeXt: Rethinking Convolutional Attention Design for Semantic Segmentation
  Meng-Hao Guo · Cheng-Ze Lu · Qibin Hou · Zhengning Liu · Ming-Ming Cheng · Shi-min Hu
- 2022 Poster: Mask Matching Transformer for Few-Shot Segmentation
  Siyu Jiao · Gengwei Zhang · Shant Navasardyan · Ling Chen · Yao Zhao · Yunchao Wei · Humphrey Shi
- 2021 Poster: Few-Shot Segmentation via Cycle-Consistent Transformer
  Gengwei Zhang · Guoliang Kang · Yi Yang · Yunchao Wei
- 2021 Poster: Associating Objects with Transformers for Video Object Segmentation
  Zongxin Yang · Yunchao Wei · Yi Yang
- 2021 Poster: All Tokens Matter: Token Labeling for Training Better Vision Transformers
  Zi-Hang Jiang · Qibin Hou · Li Yuan · Daquan Zhou · Yujun Shi · Xiaojie Jin · Anran Wang · Jiashi Feng
- 2020 Poster: Consistent Structural Relation Learning for Zero-Shot Segmentation
  Peike Li · Yunchao Wei · Yi Yang
- 2020 Spotlight: Consistent Structural Relation Learning for Zero-Shot Segmentation
  Peike Li · Yunchao Wei · Yi Yang
- 2020 Poster: ICNet: Intra-saliency Correlation Network for Co-Saliency Detection
  Wen-Da Jin · Jun Xu · Ming-Ming Cheng · Yi Zhang · Wei Guo
- 2020 Poster: Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation
  Guoliang Kang · Yunchao Wei · Yi Yang · Yueting Zhuang · Alexander Hauptmann
- 2020 Oral: Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation
  Guoliang Kang · Yunchao Wei · Yi Yang · Yueting Zhuang · Alexander Hauptmann
- 2019 Poster: Unsupervised Scale-consistent Depth and Ego-motion Learning from Monocular Video
  Jiawang Bian · Zhichao Li · Naiyan Wang · Huangying Zhan · Chunhua Shen · Ming-Ming Cheng · Ian Reid