Neural networks are prone to learning spurious correlations between classes and latent attributes exhibited in a major portion of training data, which ruins their generalization capability. We propose a new method for training debiased classifiers without any spurious attribute labels. The key idea is to employ a committee of classifiers as an auxiliary module that identifies bias-conflicting data, i.e., data free of the spurious correlation, and assigns them large weights when training the main classifier. The committee is learned as a bootstrapped ensemble so that a majority of its classifiers are biased while remaining diverse, and accordingly fail to predict the classes of bias-conflicting data. The consensus within the committee on prediction difficulty thus provides a reliable cue for identifying and weighting bias-conflicting data. Moreover, the committee is also trained with knowledge transferred from the main classifier, so that it gradually becomes debiased along with the main classifier and emphasizes increasingly difficult data as training progresses. On five real-world datasets, our method outperforms prior methods that, like ours, use no spurious attribute labels, and occasionally even surpasses those relying on bias labels. Our code is available at https://github.com/nayeong-v-kim/LWBC.
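The core weighting idea from the abstract can be illustrated with a minimal sketch: samples that most committee members misclassify are treated as bias-conflicting and up-weighted for the main classifier. This is a hypothetical simplification for illustration only (the function name `committee_weights` and the exact weighting formula are assumptions, not the authors' implementation; see the linked repository for the actual method).

```python
import numpy as np

def committee_weights(committee_preds, labels, eps=1e-6):
    """Toy version of committee-based sample weighting.

    Each sample's weight is the fraction of committee members that
    misclassify it, normalized so the weights average to 1. Samples the
    (mostly biased) committee gets wrong are presumed bias-conflicting
    and receive larger weights. This is an illustrative sketch, not the
    paper's exact formulation.
    """
    committee_preds = np.asarray(committee_preds)  # shape (K, N): K members, N samples
    labels = np.asarray(labels)                    # shape (N,)
    # per-sample fraction of committee members that are wrong
    wrong_frac = (committee_preds != labels[None, :]).mean(axis=0)
    weights = wrong_frac + eps                     # keep weights strictly positive
    return weights / weights.sum() * len(labels)   # normalize to mean 1

# toy example: 3 committee members, 4 samples; sample 3 fools everyone
preds = [[0, 1, 1, 0],
         [0, 1, 0, 0],
         [0, 0, 1, 0]]
labels = [0, 1, 1, 1]
w = committee_weights(preds, labels)  # sample 3 gets the largest weight
```

The returned weights could then scale a per-sample cross-entropy loss for the main classifier, so that training emphasizes the data points the biased committee consistently fails on.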
Author Information
Nayeong Kim (POSTECH)
Sehyun Hwang (POSTECH)
Sungsoo Ahn (POSTECH)
Jaesik Park (POSTECH)
Suha Kwak (POSTECH)
More from the Same Authors
- 2020: Combinatorial 3D Shape Generation via Sequential Assembly
  Jungtaek Kim · Hyunsoo Chung · Jinhwi Lee · Minsu Cho · Jaesik Park
- 2022: Substructure-Atom Cross Attention for Molecular Representation Learning
  Jiye Kim · Seungbeom Lee · Dongwoo Kim · Sungsoo Ahn · Jaesik Park
- 2022: SeLCA: Self-Supervised Learning of Canonical Axis
  Seungwook Kim · Yoonwoo Jeong · Chunghyun Park · Jaesik Park · Minsu Cho
- 2022: A Closer Look at the Intervention Procedure of Concept Bottleneck Models
  Sungbin Shin · Yohan Jo · Sungsoo Ahn · Namhoon Lee
- 2022 Poster: PeRFception: Perception using Radiance Fields
  Yoonwoo Jeong · Seungjoo Shin · Junha Lee · Chris Choy · Anima Anandkumar · Minsu Cho · Jaesik Park
- 2022 Poster: A Rotated Hyperbolic Wrapped Normal Distribution for Hierarchical Representation Learning
  Seunghyuk Cho · Juyong Lee · Jaesik Park · Dongwoo Kim
- 2021 Poster: Brick-by-Brick: Combinatorial Construction with Deep Reinforcement Learning
  Hyunsoo Chung · Jungtaek Kim · Boris Knyazev · Jinhwi Lee · Graham Taylor · Jaesik Park · Minsu Cho
- 2021 Poster: Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training
  Minguk Kang · Woohyeon Shim · Minsu Cho · Jaesik Park
- 2021 Poster: Relational Self-Attention: What's Missing in Attention for Video Understanding
  Manjin Kim · Heeseung Kwon · Chunyu Wang · Suha Kwak · Minsu Cho
- 2020 Poster: ContraGAN: Contrastive Learning for Conditional Image Generation
  Minguk Kang · Jaesik Park