Poster
Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss
Kaidi Cao · Colin Wei · Adrien Gaidon · Nikos Arechiga · Tengyu Ma
East Exhibition Hall B, C #151
Keywords: [ Applications ] [ Object Recognition ] [ Algorithms -> Classification ] [ Algorithms -> Large Margin Methods ] [ Algorithms -> Large Scale Learning ] [ Algorithms -> Multitask a ]
Deep learning algorithms can fare poorly when the training dataset suffers from heavy class imbalance but the testing criterion requires good generalization on less frequent classes. We design two novel methods to improve performance in such scenarios. First, we propose a theoretically principled label-distribution-aware margin (LDAM) loss motivated by minimizing a margin-based generalization bound. This loss replaces the standard cross-entropy objective during training and can be combined with prior strategies for class-imbalanced training, such as re-weighting or re-sampling. Second, we propose a simple yet effective training schedule that defers re-weighting until after the initial stage, allowing the model to learn an initial representation while avoiding some of the complications associated with re-weighting or re-sampling. We test our methods on several benchmark vision tasks, including the real-world imbalanced dataset iNaturalist 2018. Our experiments show that each of these methods alone already improves over existing techniques, and that their combination yields even larger gains.
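To make the two ideas concrete, here is a minimal PyTorch sketch, not the authors' reference implementation: it encodes a per-class margin proportional to n_j^(-1/4) (as the LDAM loss prescribes) and a deferred re-weighting (DRW) schedule. The names `class_counts`, `max_margin`, `scale`, `defer_until`, and `beta` are illustrative assumptions, and the second-stage weights use the common "effective number of samples" heuristic for class-balanced re-weighting.

```python
import torch
import torch.nn.functional as F


class LDAMLoss(torch.nn.Module):
    """Sketch of a label-distribution-aware margin (LDAM) loss.

    Each class j gets a margin proportional to n_j^(-1/4), where n_j is the
    number of training samples of class j, so rarer classes receive larger
    margins. `max_margin` and `scale` are assumed tuning knobs.
    """

    def __init__(self, class_counts, max_margin=0.5, scale=30.0):
        super().__init__()
        counts = torch.as_tensor(class_counts, dtype=torch.float)
        margins = counts.pow(-0.25)                       # Delta_j ~ n_j^(-1/4)
        margins = margins * (max_margin / margins.max())  # cap the largest margin
        self.register_buffer("margins", margins)
        self.scale = scale
        self.weight = None  # optional per-class weights, set later for re-weighting

    def forward(self, logits, target):
        # Subtract the class-dependent margin from the true class's logit only,
        # then apply (optionally re-weighted) cross-entropy on scaled logits.
        adjusted = logits.clone()
        rows = torch.arange(logits.size(0), device=logits.device)
        adjusted[rows, target] -= self.margins[target]
        weight = self.weight.to(logits.device) if self.weight is not None else None
        return F.cross_entropy(self.scale * adjusted, target, weight=weight)


def drw_weights(class_counts, epoch, defer_until=160, beta=0.9999):
    """Sketch of a deferred re-weighting (DRW) schedule.

    Stage one trains with uniform weights; from epoch `defer_until` onward,
    classes are re-weighted by the inverse "effective number" of samples.
    `defer_until` and `beta` are assumed hyperparameters.
    """
    if epoch < defer_until:
        return None  # first stage: learn the representation without re-weighting
    counts = torch.as_tensor(class_counts, dtype=torch.float)
    effective_num = 1.0 - torch.pow(beta, counts)
    weights = (1.0 - beta) / effective_num
    return weights * (len(class_counts) / weights.sum())  # normalize to mean 1
```

In a training loop, one would set `criterion.weight = drw_weights(class_counts, epoch)` at the start of each epoch, so that the loss switches from plain LDAM to re-weighted LDAM once the initial representation-learning stage is over.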