
Open-set Label Noise Can Improve Robustness Against Inherent Label Noise
Hongxin Wei · Lue Tao · Renchunzi Xie · Bo An

Tue Dec 07 04:30 PM -- 06:00 PM (PST)

Learning with noisy labels is a practically challenging problem in weakly supervised learning. In the existing literature, open-set label noise is generally considered as harmful to generalization as closed-set label noise. In this paper, we empirically show that open-set noisy labels can be non-toxic and can even benefit robustness against inherent noisy labels. Inspired by these observations, we propose a simple yet effective regularization that introduces Open-set samples with Dynamic Noisy Labels (ODNL) into training. With ODNL, the extra capacity of the neural network can be largely consumed in a way that does not interfere with learning patterns from clean data. Through the lens of SGD noise, we show that the noise induced by our method is random-direction, conflict-free, and biased, which may help the model converge to a flat minimum with superior stability and encourage the model to produce conservative predictions on out-of-distribution instances. Extensive experimental results on benchmark datasets with various types of noisy labels demonstrate that the proposed method not only enhances the performance of many existing robust algorithms but also achieves significant improvements on out-of-distribution detection tasks, even in the label-noise setting.
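Based only on the abstract's description, the core of ODNL is to augment each training step with open-set (auxiliary) samples whose labels are re-drawn uniformly at random, so these samples carry no stable signal the network could memorize. Below is a minimal, hedged NumPy sketch of such a combined objective; the function names, the regularization weight `eta`, and the per-call re-sampling of labels are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the class axis
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    # mean negative log-likelihood of the given integer labels
    p = softmax(logits)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

def odnl_loss(logits_clean, labels_clean, logits_open, num_classes, rng, eta=1.0):
    # Dynamic noisy labels: drawn uniformly at random on every call,
    # so the open-set samples never present a consistent target.
    # `eta` (hypothetical name) weights the open-set regularization term.
    noisy_labels = rng.integers(0, num_classes, size=len(logits_open))
    return (cross_entropy(logits_clean, labels_clean)
            + eta * cross_entropy(logits_open, noisy_labels))

# Toy usage: 4 clean samples and 4 open-set samples, 3 classes.
rng = np.random.default_rng(0)
logits_clean = rng.normal(size=(4, 3))
labels_clean = np.array([0, 1, 2, 0])
logits_open = rng.normal(size=(4, 3))
loss = odnl_loss(logits_clean, labels_clean, logits_open, num_classes=3, rng=rng)
print(float(loss))
```

In a real training loop the open-set batch would come from a disjoint auxiliary dataset and the noisy labels would be re-sampled each epoch; the key property is that, averaged over the random labels, the open-set term pushes predictions on out-of-distribution inputs toward the uniform distribution rather than toward any particular class.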

Author Information

Hongxin Wei (Nanyang Technological University)
Lue Tao (Nanjing University of Aeronautics and Astronautics)
Renchunzi Xie (Nanyang Technological University)
Bo An (Nanyang Technological University)