

Poster

Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise

Dan Hendrycks · Mantas Mazeika · Duncan Wilson · Kevin Gimpel

Room 210 #35

Keywords: [ Computer Vision ] [ Classification ] [ Natural Language Processing ] [ Algorithms ]


Abstract:

The growing importance of massive datasets with the advent of deep learning makes robustness to label noise a critical property for classifiers to have. Sources of label noise include automatic labeling of large datasets, non-expert labeling, and label corruption by data-poisoning adversaries. In the latter case, corruptions may be arbitrarily bad, even causing a classifier to predict the wrong labels with high confidence. To protect against such sources of noise, we leverage the fact that a small set of clean labels is often easy to procure. We demonstrate that robustness to even severe label noise can be achieved with a set of trusted data carrying clean labels, and we propose a loss correction that uses trusted examples in a data-efficient manner to mitigate the effects of label noise on deep neural network classifiers. Across vision and natural language processing tasks, we experiment with several types of label noise at multiple strengths and show that our method significantly outperforms existing methods.
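For intuition, below is a minimal sketch of one way a trusted-data loss correction can be implemented; it is an illustration under assumptions, not necessarily the authors' exact procedure. A model first trained on the noisy labels is used, together with the trusted set, to estimate a corruption matrix C_hat whose entry (i, j) approximates the probability that an example with true label i received noisy label j. Training on noisy examples then maps the model's clean-label posterior through C_hat before applying cross-entropy. The names estimate_corruption_matrix, corrected_loss, noisy_model, and trusted_loader are hypothetical.

# Sketch of a trusted-data loss correction (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

def estimate_corruption_matrix(noisy_model, trusted_loader, num_classes):
    """Estimate C_hat[i, j] ~= p(noisy label = j | true label = i) by
    averaging the noisy-trained model's softmax outputs over trusted
    examples, grouped by their clean labels."""
    C = torch.zeros(num_classes, num_classes)
    counts = torch.zeros(num_classes, 1)
    noisy_model.eval()
    with torch.no_grad():
        for x, y_clean in trusted_loader:
            probs = F.softmax(noisy_model(x), dim=1)
            for i in range(num_classes):
                mask = y_clean == i
                C[i] += probs[mask].sum(dim=0)
                counts[i] += mask.sum()
    # Guard against classes absent from the trusted set.
    return C / counts.clamp(min=1)

def corrected_loss(logits, y_noisy, C_hat):
    """Cross-entropy against noisy labels after mapping the model's
    clean-label posterior through the estimated corruption matrix."""
    clean_probs = F.softmax(logits, dim=1)   # p(true label | x)
    noisy_probs = clean_probs @ C_hat        # p(noisy label | x)
    return F.nll_loss(torch.log(noisy_probs + 1e-12), y_noisy)

In a training loop of this form, the corrected loss would apply only to examples with noisy labels, while the trusted examples keep the standard cross-entropy loss on their clean labels.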
