Poster
Using Mixup as a Regularizer Can Surprisingly Improve Accuracy & Out-of-Distribution Robustness
Francesco Pinto · Harry Yang · Ser Nam Lim · Philip Torr · Puneet Dokania

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #905

We show that the effectiveness of the well-celebrated Mixup can be further improved if, instead of using it as the sole learning objective, it is utilized as an additional regularizer alongside the standard cross-entropy loss. This simple change not only improves accuracy but also significantly improves the quality of Mixup's predictive uncertainty estimates in most cases, under various forms of covariate shift and in out-of-distribution detection experiments. In fact, we observe that Mixup alone yields much worse performance at detecting out-of-distribution samples, possibly because, as we show empirically, it tends to learn models that produce high-entropy predictions throughout, making it difficult to differentiate in-distribution samples from out-of-distribution ones. To show the efficacy of our approach (RegMixup), we provide thorough analyses and experiments on vision datasets (ImageNet and CIFAR-10/100) and compare it with a suite of recent approaches for reliable uncertainty estimation.
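As a minimal sketch of the idea the abstract describes, the following PyTorch snippet combines the standard cross-entropy on the clean batch with a Mixup cross-entropy term used purely as a regularizer. The function name, the hyperparameters `alpha` and `eta`, and the two-forward-pass structure are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def regmixup_loss(model, x, y, alpha=10.0, eta=1.0):
    """Clean cross-entropy plus a Mixup cross-entropy regularizer.

    alpha: Beta concentration for the interpolation coefficient
           (a large value keeps samples near the batch endpoints).
    eta:   weight on the Mixup regularization term.
    Both values are illustrative, not taken from the paper.
    """
    # Standard cross-entropy on the unmixed batch (the primary objective).
    logits = model(x)
    loss = F.cross_entropy(logits, y)

    # Mixup term: interpolate the batch with a shuffled copy of itself.
    lam = Beta(alpha, alpha).sample().to(x.device)
    perm = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1.0 - lam) * x[perm]
    logits_mix = model(x_mix)
    mixup_ce = lam * F.cross_entropy(logits_mix, y) \
        + (1.0 - lam) * F.cross_entropy(logits_mix, y[perm])

    # Mixup acts as an additional regularizer, not the sole objective.
    return loss + eta * mixup_ce
```

In contrast to vanilla Mixup, which trains only on interpolated samples, this formulation keeps the clean-data loss intact, which is the simple change the abstract credits for the improved accuracy and uncertainty estimates.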

Author Information

Francesco Pinto (University of Oxford)
Harry Yang (Facebook)
Ser Nam Lim (Facebook AI)
Philip Torr (University of Oxford)
Puneet Dokania (Five AI and University of Oxford)
