
Boundary thickness and robustness in learning models
Yaoqing Yang · Rajiv Khanna · Yaodong Yu · Amir Gholami · Kurt Keutzer · Joseph Gonzalez · Kannan Ramchandran · Michael W Mahoney

Thu Dec 10 09:00 PM -- 11:00 PM (PST) @ Poster Session 6 #1819

Robustness of machine learning models to various adversarial and non-adversarial corruptions continues to be of interest. In this paper, we introduce the notion of the boundary thickness of a classifier, and we describe its connection with and usefulness for model robustness. Thick decision boundaries lead to improved performance, while thin decision boundaries lead to overfitting (e.g., measured by the robust generalization gap between training and testing) and lower robustness. We show that a thicker boundary helps improve robustness against adversarial examples (e.g., improving the robust test accuracy of adversarial training) as well as so-called out-of-distribution (OOD) transforms, and we show that many commonly used regularization and data augmentation procedures can increase boundary thickness. On the theoretical side, we establish that maximizing boundary thickness during training is akin to so-called mixup training. Using these observations, we show that noise augmentation on mixup training further increases boundary thickness, thereby combating vulnerability to various forms of adversarial attacks and OOD transforms. We also show that the performance improvement in several lines of recent work happens in conjunction with a thicker boundary.
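To make the central notion concrete: boundary thickness measures, for a pair of inputs, how much of the straight segment between them falls inside a "gray zone" where the classifier's prediction gap between the two classes lies in an interval (alpha, beta). The PyTorch sketch below is an illustrative estimator under assumptions made here, not the authors' reference implementation: the function name boundary_thickness, the uniform sampling of the segment, and the defaults alpha=0.0, beta=0.75 are choices for this sketch.

```python
import torch
import torch.nn.functional as F

def boundary_thickness(model, x_r, x_s, alpha=0.0, beta=0.75, num_points=128):
    """Estimate boundary thickness along the segment between x_r and x_s.

    Let g(x) = softmax(f(x))[i] - softmax(f(x))[j], where i is the class
    predicted at x_r and j the class predicted at x_s. The thickness is
    ||x_r - x_s|| times the fraction of the segment where alpha < g < beta.
    Assumes x_r and x_s are single inputs (no batch dimension).
    """
    model.eval()
    with torch.no_grad():
        i = model(x_r.unsqueeze(0)).argmax(dim=1).item()
        j = model(x_s.unsqueeze(0)).argmax(dim=1).item()
        # Sample points uniformly along the straight segment x(t) = t*x_r + (1-t)*x_s.
        t = torch.linspace(0.0, 1.0, num_points).view(-1, *([1] * x_r.dim()))
        segment = t * x_r.unsqueeze(0) + (1.0 - t) * x_s.unsqueeze(0)
        probs = F.softmax(model(segment), dim=1)
        g = probs[:, i] - probs[:, j]
        # Fraction of the segment inside the gray zone (alpha, beta).
        # If i == j the gap is zero everywhere and the estimate is 0.
        inside = ((g > alpha) & (g < beta)).float().mean()
    return (x_r - x_s).norm() * inside
```

A natural choice of pair (x_r, x_s) is a clean example together with an adversarial example, so the segment crosses the decision boundary; averaging the estimate over many such pairs gives a thickness score for the model.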

Author Information

Yaoqing Yang (UC Berkeley)
Rajiv Khanna (UC Berkeley)
Yaodong Yu (UC Berkeley)
Amir Gholami (UC Berkeley)
Kurt Keutzer (UC Berkeley)
Joseph Gonzalez (UC Berkeley)
Kannan Ramchandran (UC Berkeley)
Michael W Mahoney (UC Berkeley)