Poster
Adversarially Robust Learning: A Generic Minimax Optimal Learner and Characterization
Omar Montasser · Steve Hanneke · Nati Srebro

Tue Nov 29 02:00 PM -- 04:00 PM (PST) @ Hall J #726
We present a minimax optimal learner for the problem of learning predictors robust to adversarial examples at test-time. Interestingly, we find that this requires new algorithmic ideas and approaches to adversarially robust learning. In particular, we show, in a strong negative sense, the suboptimality of the robust learner proposed by Montasser, Hanneke, and Srebro [2019] and of a broader family of learners we identify as local learners. Our results are enabled by adopting a global perspective, specifically through a key technical contribution: the global one-inclusion graph, which generalizes the classical one-inclusion graph due to Haussler, Littlestone, and Warmuth [1994] and may be of independent interest. Finally, as a byproduct, we identify a dimension that characterizes, qualitatively and quantitatively, which classes of predictors $\mathcal{H}$ are robustly learnable. This resolves an open problem due to Montasser et al. [2019] and closes a (potentially) infinite gap between the established upper and lower bounds on the sample complexity of adversarially robust learning.
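For context, the test-time robustness objective studied in this line of work is usually formalized as a robust risk with respect to a perturbation set $\mathcal{U}(x)$ (a standard formulation; the notation below is assumed here rather than taken from the listing):

$$\mathrm{err}_{\mathcal{U}}(h;\mathcal{D}) \;=\; \Pr_{(x,y)\sim\mathcal{D}}\Big[\,\exists\, z \in \mathcal{U}(x) : h(z) \neq y \,\Big].$$

A learner is minimax optimal in this sense if, for every class $\mathcal{H}$ and perturbation set $\mathcal{U}$, its worst-case sample complexity for driving $\mathrm{err}_{\mathcal{U}}$ below a target error matches, up to constants, the best achievable by any learner.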

Author Information

Omar Montasser (Toyota Technological Institute at Chicago)
Steve Hanneke (Toyota Technological Institute at Chicago)
Nati Srebro (TTI-Chicago)
