Tengyu Ma, "Designing Explicit Regularizers for Deep Models"
Tengyu Ma
2019 Invited Talk
in
Workshop: Machine Learning with Guarantees
Abstract
I will discuss some recent results on designing explicit regularizers to improve the generalization performance of deep neural networks. We derive data-dependent generalization bounds for deep neural networks, then empirically regularize these bounds during training and obtain improved generalization performance (in terms of both standard accuracy and robust accuracy). I will also touch on recent results on applying these techniques to imbalanced datasets.
Based on joint work with Colin Wei, Kaidi Cao, Adrien Gaidon, and Nikos Arechiga
https://arxiv.org/abs/1910.04284 https://arxiv.org/abs/1906.07413 https://arxiv.org/abs/1905.03684
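As a rough illustration of the general recipe described in the abstract (not the authors' specific method), one can turn a quantity that appears in a data-dependent generalization bound into an explicit penalty added to the training loss. The sketch below assumes a toy linear model and uses the squared norm of the model's input-output Jacobian as the hypothetical bound-derived quantity; the function names and the choice of penalty are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax.
    e = np.exp(z - z.max())
    return e / e.sum()

def regularized_loss(W, x, y, lam):
    """Cross-entropy on one example plus an explicit regularizer.

    The penalty is the squared Frobenius norm of the input-output
    Jacobian; for the linear model z = W x, that Jacobian is W itself.
    (Illustrative stand-in for a bound-derived quantity.)
    """
    z = W @ x
    p = softmax(z)
    ce = -np.log(p[y])
    penalty = np.sum(W ** 2)
    return ce + lam * penalty

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # toy 4-feature, 3-class linear classifier
x = rng.normal(size=4)
loss = regularized_loss(W, x, y=1, lam=0.01)
```

Minimizing such a combined objective trades training fit against the complexity measure appearing in the bound, which is the sense in which the bound is "empirically regularized."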