Having similar behavior at train-time and test-time---what we call a "What You See Is What You Get" (WYSIWYG) property---is desirable in machine learning. However, models trained with standard stochastic gradient descent (SGD) are known to violate it: behaviors such as subgroup performance or adversarial robustness can differ substantially between training and testing. We show that Differentially-Private (DP) training provably ensures the high-level WYSIWYG property, which we quantify using a notion of Distributional Generalization (DG). Applying this connection, we introduce new conceptual tools for designing deep-learning methods by reducing generalization concerns to optimization ones: to mitigate unwanted behavior at test time, it is provably sufficient to mitigate this behavior on the train dataset. By applying this novel design principle, which bypasses "pathologies" of SGD, we construct simple algorithms that are competitive with SOTA in several distributional-robustness applications, significantly improve the privacy vs. disparate impact tradeoff of DP-SGD, and mitigate robust overfitting in adversarial training. Finally, we also improve on known theoretical bounds relating DP, stability, and distributional generalization.
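For context, the DP-SGD mechanism the abstract builds on clips each example's gradient and adds calibrated Gaussian noise before the parameter update. Below is a minimal sketch of one such step on a toy linear-regression problem; the function name `dp_sgd_step`, the hyperparameter values, and the toy setup are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD step for linear regression with squared loss.

    Per-example gradients are clipped to L2 norm `clip_norm`, summed,
    and Gaussian noise with std `noise_mult * clip_norm` is added
    before averaging (the standard clip-and-noise recipe).
    """
    rng = np.random.default_rng(rng)
    # Per-example gradients of 0.5 * (x.w - y)^2 with respect to w.
    residuals = X @ w - y                # shape (n,)
    grads = residuals[:, None] * X       # shape (n, d)
    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)
    # Sum, add calibrated Gaussian noise, then average and step.
    noisy_sum = grads.sum(axis=0) + rng.normal(
        scale=noise_mult * clip_norm, size=w.shape)
    return w - lr * noisy_sum / len(X)

# Toy usage: recover y = 2x under clipping and noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 1))
y = 2.0 * X[:, 0]
w = np.zeros(1)
for _ in range(200):
    w = dp_sgd_step(w, X, y, rng=rng)
print(w)  # roughly [2.], up to clipping bias and injected noise
```

Because the noise scale is tied only to the clipping bound (not to the data), each step's privacy cost can be accounted for, which is what enables the DP-to-generalization reductions described above.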
Author Information
Bogdan Kulynych (EPFL SPRING Lab)
PhD candidate in Computer Science at EPFL, Fellow at Harvard SEAS. B.Sc. from Kyiv Mohyla Academy in Ukraine. Formerly an intern at Google and CERN. I study privacy, security, reliability, and broader societal harms of algorithmic systems.
Yao-Yuan Yang (UC San Diego)
Yaodong Yu (UC Berkeley)
Jaroslaw Blasiok (Harvard)
Preetum Nakkiran (UC San Diego)
More from the Same Authors
-
2021 : On the convergence of stochastic extragradient for bilinear games using restarted iteration averaging »
Chris Junchi Li · Yaodong Yu · Nicolas Loizou · Gauthier Gidel · Yi Ma · Nicolas Le Roux · Michael Jordan -
2022 : Adversarial Robustness for Tabular Data through Cost and Utility Awareness »
Klim Kireev · Bogdan Kulynych · Carmela Troncoso -
2022 Poster: What You See is What You Get: Principled Deep Learning via Distributional Generalization »
Bogdan Kulynych · Yao-Yuan Yang · Yaodong Yu · Jarosław Błasiok · Preetum Nakkiran -
2018 : Accepted papers »
Sven Gowal · Bogdan Kulynych · Marius Mosbach · Nicholas Frosst · Phil Roth · Utku Ozbulak · Simral Chaudhary · Toshiki Shibahara · Salome Viljoen · Nikita Samarin · Briland Hitaj · Rohan Taori · Emanuel Moss · Melody Guan · Lukas Schott · Angus Galloway · Anna Golubeva · Xiaomeng Jin · Felix Kreuk · Akshayvarun Subramanya · Vipin Pillai · Hamed Pirsiavash · Giuseppe Ateniese · Ankita Kalra · Logan Engstrom · Anish Athalye -
2017 : Posters »
Shane Barratt · Alex Groce · Qi Yan · Sapan Agarwal · Fabian Offert · Bogdan Kulynych · Housam Khalifa Bashier Babiker · Petar Stojanov · Topi Paananen · Jose Marcio Luna · Gilmer Valdes · Jacqueline A Mauro · Daniel Chen · Baruch Schieber · Randolph Goebel · Jacob Bien