Overfitting is the bane of data analysts, even when data are plentiful. Formal approaches to understanding this problem focus on statistical inference and generalization of individual analysis procedures. Yet the practice of data analysis is an inherently interactive and adaptive process: new analyses and hypotheses are proposed after seeing the results of previous ones, parameters are tuned on the basis of obtained results, and datasets are shared and reused. An investigation of this gap was recently initiated by the authors in (Dwork et al., 2014), where we focused on the problem of estimating expectations of adaptively chosen functions.

In this paper, we give a simple and practical method for reusing a holdout (or testing) set to validate the accuracy of hypotheses produced by a learning algorithm operating on a training set. Reusing a holdout set adaptively multiple times can easily lead to overfitting to the holdout set itself. We give an algorithm that enables the validation of a large number of adaptively chosen hypotheses while provably avoiding overfitting. We illustrate the advantages of our algorithm over the standard use of the holdout set via a simple synthetic experiment.

We also formalize and address the general problem of data reuse in adaptive data analysis. We show that the differential-privacy-based approach of (Dwork et al., 2014) is applicable much more broadly to adaptive data analysis. We then show that a simple approach based on description length can also be used to give guarantees of statistical validity in adaptive settings. Finally, we demonstrate that these incomparable approaches can be unified via the notion of approximate max-information that we introduce. In particular, this allows statistical-validity guarantees to be preserved even when an analyst adaptively composes algorithms whose guarantees are based on either of the two approaches.
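The core idea of the reusable holdout can be sketched in a few lines: answer each query from the training set alone unless the training and holdout estimates disagree by more than a noisy threshold, and only then release a noised holdout estimate. This sketch is illustrative; the function name, default parameters, and noise scale below are assumptions for exposition, not the paper's exact specification or guarantees.

```python
import numpy as np

def thresholdout(train_vals, holdout_vals, threshold=0.04, sigma=0.01, rng=None):
    """Validate one statistic (e.g. a hypothesis's accuracy) with holdout reuse.

    train_vals, holdout_vals: per-example values of the statistic on the
    training and holdout sets. Returns the training estimate whenever the two
    sets roughly agree, so most queries reveal nothing about the holdout; only
    a large, noisily tested discrepancy triggers a (noised) holdout answer.
    """
    rng = np.random.default_rng() if rng is None else rng
    t_mean = np.mean(train_vals)
    h_mean = np.mean(holdout_vals)
    # Noisy threshold test: consult the holdout only when the gap is large.
    if abs(t_mean - h_mean) > threshold + rng.normal(0.0, sigma):
        return h_mean + rng.normal(0.0, sigma)
    return t_mean
```

In the agreeing case the analyst learns only the training-set value, which is why many adaptively chosen queries can be validated against the same holdout without overfitting to it.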
Author Information
Cynthia Dwork (Microsoft Research)
Cynthia Dwork, Distinguished Scientist at Microsoft Research, is renowned for placing privacy-preserving data analysis on a mathematically rigorous foundation. A cornerstone of this work is differential privacy, a strong privacy guarantee frequently permitting highly accurate data analysis. Dr. Dwork has also made seminal contributions in cryptography and distributed computing, and is a recipient of the Edsger W. Dijkstra Prize, recognizing some of her earliest work establishing the pillars on which every fault-tolerant system has been built for decades. She is a member of the National Academy of Sciences and the National Academy of Engineering, and a Fellow of the American Academy of Arts and Sciences.
Vitaly Feldman (IBM Research - Almaden)
Moritz Hardt (Google)
Toni Pitassi (University of Toronto)
Omer Reingold (Samsung Research)
Aaron Roth (University of Pennsylvania)
More from the Same Authors
- 2020: Individual Privacy Accounting via a Rényi Filter
  Vitaly Feldman
- 2020: Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling
  Vitaly Feldman
- 2021: Mean Estimation with User-level Privacy under Data Heterogeneity
  Rachel Cummings · Vitaly Feldman · Audra McMillan · Kunal Talwar
- 2022: Causal Inference out of Control: Identifying the Steerability of Consumption
  Gary Cheng · Moritz Hardt · Celestine Mendler-Dünner
- 2022: Differentially Private Gradient Boosting on Linear Learners for Tabular Data
  Saeyoung Rho · Shuai Tang · Sergul Aydore · Michael Kearns · Aaron Roth · Yu-Xiang Wang · Steven Wu · Cedric Archambeau
- 2022 Poster: Mean Estimation with User-level Privacy under Data Heterogeneity
  Rachel Cummings · Vitaly Feldman · Audra McMillan · Kunal Talwar
- 2022 Poster: Online Minimax Multiobjective Optimization: Multicalibeating and Other Applications
  Daniel Lee · Georgy Noarov · Mallesh Pai · Aaron Roth
- 2022 Poster: Practical Adversarial Multivalid Conformal Prediction
  Osbert Bastani · Varun Gupta · Christopher Jung · Georgy Noarov · Ramya Ramalingam · Aaron Roth
- 2022 Poster: Private Synthetic Data for Multitask Learning and Marginal Queries
  Giuseppe Vietri · Cedric Archambeau · Sergul Aydore · William Brown · Michael Kearns · Aaron Roth · Ankit Siva · Shuai Tang · Steven Wu
- 2022 Poster: Subspace Recovery from Heterogeneous Data with Non-isotropic Noise
  John Duchi · Vitaly Feldman · Lunjia Hu · Kunal Talwar
- 2021: Panel
  Oluwaseyi Feyisetan · Helen Nissenbaum · Aaron Roth · Christine Task
- 2021: Invited talk: Aaron Roth (UPenn / Amazon): Machine Unlearning
  Aaron Roth
- 2021 Poster: Adaptive Machine Unlearning
  Varun Gupta · Christopher Jung · Seth Neel · Aaron Roth · Saeed Sharifi-Malvajerdi · Chris Waites
- 2021 Poster: Individual Privacy Accounting via a Rényi Filter
  Vitaly Feldman · Tijana Zrnic
- 2020 Poster: What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
  Vitaly Feldman · Chiyuan Zhang
- 2020 Spotlight: What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
  Vitaly Feldman · Chiyuan Zhang
- 2020 Poster: Stability of Stochastic Gradient Descent on Nonsmooth Convex Losses
  Raef Bassily · Vitaly Feldman · Cristóbal Guzmán · Kunal Talwar
- 2020 Spotlight: Stability of Stochastic Gradient Descent on Nonsmooth Convex Losses
  Raef Bassily · Vitaly Feldman · Cristóbal Guzmán · Kunal Talwar
- 2019: Aaron Roth, "Average Individual Fairness"
  Aaron Roth
- 2019: Private Stochastic Convex Optimization: Optimal Rates in Linear Time
  Vitaly Feldman · Tomer Koren · Kunal Talwar
- 2019: Poster Session
  Clement Canonne · Kwang-Sung Jun · Seth Neel · Di Wang · Giuseppe Vietri · Liwei Song · Jonathan Lebensold · Huanyu Zhang · Lovedeep Gondara · Ang Li · FatemehSadat Mireshghallah · Jinshuo Dong · Anand D Sarwate · Antti Koskela · Joonas Jälkö · Matt Kusner · Dingfan Chen · Mi Jung Park · Ashwin Machanavajjhala · Jayashree Kalpathy-Cramer · Vitaly Feldman · Andrew Tomkins · Hai Phan · Hossein Esfandiari · Mimansa Jaiswal · Mrinank Sharma · Jeff Druce · Casey Meehan · Zhengli Zhao · Hsiang Hsu · Davis Railsback · Abraham Flaxman · Julius Adebayo · Aleksandra Korolova · Jiaming Xu · Naoise Holohan · Samyadeep Basu · Matthew Joseph · My Thai · Xiaoqian Yang · Ellen Vitercik · Michael Hutchinson · Chenghong Wang · Gregory Yauney · Yuchao Tao · Chao Jin · Si Kai Lee · Audra McMillan · Rauf Izmailov · Jiayi Guo · Siddharth Swaroop · Tribhuvanesh Orekondy · Hadi Esmaeilzadeh · Kevin Procopio · Alkis Polyzotis · Jafar Mohammadi · Nitin Agrawal
- 2019: Invited talk #3
  Aaron Roth
- 2019 Poster: Private Stochastic Convex Optimization with Optimal Rates
  Raef Bassily · Vitaly Feldman · Kunal Talwar · Abhradeep Guha Thakurta
- 2019 Spotlight: Private Stochastic Convex Optimization with Optimal Rates
  Raef Bassily · Vitaly Feldman · Kunal Talwar · Abhradeep Guha Thakurta
- 2019 Poster: Locally Private Learning without Interaction Requires Separation
  Amit Daniely · Vitaly Feldman
- 2018: Contributed talk 1: Privacy Amplification by Iteration
  Vitaly Feldman
- 2018 Poster: The Everlasting Database: Statistical Validity at a Fair Price
  Blake Woodworth · Vitaly Feldman · Saharon Rosset · Nati Srebro
- 2018 Poster: Fairness Through Computationally-Bounded Awareness
  Michael Kim · Omer Reingold · Guy Rothblum
- 2018 Poster: Generalization Bounds for Uniformly Stable Algorithms
  Vitaly Feldman · Jan Vondrak
- 2018 Spotlight: Generalization Bounds for Uniformly Stable Algorithms
  Vitaly Feldman · Jan Vondrak
- 2018 Poster: Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer
  David Madras · Toni Pitassi · Richard Zemel
- 2017: Contributed talk: Predict Responsibly: Increasing Fairness by Learning To Defer
  David Madras · Richard Zemel · Toni Pitassi
- 2017 Poster: Avoiding Discrimination through Causal Reasoning
  Niki Kilbertus · Mateo Rojas Carulla · Giambattista Parascandolo · Moritz Hardt · Dominik Janzing · Bernhard Schölkopf
- 2017 Poster: Accuracy First: Selecting a Differential Privacy Level for Accuracy Constrained ERM
  Katrina Ligett · Seth Neel · Aaron Roth · Bo Waggoner · Steven Wu
- 2016: Vitaly Feldman
  Vitaly Feldman
- 2016 Workshop: Adaptive Data Analysis
  Vitaly Feldman · Aaditya Ramdas · Aaron Roth · Adam Smith
- 2016 Poster: Generalization of ERM in Stochastic Convex Optimization: The Dimension Strikes Back
  Vitaly Feldman
- 2016 Oral: Generalization of ERM in Stochastic Convex Optimization: The Dimension Strikes Back
  Vitaly Feldman
- 2016 Poster: Equality of Opportunity in Supervised Learning
  Moritz Hardt · Eric Price · Nati Srebro
- 2015 Workshop: Adaptive Data Analysis
  Adam Smith · Aaron Roth · Vitaly Feldman · Moritz Hardt
- 2015 Poster: Subsampled Power Iteration: a Unified Algorithm for Block Models and Planted CSP's
  Vitaly Feldman · Will Perkins · Santosh Vempala
- 2015 Poster: Differentially Private Learning of Structured Discrete Distributions
  Ilias Diakonikolas · Moritz Hardt · Ludwig Schmidt
- 2014 Workshop: Fairness, Accountability, and Transparency in Machine Learning
  Moritz Hardt · Solon Barocas
- 2014 Invited Talk: Privacy in the Land of Plenty
  Cynthia Dwork
- 2014 Poster: The Noisy Power Method: A Meta Algorithm with Applications
  Moritz Hardt · Eric Price
- 2014 Spotlight: The Noisy Power Method: A Meta Algorithm with Applications
  Moritz Hardt · Eric Price
- 2013 Poster: Statistical Active Learning Algorithms
  Maria-Florina F Balcan · Vitaly Feldman
- 2013 Poster: On the Expressive Power of Restricted Boltzmann Machines
  James Martens · Arkadev Chattopadhya · Toni Pitassi · Richard Zemel