Understanding to what extent neural networks memorize training data is an intriguing question with practical and theoretical implications. In this paper we show that in some cases a significant fraction of the training data can in fact be reconstructed from the parameters of a trained neural network classifier. We propose a novel reconstruction scheme that stems from recent theoretical results about the implicit bias in training neural networks with gradient-based methods. To the best of our knowledge, our results are the first to show that reconstructing a large portion of the actual training samples from a trained neural network classifier is generally possible. This has negative implications for privacy, as it can be used as an attack for revealing sensitive training data. We demonstrate our method for binary MLP classifiers on a few standard computer vision datasets.
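To give a concrete sense of what an implicit-bias-based reconstruction can look like, here is a minimal sketch, not the paper's exact procedure: it assumes the trained parameters theta approximately satisfy the stationarity condition theta ≈ sum_i lam_i * y_i * grad_theta f(theta; x_i) of the associated margin-maximization problem, and searches for candidate inputs x_i and nonnegative coefficients lam_i that minimize the residual of that condition. The architecture, function names (make_mlp, reconstruct), label assignment, and all hyperparameters below are illustrative assumptions.

```python
# Illustrative sketch: recover candidate training points from a trained binary MLP
# by fitting the stationarity (KKT) condition of margin maximization,
#   theta ~= sum_i lam_i * y_i * grad_theta f(theta; x_i).
import torch
import torch.nn as nn

def make_mlp(d_in=784, width=1000):
    # small binary MLP, roughly the MNIST-scale setting the abstract refers to
    return nn.Sequential(nn.Linear(d_in, width), nn.ReLU(), nn.Linear(width, 1))

def reconstruct(model, n_candidates=100, d_in=784, steps=10_000, lr=0.01):
    params = list(model.parameters())
    x = torch.randn(n_candidates, d_in, requires_grad=True)   # candidate samples
    lam = torch.rand(n_candidates, requires_grad=True)        # dual-like coefficients
    # assumed balanced +/-1 labels for the candidates (an illustrative choice)
    y = torch.tensor([1.0 if i % 2 == 0 else -1.0 for i in range(n_candidates)])
    opt = torch.optim.Adam([x, lam], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        out = model(x).squeeze(-1)                            # f(theta; x_i), shape (n,)
        weighted = (lam.clamp(min=0) * y * out).sum()
        # grad_theta of sum_i lam_i y_i f(theta; x_i), kept differentiable w.r.t. x, lam
        grads = torch.autograd.grad(weighted, params, create_graph=True)
        # stationarity residual: trained parameters minus weighted gradient sum
        loss = sum(((p - g) ** 2).sum() for p, g in zip(params, grads))
        loss.backward()
        opt.step()
    return x.detach()

if __name__ == "__main__":
    model = make_mlp()                      # in an actual attack, a trained network
    candidates = reconstruct(model, steps=100)
    print(candidates.shape)                 # (100, 784) candidate reconstructions
```

In this sketch the quality of the recovered candidates depends entirely on how closely the trained network satisfies the assumed stationarity condition; the code is meant only to illustrate the kind of optimization such a scheme involves.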
Author Information
Niv Haim (Weizmann Institute of Science)
Gal Vardi (TTI-Chicago)
Gilad Yehudai (Weizmann Institute of Science)
Ohad Shamir (Weizmann Institute of Science)
Michal Irani (Weizmann Institute of Science)
More from the Same Authors
- 2021 Spotlight: Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems »
  Itay Safran · Ohad Shamir
- 2022 : On Convexity and Linear Mode Connectivity in Neural Networks »
  David Yunis · Kumar Kshitij Patel · Pedro Savarese · Gal Vardi · Jonathan Frankle · Matthew Walter · Karen Livescu · Michael Maire
- 2022 : On the Complexity of Finding Small Subgradients in Nonsmooth Optimization »
  Guy Kornowski · Ohad Shamir
- 2022 Panel: Panel 1C-2: Reconstructing Training Data… & On Optimal Learning… »
  Gal Vardi · Idan Mehalel
- 2022 Poster: On Margin Maximization in Linear and ReLU Networks »
  Gal Vardi · Ohad Shamir · Nati Srebro
- 2022 Poster: The Sample Complexity of One-Hidden-Layer Neural Networks »
  Gal Vardi · Ohad Shamir · Nati Srebro
- 2022 Poster: On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias »
  Itay Safran · Gal Vardi · Jason Lee
- 2022 Poster: Gradient Methods Provably Converge to Non-Robust Networks »
  Gal Vardi · Gilad Yehudai · Ohad Shamir
- 2021 Poster: Learning a Single Neuron with Bias Using Gradient Descent »
  Gal Vardi · Gilad Yehudai · Ohad Shamir
- 2021 Poster: Oracle Complexity in Nonsmooth Nonconvex Optimization »
  Guy Kornowski · Ohad Shamir
- 2021 Poster: A Stochastic Newton Algorithm for Distributed Convex Optimization »
  Brian Bullins · Kshitij Patel · Ohad Shamir · Nathan Srebro · Blake Woodworth
- 2021 Oral: Oracle Complexity in Nonsmooth Nonconvex Optimization »
  Guy Kornowski · Ohad Shamir
- 2021 Poster: Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems »
  Itay Safran · Ohad Shamir
- 2020 : Poster Session 1 (gather.town) »
  Laurent Condat · Tiffany Vlaar · Ohad Shamir · Mohammadi Zaki · Zhize Li · Guan-Horng Liu · Samuel Horváth · Mher Safaryan · Yoni Choukroun · Kumar Shridhar · Nabil Kahale · Jikai Jin · Pratik Kumar Jawanpuria · Gaurav Kumar Yadav · Kazuki Koyama · Junyoung Kim · Xiao Li · Saugata Purkayastha · Adil Salim · Dighanchal Banerjee · Peter Richtarik · Lakshman Mahto · Tian Ye · Bamdev Mishra · Huikang Liu · Jiajie Zhu
- 2020 : Contributed talks in Session 1 (Zoom) »
  Sebastian Stich · Laurent Condat · Zhize Li · Ohad Shamir · Tiffany Vlaar · Mohammadi Zaki
- 2020 : Contributed Video: Can We Find Near-Approximately-Stationary Points of Nonsmooth Nonconvex Functions?, Ohad Shamir »
  Ohad Shamir
- 2020 Poster: Neural Networks with Small Weights and Depth-Separation Barriers »
  Gal Vardi · Ohad Shamir
- 2020 Poster: Hardness of Learning Neural Networks with Natural Weights »
  Amit Daniely · Gal Vardi
- 2019 Poster: Blind Super-Resolution Kernel Estimation using an Internal-GAN »
  Sefi Bell-Kligler · Assaf Shocher · Michal Irani
- 2019 Oral: Blind Super-Resolution Kernel Estimation using an Internal-GAN »
  Sefi Bell-Kligler · Assaf Shocher · Michal Irani
- 2019 Poster: From voxels to pixels and back: Self-supervision in natural-image reconstruction from fMRI »
  Roman Beliy · Guy Gaziv · Assaf Hoogi · Francesca Strappini · Tal Golan · Michal Irani
- 2019 Poster: On the Power and Limitations of Random Features for Understanding Neural Networks »
  Gilad Yehudai · Ohad Shamir
- 2019 Poster: Controlling Neural Level Sets »
  Matan Atzmon · Niv Haim · Lior Yariv · Ofer Israelov · Haggai Maron · Yaron Lipman
- 2018 Poster: Are ResNets Provably Better than Linear Predictors? »
  Ohad Shamir
- 2018 Poster: Global Non-convex Optimization with Discretized Diffusions »
  Murat Erdogdu · Lester Mackey · Ohad Shamir
- 2016 Poster: Dimension-Free Iteration Complexity of Finite Sum Optimization Problems »
  Yossi Arjevani · Ohad Shamir
- 2016 Poster: Without-Replacement Sampling for Stochastic Gradient Methods »
  Ohad Shamir
- 2016 Oral: Without-Replacement Sampling for Stochastic Gradient Methods »
  Ohad Shamir
- 2015 Poster: Communication Complexity of Distributed Convex Learning and Optimization »
  Yossi Arjevani · Ohad Shamir
- 2014 Poster: Fundamental Limits of Online and Distributed Algorithms for Statistical Learning and Estimation »
  Ohad Shamir
- 2014 Poster: On the Computational Efficiency of Training Neural Networks »
  Roi Livni · Shai Shalev-Shwartz · Ohad Shamir
- 2013 Poster: Online Learning with Switching Costs and Other Adaptive Adversaries »
  Nicolò Cesa-Bianchi · Ofer Dekel · Ohad Shamir
- 2006 Poster: Similarity by Composition »
  Oren Boiman · Michal Irani