

Poster in Workshop: Privacy in Machine Learning (PriML) 2021

A Novel Self-Distillation Architecture to Defeat Membership Inference Attacks

Xinyu Tang · Saeed Mahloujifar · Liwei Song · Virat Shejwalkar · Amir Houmansadr · Prateek Mittal


Abstract:

Membership inference attacks are a key measure of privacy leakage in machine learning (ML) models: they aim to distinguish training members from non-members by exploiting the differential behavior of a model on member and non-member inputs. We propose a new framework for training privacy-preserving models that induces similar behavior on member and non-member inputs, mitigating practical membership inference attacks. Our framework, called SELENA, has two major components. The first component, and the core of our defense, is Split-AI, a novel ensemble architecture for training. We prove that the Split-AI architecture defends against a large family of membership inference attacks; however, it is susceptible to new adaptive attacks. We therefore add a second component, Self-Distillation, to protect against such stronger attacks: it (self-)distills the training dataset through the Split-AI ensemble and requires no external public datasets. Extensive experiments on major benchmark datasets show that our approach achieves a better trade-off between membership privacy and utility than previous defenses.
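The abstract only names the two components, so the following is a minimal, hypothetical Python sketch of how a split-and-aggregate ensemble plus self-distillation of this flavor could be wired together. Everything here is an assumption for illustration, not a detail confirmed by the abstract: the sub-model count `K`, the number of sub-models `L` that hold out each training point, the tiny `SoftmaxModel` stand-in for a real network, and the rule of averaging only the sub-models that never saw a given point.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class SoftmaxModel:
    """Tiny softmax-regression stand-in for an arbitrary classifier.
    It accepts soft (probability) targets, which the distillation step needs."""
    def fit(self, X, Y, lr=0.5, epochs=200):
        n, d = X.shape
        self.W = np.zeros((d, Y.shape[1]))
        for _ in range(epochs):
            P = softmax(X @ self.W)
            self.W -= lr * X.T @ (P - Y) / n  # cross-entropy gradient step
        return self
    def predict_proba(self, X):
        return softmax(X @ self.W)

def one_hot(y, c):
    Y = np.zeros((len(y), c))
    Y[np.arange(len(y)), y] = 1.0
    return Y

def train_split_ai(X, Y, K=5, L=2):
    """Hypothetical Split-AI step: train K sub-models so that each
    training point is held out of exactly L of them."""
    n = len(X)
    held_out = np.array([rng.choice(K, size=L, replace=False) for _ in range(n)])
    models = []
    for k in range(K):
        in_k = ~(held_out == k).any(axis=1)  # points sub-model k may train on
        models.append(SoftmaxModel().fit(X[in_k], Y[in_k]))
    return models, held_out

def soft_labels(models, held_out, X):
    """Answer each training point only with the sub-models that never saw it,
    so the ensemble behaves on members as it would on non-members."""
    P = np.stack([m.predict_proba(X) for m in models])  # (K, n, classes)
    return np.stack([P[held_out[i], i].mean(axis=0) for i in range(len(X))])

# Toy data: two Gaussian blobs.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
Y = one_hot(np.repeat([0, 1], 100), 2)

models, held_out = train_split_ai(X, Y)
soft = soft_labels(models, held_out, X)

# Hypothetical Self-Distillation step: the released model is trained on the
# Split-AI soft labels rather than the ground-truth labels, with no public data.
released = SoftmaxModel().fit(X, soft)
```

In this sketch the released model never directly fits the true labels of its training members, only the ensemble's "non-member-style" predictions, which is one plausible way the framework could induce similar behavior on members and non-members.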
