Poster in Workshop: Privacy in Machine Learning (PriML) 2021

Unsupervised Membership Inference Attacks Against Machine Learning Models

Yuefeng Peng


Abstract:

As a form of privacy leakage in machine learning (ML), membership inference (MI) attacks aim to infer whether given data samples were used to train a target ML model. Existing state-of-the-art MI attacks in the black-box setting adopt a so-called shadow model to perform transfer attacks. Such attacks achieve high inference accuracy but rest on strong adversarial assumptions, such as access to a dataset drawn from the same distribution as the target model's training data and knowledge of the target model's architecture. We propose a novel MI attack, called UMIA, which probes the target model in an unsupervised way without training any shadow model. We relax all of the adversarial assumptions above, demonstrating that MI attacks are feasible without any knowledge of the target model or its training set. We empirically show that, with far fewer adversarial assumptions and computational resources, UMIA performs on par with the state-of-the-art supervised MI attack.
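The abstract does not detail UMIA's procedure, but the following minimal sketch illustrates what a shadow-model-free, unsupervised MI attack can look like: query the target model in a black-box fashion, derive per-sample confidence features, and cluster them into presumed members and non-members. The function names, feature choices, and the use of k-means below are illustrative assumptions, not the paper's published implementation.

    import numpy as np
    from sklearn.cluster import KMeans

    def umia_sketch(target_predict_proba, samples, labels):
        """Hypothetical unsupervised MI attack: cluster per-sample confidence
        features from the target model and guess that the higher-confidence
        cluster contains training members. Illustrative assumptions only."""
        probs = np.asarray(target_predict_proba(samples))   # (n, num_classes) softmax outputs
        labels = np.asarray(labels)
        conf = probs[np.arange(len(probs)), labels]         # confidence on the true label
        entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)  # prediction uncertainty
        feats = np.column_stack([conf, -entropy])           # members: high conf, low entropy

        clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
        # Guess that the cluster with higher mean true-label confidence holds members.
        member = int(conf[clusters == 1].mean() > conf[clusters == 0].mean())
        return clusters == member                           # boolean membership guesses

Note that such an attack needs only black-box query access and no auxiliary training data, which is what makes the relaxed threat model described in the abstract plausible.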
