

Poster in Workshop: Privacy in Machine Learning (PriML) 2021

Membership Inference Attacks Against NLP Classification Models

Virat Shejwalkar · Huseyin A Inan · Amir Houmansadr · Robert Sim


Abstract:

The success of natural language processing (NLP) has made NLP applications commonplace. Unfortunately, recent research has shown that privacy may be at stake, as these models are often trained on private user data. While privacy risks have been demonstrated in text generation settings, the privacy risks of text classification settings, which subsume myriad downstream applications, remain largely unexplored. In this work, we study the susceptibility of NLP classification models to membership inference (MI), a fundamental type of privacy leakage. We design a comprehensive suite of attacks to assess the risk of sample-level MI, as well as that of the relatively unexplored user-level MI. We introduce novel user-level MI attacks that outperform existing attacks, and we conduct experiments on Transformer-based and RNN-based NLP models. Our evaluations show that user-level MI is significantly stronger than sample-level MI. We further perform in-depth analyses showing the effect of various NLP-specific parameters on MI against NLP classification models.
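To make the attack setting concrete, the sketch below shows a generic loss-threshold membership inference baseline against a text classifier, with a simple per-user aggregation for user-level MI. This is a minimal illustration of the threat model, not the paper's specific attacks; the model/tokenizer interface (HuggingFace-style), the threshold, and all variable names are assumptions introduced here.

```python
# Minimal sketch of a loss-threshold membership inference (MI) baseline
# against a text classifier. This is illustrative only and does NOT
# reproduce the paper's proposed attacks; the model/tokenizer API and the
# threshold value are assumptions.
import torch
import torch.nn.functional as F


def sample_level_mi(model, tokenizer, texts, labels, threshold):
    """Guess per-sample membership: low loss on a sample suggests 'member'."""
    model.eval()
    guesses = []
    with torch.no_grad():
        for text, label in zip(texts, labels):
            inputs = tokenizer(text, return_tensors="pt", truncation=True)
            logits = model(**inputs).logits               # [1, num_classes]
            loss = F.cross_entropy(logits, torch.tensor([label]))
            guesses.append(loss.item() < threshold)       # low loss -> member
    return guesses


def user_level_mi(per_sample_losses_by_user, threshold):
    """Guess per-user membership by aggregating that user's sample losses."""
    return {
        user: (sum(losses) / len(losses)) < threshold     # low mean loss -> user seen
        for user, losses in per_sample_losses_by_user.items()
    }
```

In this baseline, the sample-level attack thresholds the model's loss on each example, while the user-level attack averages losses over all of a user's examples before thresholding; the paper's evaluation compares these two granularities and finds user-level MI to be the stronger signal.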
