

Poster in Workshop: Regulatable ML: Towards Bridging the Gaps between Machine Learning Research and Regulations

A New Framework for Measuring Re-Identification Risk

CJ Carey · Travis Dick · Alessandro Epasto · Adel Javanmard · Josh Karlin · Shankar Kumar · Andres Munoz Medina · Vahab Mirrokni · Gabriel H. Nunes · Sergei Vassilvitskii · Peilin Zhong


Abstract:

Compact user representations (such as embeddings) form the backbone of personalization services. In this work, we present a new theoretical framework to measure re-identification risk in such user representations. Our framework, based on hypothesis testing, formally bounds the probability that an attacker can recover the identity of a user from their representation. As an application, we show that our framework is general enough to model important real-world applications such as Chrome's Topics API for interest-based advertising. We complement our theoretical bounds with provably good attack algorithms for re-identification, which we use to estimate the re-identification risk in the Topics API. We believe this work provides a rigorous and interpretable notion of re-identification risk, along with a framework for measuring it that can inform real-world applications.
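The sketch below illustrates the flavor of such an analysis: a toy Monte-Carlo estimate of re-identification risk for a Topics-like API, where an attacker observing per-epoch topic reports on two sites tries to link them with a simple likelihood-ratio (hypothesis-testing) score. All parameters (taxonomy size, top-k, noise rate, number of epochs, population size) and the plug-in scoring rule are illustrative assumptions for this sketch, not the exact statistic or settings analyzed in the paper.

```python
# Hedged sketch: toy re-identification attack on a Topics-like API.
# Assumed parameters; not the values or the exact test from the paper.
import numpy as np

rng = np.random.default_rng(0)

NUM_TOPICS = 350    # assumed taxonomy size
TOP_K = 5           # assumed number of top topics per user
NOISE_RATE = 0.05   # assumed probability of reporting a uniformly random topic
NUM_EPOCHS = 8      # assumed number of observed epochs per site
NUM_USERS = 200     # assumed population size

def sample_user_profile():
    """A user's stable interest profile: TOP_K distinct topics."""
    return rng.choice(NUM_TOPICS, size=TOP_K, replace=False)

def observe(profile):
    """One site's view of a user: one reported topic per epoch, with noise."""
    obs = np.empty(NUM_EPOCHS, dtype=int)
    for t in range(NUM_EPOCHS):
        if rng.random() < NOISE_RATE:
            obs[t] = rng.integers(NUM_TOPICS)   # uniformly random topic
        else:
            obs[t] = rng.choice(profile)        # one of the user's top-k topics
    return obs

def same_user_score(obs_a, obs_b):
    """Simple plug-in log-likelihood-ratio score for the hypothesis that two
    observation sequences come from the same user rather than two
    independent users: topics seen on both sites are far more likely under
    the 'same user' hypothesis."""
    shared = np.intersect1d(obs_a, obs_b).size
    p_same, p_diff = 1.0 / TOP_K, 1.0 / NUM_TOPICS
    return shared * (np.log(p_same) - np.log(p_diff))

# Simulate two sites observing the same population of users.
profiles = [sample_user_profile() for _ in range(NUM_USERS)]
site_a = [observe(p) for p in profiles]
site_b = [observe(p) for p in profiles]

# Attack: match each site-A sequence to the site-B sequence with the highest
# same-user score; the empirical re-identification rate is the fraction of
# users matched correctly.
correct = 0
for i, obs_a in enumerate(site_a):
    scores = [same_user_score(obs_a, obs_b) for obs_b in site_b]
    if int(np.argmax(scores)) == i:
        correct += 1

print(f"Estimated re-identification rate: {correct / NUM_USERS:.2%}")
```

Running this with larger populations or fewer epochs shows the expected trade-off: the empirical re-identification rate drops as the attacker's observations carry less signal relative to the size of the candidate pool.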
