

Poster in Workshop: Regulatable ML: Towards Bridging the Gaps between Machine Learning Research and Regulations

You Still See Me: How Data Protection Supports the Architecture of ML Surveillance

Rui-Jie Yew · Lucy Qin · Suresh Venkatasubramanian


Abstract:

Data (as well as computation) is key to the functionality of ML systems. Data protection has therefore become a focal point of policy proposals and existing laws that are pertinent to the governance of ML systems. Privacy laws and legal scholarship have long emphasized the privacy responsibilities that developers have to protect individual data subjects. As a consequence, technical methods for privacy preservation have been touted as solutions to prevent intrusions into individual data in the development of ML systems while preserving their resulting functionality. Further, privacy-preserving machine learning (PPML) has been offered up as a way to address the tension between being "seen" and "mis-seen": to build models that can be fair, accurate, and conservative in data use. However, a myopic focus on privacy-preserving machine learning obscures broader privacy harms facilitated by ML models. In this paper, we argue that the use of PPML techniques to "un-see" data subjects introduces privacy costs of a fundamentally different nature. Your data may not be used in its raw or "personal" form, but models built from that data still make predictions and influence you and people like you. Moreover, PPML has allowed data collectors to excavate crevices of data that no one could touch before. We illustrate these privacy costs with an example of targeted advertising and models built with private set intersection.
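For readers unfamiliar with the primitive referenced above, the sketch below is a toy Diffie-Hellman-style private set intersection (PSI) between two hypothetical parties (an advertiser and a platform matching customer identifiers); the group parameters, party names, and identifiers are illustrative assumptions, not the construction or deployment discussed in the paper, and a real system would use an elliptic-curve group via a vetted cryptographic library.

```python
import hashlib
import secrets

# Toy Diffie-Hellman-based PSI sketch. Parameters are for illustration only
# and are NOT secure; real deployments use elliptic-curve groups and audited
# libraries.
P = 2**127 - 1  # a Mersenne prime, chosen only to keep the demo short


def h(item: str) -> int:
    """Hash an identifier into the multiplicative group mod P."""
    digest = hashlib.sha256(item.encode()).digest()
    return (int.from_bytes(digest, "big") % (P - 2)) + 2


def blind(items, secret):
    """Raise each hashed item to a party's secret exponent mod P."""
    return {pow(h(x), secret, P): x for x in items}


# Hypothetical inputs: each party holds a set of customer identifiers.
advertiser_items = {"alice@example.com", "bob@example.com"}
platform_items = {"bob@example.com", "carol@example.com"}

a = secrets.randbelow(P - 2) + 1  # advertiser's secret exponent
b = secrets.randbelow(P - 2) + 1  # platform's secret exponent

# Each party blinds its own items and exchanges only the blinded values.
adv_blinded = blind(advertiser_items, a)
plat_blinded = blind(platform_items, b)

# Each side re-blinds the other's values with its own exponent, yielding
# H(x)^(a*b); equal identifiers collide, so only the overlap is revealed.
adv_double = {pow(v, b, P) for v in adv_blinded}                   # platform applies b
plat_double = {pow(v, a, P): x for v, x in plat_blinded.items()}   # advertiser applies a

intersection = {x for v, x in plat_double.items() if v in adv_double}
print(intersection)  # {'bob@example.com'}
```

The point of the sketch is that neither party hands over its raw customer list, yet the matched overlap can still feed downstream models and ad targeting, which is the kind of privacy cost the abstract describes.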
