Tutorial
PrivacyML: Meaningful Privacy-Preserving Machine Learning and How To Evaluate AI Privacy
Mimee Xu · Dmitrii Usynin · Fazl Barez
West Meeting Room 109, 110
In the world of large model development, model details and training data are increasingly kept closed, pushing privacy to the forefront of machine learning. How do we protect the privacy of the data used to train a model while permitting more widespread data-sharing collaborations? How will individuals trust these technologies with their data? How do we verify that the integration of an individual's data is both useful to the rest of the participating federation and, more importantly, safe for the data owner? How do regulations integrate into this complex infrastructure?
These open questions require balancing the incentives of model developers, data-owning parties, and overseeing agencies. Many cryptographic solutions target these incentive problems, but do they cover all essential components of trustworthy data sharing? Are they practical, or likely to become practical soon?
In this tutorial, we attempt to answer questions about the specific capabilities of privacy technologies in three parts: (1) overarching incentive issues with respect to data and evaluations; (2) where cryptographic and optimisation solutions can help, delving deep into secure computation and machine unlearning for evaluations; and (3) cultural, societal, and research agendas relating to practically implementing these technologies.
Our website is here: https://privacyml.github.io/
We hope that, by identifying the boundaries of privacy technologies and providing a technical, structured framework for reasoning about these issues, we can empower a general audience to integrate these principles (and practical solutions) into their existing research. Those already interested in applying the technology can gain a deeper, hands-on understanding of implementation, useful for modelling and developing incentive-compatible solutions for their own work.