Workshop: Privacy Preserving Machine Learning - PriML and PPML Joint Edition
Borja Balle, James Bell, Aurélien Bellet, Kamalika Chaudhuri, Adria Gascon, Antti Honkela, Antti Koskela, Casey Meehan, Olga Ohrimenko, Mi Jung Park, Mariana Raykova, Mary Anne Smart, Yu-Xiang Wang, Adrian Weller
Fri, Dec 11th, 2020 @ 08:00 – 17:25 GMT
Abstract: This one-day workshop focuses on privacy-preserving techniques for machine learning and disclosure in large-scale data analysis, in both the distributed and centralized settings, and on scenarios that highlight the importance of and need for these techniques (e.g., via privacy attacks). There is growing interest from the Machine Learning (ML) community in leveraging cryptographic techniques such as Multi-Party Computation (MPC) and Homomorphic Encryption (HE) for privacy-preserving training and inference, as well as Differential Privacy (DP) for disclosure. Simultaneously, the systems security and cryptography community has proposed various secure frameworks for ML. We encourage both theory- and application-oriented submissions exploring a range of these approaches. Additionally, given the tension between the adoption of machine learning technologies and ethical, technical, and regulatory issues around privacy, as highlighted during the COVID-19 pandemic, we invite submissions for the special track on this topic.
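As a point of reference for the DP-focused talks and posters below, here is a minimal, illustrative sketch (not taken from any of the accepted papers) of the Laplace mechanism, one of the basic building blocks of differential privacy: a query's true answer is released after adding noise calibrated to the query's sensitivity and the privacy parameter epsilon. The dataset and parameter values are hypothetical.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with epsilon-differential privacy via Laplace noise.

    `sensitivity` is the L1 sensitivity of the query: the maximum change in
    its output when one individual's record is added or removed.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon          # noise scale grows as epsilon shrinks
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Hypothetical example: privately release a counting query over a toy dataset.
ages = np.array([23, 35, 41, 29, 52, 38])
true_count = int(np.sum(ages > 30))        # counting queries have sensitivity 1
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count: {true_count}, private release: {private_count:.2f}")
```

Smaller epsilon gives stronger privacy but noisier releases; calibrating noise to sensitivity in this way is the same idea underlying many of the DP contributions below, from DP-SGD to the shuffled model.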
Schedule
09:20 – 09:30 GMT
Welcome & Introduction
09:30 – 10:00 GMT
Invited Talk #1: Reza Shokri (National University of Singapore)
Reza Shokri
10:30 – 11:00 GMT
Invited Talk Q&A with Reza and Katrina
11:00 – 11:10 GMT
Break
11:10 – 11:25 GMT
Contributed Talk #1: POSEIDON: Privacy-Preserving Federated Neural Network Learning
Sinem Sav
11:25 – 11:30 GMT
Contributed Talk Q&A
11:30 – 13:00 GMT
Poster Session & Social on Gather.Town
16:30 – 16:40 GMT
Welcome & Introduction
17:30 – 18:00 GMT
Invited Talk Q&A with Carmela and Dan
18:00 – 18:10 GMT
Break
18:10 – 19:10 GMT
Poster Session & Social on Gather.Town
19:10 – 19:20 GMT
Break
19:20 – 19:35 GMT
Contributed Talk #2: On the (Im)Possibility of Private Machine Learning through Instance Encoding
Nicholas Carlini
19:35 – 19:50 GMT
Contributed Talk #3: Poirot: Private Contact Summary Aggregation
Chenghong Wang
19:50 – 20:05 GMT
Contributed Talk #4: Greenwoods: A Practical Random Forest Framework for Privacy Preserving Training and Prediction
Harsh Chaudhari
20:05 – 20:20 GMT
Contributed Talks Q&A
20:20 – 20:25 GMT
Break
20:25 – 20:40 GMT
Contributed Talk #5: Shuffled Model of Federated Learning: Privacy, Accuracy, and Communication Trade-offs
Deepesh Data
20:40 – 20:55 GMT
Contributed Talk #6: Sample-efficient proper PAC learning with approximate differential privacy
Badih Ghazi
20:55 – 21:10 GMT
Contributed Talk #7: Training Production Language Models without Memorizing User Data
Swaroop Ramaswamy
21:10 – 21:25 GMT
Contributed Talks Q&A
Posters
Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling
Vitaly Feldman
Asymmetric Private Set Intersection with Applications to Contact Tracing and Private Vertical Federated Machine Learning
Bogdan Cebere
Characterizing Private Clipped Gradient Descent on Convex Generalized Linear Problems
Shuang Song
Distributed Differentially Private Averaging with Improved Utility and Robustness to Malicious Parties
Aurélien Bellet
On the Sample Complexity of Privately Learning Unbounded High-Dimensional Gaussians
Ishaq Aden-Ali
Towards General-purpose Infrastructure for Protecting Scientific Data Under Study
Kritika Prakash
On Polynomial Approximations for Privacy-Preserving and Verifiable ReLU Networks
Salman Avestimehr
PrivAttack: A Membership Inference Attack Framework Against Deep Reinforcement Learning Agents
Maziar Gomrokchi
DySan: Dynamically sanitizing motion sensor data against sensitive inferences through adversarial networks
Théo Jourdan
Randomness Beyond Noise: Differentially Private Optimization Improvement through Mixup
Hanshen Xiao
Enabling Fast Differentially Private SGD via Static Graph Compilation and Batch-Level Parallelism
Pranav Subramani
Local Differentially Private Regret Minimization in Reinforcement Learning
Evrard Garcelon
DAMS: Meta-estimation of private sketch data structures for differentially private contact tracing
Praneeth Vepakomma
Accuracy, Interpretability and Differential Privacy via Explainable Boosting
Harsha Nori
Privacy in Multi-armed Bandits: Fundamental Definitions and Lower Bounds on Regret
Debabrota Basu
Generative Adversarial User Privacy in Lossy Single-Server Information Retrieval
Mark Weng
A Principled Approach to Learning Stochastic Representations for Privacy in Deep Neural Inference
FatemehSadat Mireshghallah
Tight Approximate Differential Privacy for Discrete-Valued Mechanisms Using FFT
Antti Koskela
Privacy Regularization: Joint Privacy-Utility Optimization in Language Models
FatemehSadat Mireshghallah