Workshop: Privacy Preserving Machine Learning - PriML and PPML Joint Edition

Borja Balle, James Bell, Aurélien Bellet, Kamalika Chaudhuri, Adria Gascon, Antti Honkela, Antti Koskela, Casey Meehan, Olga Ohrimenko, Mi Jung Park, Mariana Raykova, Mary Anne Smart, Yu-Xiang Wang, Adrian Weller

2020-12-11T00:00:00-08:00 - 2020-12-11T09:25:00-08:00
Abstract: This one-day workshop focuses on privacy-preserving techniques for machine learning and disclosure in large-scale data analysis, both in distributed and centralized settings, and on scenarios that highlight the importance of and need for these techniques (e.g., via privacy attacks). There is growing interest from the Machine Learning (ML) community in leveraging cryptographic techniques such as Multi-Party Computation (MPC) and Homomorphic Encryption (HE) for privacy-preserving training and inference, as well as Differential Privacy (DP) for disclosure. Simultaneously, the systems security and cryptography communities have proposed various secure frameworks for ML. We encourage both theory- and application-oriented submissions exploring a range of approaches listed below. Additionally, given the tension between the adoption of machine learning technologies and ethical, technical, and regulatory issues around privacy, as highlighted during the COVID-19 pandemic, we invite submissions to a special track on this topic.
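
As background for the techniques named in the abstract, the sketch below illustrates the simplest Differential Privacy primitive, the Laplace mechanism, applied to a counting query. It is provided only as context for readers new to DP; the function and parameter names (private_count, epsilon) are illustrative and do not come from any workshop paper.

# Minimal, illustrative sketch of epsilon-DP via the Laplace mechanism.
# Not part of the workshop program; names are hypothetical.
import numpy as np

def private_count(records, predicate, epsilon):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices for epsilon-DP.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: a noisy count of users over 40 under a budget of epsilon = 0.5.
ages = [23, 45, 31, 67, 52, 29]
print(private_count(ages, lambda a: a > 40, epsilon=0.5))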

Schedule

2020-12-11T01:20:00-08:00 - 2020-12-11T01:30:00-08:00
Welcome & Introduction
2020-12-11T01:30:00-08:00 - 2020-12-11T02:00:00-08:00
Invited Talk #1: Reza Shokri (National University of Singapore)
Reza Shokri
2020-12-11T02:00:00-08:00 - 2020-12-11T02:30:00-08:00
Invited Talk #2: Katrina Ligett (Hebrew University)
Katrina Ligett
2020-12-11T02:30:00-08:00 - 2020-12-11T03:00:00-08:00
Invited Talk Q&A with Reza and Katrina
2020-12-11T03:00:00-08:00 - 2020-12-11T03:10:00-08:00
Break
2020-12-11T03:10:00-08:00 - 2020-12-11T03:25:00-08:00
Contributed Talk #1: POSEIDON: Privacy-Preserving Federated Neural Network Learning
Sinem Sav
2020-12-11T03:25:00-08:00 - 2020-12-11T03:30:00-08:00
Contributed Talk Q&A
2020-12-11T03:30:00-08:00 - 2020-12-11T05:00:00-08:00
Poster Session & Social on Gather.Town
2020-12-11T08:30:00-08:00 - 2020-12-11T08:40:00-08:00
Welcome & Introduction
2020-12-11T08:40:00-08:00 - 2020-12-11T09:00:00-08:00
Invited Talk #3: Carmela Troncoso (EPFL)
Carmela Troncoso
2020-12-11T09:00:00-08:00 - 2020-12-11T09:30:00-08:00
Invited Talk #4: Dan Boneh (Stanford University)
Dan Boneh
2020-12-11T09:30:00-08:00 - 2020-12-11T10:00:00-08:00
Invited Talk Q&A with Carmela and Dan
2020-12-11T10:00:00-08:00 - 2020-12-11T10:10:00-08:00
Break
2020-12-11T10:10:00-08:00 - 2020-12-11T11:10:00-08:00
Poster Session & Social on Gather.Town
2020-12-11T11:10:00-08:00 - 2020-12-11T11:20:00-08:00
Break
2020-12-11T11:20:00-08:00 - 2020-12-11T11:35:00-08:00
Contributed Talk #2: On the (Im)Possibility of Private Machine Learning through Instance Encoding
Nicholas Carlini
2020-12-11T11:35:00-08:00 - 2020-12-11T11:50:00-08:00
Contributed Talk #3: Poirot: Private Contact Summary Aggregation
Chenghong Wang
2020-12-11T11:50:00-08:00 - 2020-12-11T12:05:00-08:00
Contributed Talk #4: Greenwoods: A Practical Random Forest Framework for Privacy Preserving Training and Prediction
Harsh Chaudhari
2020-12-11T12:05:00-08:00 - 2020-12-11T12:20:00-08:00
Contributed Talks Q&A
2020-12-11T12:20:00-08:00 - 2020-12-11T12:25:00-08:00
Break
2020-12-11T12:25:00-08:00 - 2020-12-11T12:40:00-08:00
Contributed Talk #5: Shuffled Model of Federated Learning: Privacy, Accuracy, and Communication Trade-offs
Deepesh Data
2020-12-11T12:40:00-08:00 - 2020-12-11T12:55:00-08:00
Contributed Talk #6: Sample-efficient proper PAC learning with approximate differential privacy
Badih Ghazi
2020-12-11T12:55:00-08:00 - 2020-12-11T13:10:00-08:00
Contributed Talk #7: Training Production Language Models without Memorizing User Data
Swaroop Ramaswamy
2020-12-11T13:10:00-08:00 - 2020-12-11T13:25:00-08:00
Contributed Talks Q&A

Posters

Network Generation with Differential Privacy
Xu Zheng
Adversarial Attacks and Countermeasures on Private Training in MPC
Matthew Jagielski
Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling
Vitaly Feldman
Asymmetric Private Set Intersection with Applications to Contact Tracing and Private Vertical Federated Machine Learning
Bogdan Cebere
Individual Privacy Accounting via a Rényi Filter
Vitaly Feldman
Twinify: A software package for differentially private data release
Joonas Jälkö
Machine Learning with Membership Privacy via Knowledge Transfer
Virat Shejwalkar
Quantifying Privacy Leakage in Graph Embedding
Antoine Boutet
Privacy Amplification by Decentralization
Aurélien Bellet
Privacy Attacks on Machine Unlearning
Ji Gao
Characterizing Private Clipped Gradient Descent on Convex Generalized Linear Problems
Shuang Song
Optimal Client Sampling for Federated Learning
Samuel Horváth
Secure Medical Image Analysis with CrypTFlow
Javier Alvarez-Valle
Distributed Differentially Private Averaging with Improved Utility and Robustness to Malicious Parties
Aurélien Bellet
On the Sample Complexity of Privately Learning Unbounded High-Dimensional Gaussians
Ishaq Aden-Ali
Privacy Risks in Embedded Deep Learning
Virat Shejwalkar
New Challenges for Fully Homomorphic Encryption
Marc Joye
Differentially Private Generative Models Through Optimal Transport
Karsten Kreis
SOTERIA: In Search of Efficient Neural Networks for Private Inference
Reza Shokri
Towards General-purpose Infrastructure for Protecting Scientific Data Under Study
Kritika Prakash
On Polynomial Approximations for Privacy-Preserving and Verifiable ReLU Networks
Salman Avestimehr
Multi-Headed Global Model for handling Non-IID data
Himanshu Arora
Privacy-preserving XGBoost Inference
Xianrui Meng
PrivAttack: A Membership Inference Attack Framework Against Deep Reinforcement Learning Agents
Maziar Gomrokchi
Effectiveness of MPC-friendly Softmax Replacement
Marcel Keller
SparkFHE: Distributed Dataflow Framework with Fully Homomorphic Encryption
Peizhao Hu
Does Domain Generalization Provide Inherent Membership Privacy?
Divyat Mahajan
Privacy Preserving Chatbot Conversations
Debmalya Biswas
Dynamic Channel Pruning for Privacy
Abhishek Singh
Fairness in the Eyes of the Data: Certifying Machine-Learning Models
Carsten Baum
Differentially Private Bayesian Inference For GLMs
Joonas Jälkö
Challenges of Differentially Private Prediction in Healthcare Settings
Nicolas Papernot
Revisiting Membership Inference Under Realistic Assumptions
Bargav Jayaraman
Robustness Threats of Differential Privacy
Ivan Oseledets
Dataset Inference: Ownership Resolution in Machine Learning
Nicolas Papernot
DYSAN: Dynamically sanitizing motion sensor data against sensitive inferences through adversarial networks
Théo Jourdan
SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning
Nishat Koti
Randomness Beyond Noise: Differentially Private Optimization Improvement through Mixup
Hanshen Xiao
Enabling Fast Differentially Private SGD via Static Graph Compilation and Batch-Level Parallelism
Pranav Subramani
Secure Single-Server Aggregation with (Poly)Logarithmic Overhead
James Bell
Local Differentially Private Regret Minimization in Reinforcement Learning
Evrard Garcelon
Data-oblivious training for XGBoost models
Chester Leung
DAMS: Meta-estimation of private sketch data structures for differentially private contact tracing
Praneeth Vepakomma
Robust and Private Learning of Halfspaces
Badih Ghazi
Data Appraisal Without Data Sharing
Mimee Xu
MP2ML: A Mixed-Protocol Machine Learning Framework for Private Inference
Fabian Boemer
Differentially private cross-silo federated learning
Mikko Heikkilä
Accuracy, Interpretability and Differential Privacy via Explainable Boosting
Harsha Nori
Differentially Private Stochastic Coordinate Descent
Celestine Mendler-Dünner
Privacy in Multi-armed Bandits: Fundamental Definitions and Lower Bounds on Regret
Debabrota Basu
Unifying Privacy Loss for Data Analytics
Ryan Rogers
Generative Adversarial User Privacy in Lossy Single-Server Information Retrieval
Mark Weng
A Principled Approach to Learning Stochastic Representations for Privacy in Deep Neural Inference
FatemehSadat Mireshghallah
Tight Approximate Differential Privacy for Discrete-Valued Mechanisms Using FFT
Antti Koskela
Mitigating Leakage in Federated Learning with Trusted Hardware
Javad Ghareh Chamani
Understanding Unintended Memorization in Federated Learning
Om Thakkar
CrypTen: Secure Multi-Party Computation Meets Machine Learning
Shubho Sengupta
Privacy Regularization: Joint Privacy-Utility Optimization in Language Models
FatemehSadat Mireshghallah