Workshop
Privacy Preserving Machine Learning
Adria Gascon · Aurélien Bellet · Niki Kilbertus · Olga Ohrimenko · Mariana Raykova · Adrian Weller

Sat Dec 08 05:00 AM -- 03:30 PM (PST) @ Room 512 CDGH
Event URL: https://ppml-workshop.github.io/ppml/


Description

This one-day workshop focuses on privacy-preserving techniques for training, inference, and disclosure in large-scale data analysis, in both distributed and centralized settings. We have observed increasing interest from the ML community in leveraging cryptographic techniques such as Multi-Party Computation (MPC) and Homomorphic Encryption (HE) for privacy-preserving training and inference, as well as Differential Privacy (DP) for disclosure. Simultaneously, the systems security and cryptography communities have proposed various secure frameworks for ML. We encourage both theory- and application-oriented submissions exploring a range of approaches, including:

- secure multi-party computation techniques for ML
- homomorphic encryption techniques for ML
- hardware-based approaches to privacy-preserving ML
- centralized and decentralized protocols for learning on encrypted data
- differential privacy: theory, applications, and implementations
- statistical notions of privacy including relaxations of differential privacy
- empirical and theoretical comparisons between different notions of privacy
- trade-offs between privacy and utility

We believe a forum that unifies these different perspectives and starts a discussion about the relative merits of each approach will be very valuable. The workshop will also serve as a venue for connecting people from different communities interested in this problem, and will hopefully foster fruitful long-term collaborations.

Sat 5:30 a.m. - 5:50 a.m. [iCal]
Welcome and introduction (Introduction)
Sat 5:50 a.m. - 6:40 a.m. [iCal]
Invited talk 1: Scalable PATE and the Secret Sharer (Talk)
Ian Goodfellow
Sat 6:40 a.m. - 7:30 a.m. [iCal]

In the mid-eighties, researchers in computational learning theory pointed the way to examples of hard learning tasks, such as learning parity with noise (LPN) and learning with errors (LWE), which have been extremely useful for building sophisticated cryptographic primitives such as homomorphic encryption, whose security rests on LPN and LWE being hard to learn.

Today, with the rise of machine learning algorithms that use large amounts of data to produce procedures with the potential to replace human decision processes, cryptography in turn stands to provide machine learning with tools for keeping data private during both the training and inference phases of ML, and with methods to verify the adherence of models to the data. These promise to be important steps in ensuring a safe transition of power from human to algorithmic decision making.

Shafi Goldwasser
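
To make the LPN/LWE connection above concrete, here is a minimal NumPy sketch of generating LWE samples (A, b = A s + e mod q). The parameters n, m, q and the noise range are toy values chosen purely for illustration; real LWE-based homomorphic encryption uses much larger, carefully calibrated parameters.

import numpy as np

# Toy parameters -- real LWE-based schemes use much larger, carefully chosen values.
n, m, q = 32, 64, 3329          # secret dimension, number of samples, modulus
rng = np.random.default_rng(0)

# Secret vector s and small error e (a small uniform noise stands in for a discrete Gaussian).
s = rng.integers(0, q, size=n)
e = rng.integers(-2, 3, size=m)

# Public LWE samples: (A, b = A s + e mod q).
A = rng.integers(0, q, size=(m, n))
b = (A @ s + e) % q

# The learning task: recover s (or distinguish b from uniform) given only (A, b).
# The conjectured hardness of this task underpins LWE-based homomorphic encryption.
print(A.shape, b.shape)
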
Sat 7:30 a.m. - 8:00 a.m. [iCal]
Coffee Break 1 (Break)
Sat 8:00 a.m. - 8:15 a.m. [iCal]

Many commonly used learning algorithms work by iteratively updating an intermediate solution using one or a few data points in each iteration. Analysis of differential privacy for such algorithms often involves ensuring privacy of each step and then reasoning about the cumulative privacy cost of the algorithm. This is enabled by composition theorems for differential privacy that allow the release of all intermediate results. In this work, we demonstrate that for contractive iterations, not releasing the intermediate results strongly amplifies the privacy guarantees.

We describe several applications of this new analysis technique to solving convex optimization problems via noisy stochastic gradient descent. For example, we demonstrate that a relatively small number of non-private data points from the same distribution can be used to close the gap between private and non-private convex optimization. In addition, we demonstrate that we can achieve guarantees similar to those obtainable using the privacy-amplification-by-sampling technique in several natural settings where that technique cannot be applied.

Vitaly Feldman
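
As a rough illustration of the setting analyzed in this talk (not of the paper's analysis itself), the sketch below runs noisy projected SGD that touches one data point per step and releases only the final iterate; the clipping norm, noise scale, and projection radius are illustrative choices.

import numpy as np

def noisy_sgd(X, y, epochs=1, lr=0.1, clip_norm=1.0, sigma=1.0, seed=0):
    """Projected noisy SGD: one data point per step, Gaussian noise at each step.
    Intermediate iterates stay internal and only the final iterate is released,
    which is the regime where privacy amplification by iteration applies."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # Gradient of the squared loss on one example, clipped to bound sensitivity.
            g = (X[i] @ w - y[i]) * X[i]
            g *= min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
            # Noisy update, then projection onto a bounded ball (a contractive map).
            w = w - lr * (g + sigma * rng.standard_normal(d))
            w *= min(1.0, 10.0 / (np.linalg.norm(w) + 1e-12))
    return w  # only this final model is released

# Toy usage
X = np.random.default_rng(1).standard_normal((200, 5))
y = X @ np.ones(5)
w_priv = noisy_sgd(X, y)
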
Sat 8:15 a.m. - 8:30 a.m. [iCal]

We study the problem of subsampling in differential privacy (DP), a question that underlies many successful differentially private machine learning algorithms. Specifically, we provide a tight upper bound on the Rényi Differential Privacy (RDP) parameters for algorithms that (1) subsample the dataset and then (2) apply a randomized mechanism M to the subsample, in terms of the RDP parameters of M and the subsampling probability parameter. Our results generalize the moments accountant technique, developed by Abadi et al. [CCS'16] for the Gaussian mechanism, to any subsampled RDP mechanism.

Yu-Xiang Wang
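
The subsample-then-randomize pattern analyzed in this talk can be sketched as follows: a Poisson-subsampled Gaussian mechanism applied to a clipped sum (the function name and parameter values below are illustrative only; the tight RDP bound is the paper's contribution and is not reproduced here).

import numpy as np

def subsampled_gaussian_sum(data, q=0.05, clip=1.0, sigma=2.0, seed=0):
    """Poisson-subsample the dataset with probability q, then apply the
    Gaussian mechanism to the clipped sum of the subsample. The paper bounds
    the RDP of this composite mechanism in terms of the RDP of the Gaussian
    mechanism and the subsampling probability q."""
    rng = np.random.default_rng(seed)
    mask = rng.random(len(data)) < q              # step (1): Poisson subsampling
    contrib = np.clip(data[mask], -clip, clip)    # bound each record's contribution
    return contrib.sum() + rng.normal(0.0, sigma * clip)  # step (2): mechanism M

# Without subsampling, the Gaussian mechanism with noise multiplier sigma has
# RDP epsilon(alpha) = alpha / (2 * sigma**2); subsampling with probability q
# amplifies (reduces) this cost, which is what the tight bound quantifies.
data = np.random.default_rng(2).uniform(-3, 3, size=10_000)
print(subsampled_gaussian_sum(data))
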
Sat 8:30 a.m. - 8:45 a.m. [iCal]

In this work we explore the power of the hybrid model of differential privacy (DP) proposed in the Blender work, where some users desire the guarantees of the local model of DP and others are content with receiving the trusted-curator model guarantees. In particular, we study the accuracy of mean estimation algorithms for arbitrary distributions with bounded support. We show that a hybrid mechanism which combines the sample mean estimates obtained from the two groups in an optimally weighted convex combination performs a constant factor better, for a wide range of sample sizes, than natural benchmarks. We analyze how this improvement factor is parameterized by the problem setting and how it varies with sample size.

Yatharth A Dubey
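
A hedged sketch of the hybrid idea: combine a local-model estimate (per-user Laplace noise) with a trusted-curator estimate (one Laplace draw on the opt-in group's mean) via a convex combination. The inverse-variance weighting below is only an illustrative stand-in for the optimally weighted combination derived in the paper, and the helper name and parameters are made up for the example.

import numpy as np

def hybrid_mean(x_local, x_central, eps=1.0, lo=0.0, hi=1.0, seed=0):
    """Combine a local-DP mean estimate (Laplace noise added per user) with a
    central-DP mean estimate (one Laplace draw on the group mean). Weights are
    inverse-variance here for illustration; the paper derives the optimal
    convex combination."""
    rng = np.random.default_rng(seed)
    width = hi - lo

    # Local model: each opt-out user perturbs their own value.
    n_l = len(x_local)
    local_reports = x_local + rng.laplace(0.0, width / eps, size=n_l)
    mean_l = local_reports.mean()
    var_l = 2 * (width / eps) ** 2 / n_l          # noise variance of the local estimate

    # Trusted-curator model: noise added once to the aggregate.
    n_c = len(x_central)
    mean_c = x_central.mean() + rng.laplace(0.0, width / (eps * n_c))
    var_c = 2 * (width / (eps * n_c)) ** 2

    # Convex combination weighted toward the lower-variance estimate.
    w = var_c / (var_l + var_c)
    return w * mean_l + (1 - w) * mean_c

rng = np.random.default_rng(4)
print(hybrid_mean(rng.random(100_000), rng.random(1_000)))
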
Sat 8:45 a.m. - 9:00 a.m. [iCal]
Sensitive statistics are often collected across sets of users, with reports collected repeatedly over time. For example, trends in users' private preferences or software usage may be monitored via such reports. We study the collection of such statistics in the local differential privacy (LDP) model, when each user's value may change only a small number of times. We describe an algorithm whose privacy cost is polylogarithmic in the number of times the statistic is collected. More fundamentally---by building on anonymity of the users' reports---we also demonstrate how the privacy cost of our LDP algorithm can actually be much lower when viewed in the central model of differential privacy. We show, via a new and general technique, that any permutation-invariant algorithm satisfying $\varepsilon$-local differential privacy will satisfy $(O(\varepsilon \sqrt{\log \frac 1 \delta} / \sqrt{n}), \delta)$-central differential privacy. In the process, we clarify how the high noise and $\sqrt{n}$ overhead of LDP protocols are a consequence of their being significantly more private in the central model. As a final, practical corollary, our results also imply that industrial deployments of LDP mechanisms may have a much lower privacy cost than their advertised $\varepsilon$ would indicate---at least if reports are anonymized.
Úlfar Erlingsson
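
As an illustration of the anonymized-LDP setting (not of the paper's general theorem), the sketch below has each user apply binary randomized response, which satisfies ε-local DP, while the collector only sees the shuffled reports; the debiasing step recovers an estimate of the true frequency. The function names are made up for the example.

import numpy as np

def rr_report(bit, eps, rng):
    """Binary randomized response: satisfies eps-local DP for a single bit."""
    p_truth = np.exp(eps) / (np.exp(eps) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

def anonymized_collection(bits, eps=1.0, seed=0):
    """Each user runs randomized response locally; the collector only sees the
    shuffled (anonymized) multiset of reports -- the setting in which the
    O(eps * sqrt(log(1/delta)) / sqrt(n)) central-DP guarantee applies."""
    rng = np.random.default_rng(seed)
    reports = np.array([rr_report(b, eps, rng) for b in bits])
    rng.shuffle(reports)                      # anonymity: drop the link to users
    # Debias the aggregate to estimate the true frequency of 1s.
    p = np.exp(eps) / (np.exp(eps) + 1.0)
    return (reports.mean() - (1 - p)) / (2 * p - 1)

bits = np.random.default_rng(3).integers(0, 2, size=100_000)
print(anonymized_collection(bits))
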
Sat 9:00 a.m. - 10:30 a.m. [iCal]
Lunch Break (Break)
Sat 10:30 a.m. - 11:20 a.m. [iCal]

Much data analysis today is done on sensitive data, and particular privacy challenges arise when this data is also structured. In this talk I will describe two such challenges in the privacy-preserving analysis of complex, structured data that we have been working on in my group.

The first is continual release of graph statistics in an online manner from an expanding graph, which is motivated by a problem in HIV epidemiology. Even though node-differentially-private release of graph statistics is highly challenging, I will describe how we obtain a differentially private solution for this problem that performs better than the natural sequential-composition baseline.

Next, I will talk about the analysis of sensitive, structured, correlated data while still preserving the privacy of events in the data. Differential privacy does not adequately address privacy issues in this kind of data, and hence we will look at a form of inferential privacy, called Pufferfish, that is more appropriate. We will provide mechanisms, establish their composition properties, and finally evaluate them on real data consisting of physical activity measurements across time.

Kamalika Chaudhuri
Sat 11:20 a.m. - 12:10 p.m. [iCal]

This talk will present a partial survey of the proposed (and implemented) models for private analysis of distributed data, together with some new results.

Adam Smith
Sat 12:10 p.m. - 12:30 p.m. [iCal]
Coffee Break 2 (Break)
Sat 12:30 p.m. - 12:45 p.m. [iCal]

Developing a differentially private deep learning algorithm is challenging, due to the difficulty of analyzing the sensitivity of the objective functions typically used to train deep neural networks. Many existing methods resort to the stochastic gradient descent algorithm and apply a pre-defined sensitivity to the gradients for privatizing weights. However, their slow convergence typically yields a high cumulative privacy loss. Here, we take a different route by employing the method of auxiliary coordinates, which allows us to independently update the weights per layer by optimizing a per-layer objective function. This objective function can be well approximated by a low-order Taylor expansion, in which sensitivity analysis becomes tractable. We perturb the coefficients of the expansion for privacy and optimize the perturbed objective using more advanced optimization routines than SGD for faster convergence. We empirically show that our algorithm provides good model quality under a modest privacy budget.

Frederik Harder
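
A hedged sketch of the "perturb a low-order Taylor expansion, then optimize" pattern, shown here for a single logistic-regression-style objective rather than the paper's per-layer auxiliary-coordinate objectives; the noise scales assume unit-norm features and labels in {-1,+1} and are illustrative, not calibrated to a specific (ε, δ).

import numpy as np

def dp_taylor_logistic(X, y, sigma=1.0, ridge=1e-3, seed=0):
    """Privatize the coefficients of a 2nd-order Taylor expansion of the
    logistic loss (around margin 0), then optimize the perturbed objective in
    closed form. Sensitivity bounds assume rows of X have norm <= 1."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Taylor expansion: sum_i log(1 + e^{-y_i x_i.w}) ~ n*log2 + a1.w + w^T A2 w
    a1 = -0.5 * (y[:, None] * X).sum(axis=0)          # linear coefficients
    A2 = 0.125 * X.T @ X                              # quadratic coefficients
    # Add Gaussian noise to each coefficient block (noise scales are illustrative).
    a1 += rng.normal(0.0, 0.5 * sigma, size=d)
    A2 += rng.normal(0.0, 0.125 * sigma, size=(d, d))
    A2 = 0.5 * (A2 + A2.T)                            # keep the quadratic term symmetric
    # Closed-form minimizer of the perturbed quadratic objective.
    return -0.5 * np.linalg.solve(A2 + ridge * np.eye(d), a1)
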
Sat 12:45 p.m. - 1:00 p.m. [iCal]

As Machine Learning (ML) gets applied to security-critical or sensitive domains, there is a growing need for integrity and privacy for outsourced ML computations. A pragmatic solution comes from Trusted Execution Environments (TEEs), which use hardware and software protections to isolate sensitive computations from the untrusted software stack. However, these isolation guarantees come at a price in performance, compared to untrusted alternatives. This paper initiates the study of high performance execution of Deep Neural Networks (DNNs) in TEEs by efficiently partitioning DNN computations between trusted and untrusted devices. Building upon an efficient outsourcing scheme for matrix multiplication, we propose Slalom, a framework that securely delegates execution of all linear layers in a DNN from a TEE (e.g., Intel SGX or Sanctum) to a faster, yet untrusted, co-located processor. We evaluate Slalom by executing DNNs in an Intel SGX enclave, which selectively delegates work to an untrusted GPU. For two canonical DNNs, VGG16 and MobileNet, we obtain 20× and 6× increases in throughput for verifiable inference, and 11× and 4× for verifiable and private inference.

Florian Tramer
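
A minimal sketch of the two ingredients behind this kind of delegation: blinding with a precomputed pair (R, RW) so the untrusted device never sees the private input, and a Freivalds-style random check to verify the returned product. Real Slalom performs these steps over quantized integers inside an SGX enclave; plain floating-point NumPy is used here only for illustration.

import numpy as np

rng = np.random.default_rng(0)
n, d, k = 64, 128, 32
X = rng.standard_normal((n, d))      # private inputs held inside the TEE
W = rng.standard_normal((d, k))      # public (non-private) layer weights

# Offline, inside the TEE: precompute a random blinding pair (R, R @ W).
R = rng.standard_normal((n, d))
RW = R @ W

# Online: send only the blinded input to the untrusted accelerator.
blinded = X + R
Y_untrusted = blinded @ W            # computed by the untrusted GPU (simulated here)

# Freivalds' check: the returned product should equal blinded @ W,
# verified with one cheap matrix-vector pass using a random vector r.
r = rng.standard_normal(k)
assert np.allclose(Y_untrusted @ r, blinded @ (W @ r), atol=1e-6)

# Unblind inside the TEE to recover the private result X @ W.
Y = Y_untrusted - RW
assert np.allclose(Y, X @ W, atol=1e-6)
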
Sat 1:00 p.m. - 1:15 p.m. [iCal]

We study the problem of discrete distribution testing in the two-party setting. For example, in the standard closeness testing problem, Alice and Bob each have t samples from, respectively, distributions a and b over [n], and they need to test whether a=b or a,b are ε-far (in the ℓ1 distance) for some fixed ε>0. This is in contrast to the well-studied one-party case, where the tester has unrestricted access to samples of both distributions, for which optimal bounds are known for a number of variations. Despite being a natural constraint in applications, the two-party setting has evaded attention so far.

We address two fundamental aspects of the two-party setting: 1) what is the communication complexity, and 2) can it be accomplished securely, without Alice and Bob learning extra information about each other's input. Besides closeness testing, we also study the independence testing problem, where Alice and Bob have t samples from distributions a and b respectively, which may be correlated; the question is whether a,b are independent or ε-far from being independent.

Our contribution is three-fold:

∙ Communication: we show how to gain communication efficiency as we have more samples, beyond the information-theoretic bound on t. Furthermore, the gain is polynomially better than what one may obtain by adapting one-party algorithms.

For closeness testing, our protocol has communication s=Θε(n^2/t^2) as long as t is at least the information-theoretic minimum number of samples. For independence testing over domain [n]×[m], where n≥m, we obtain s=Oε(n^2m/t^2+nm/t+m^1/2).

∙ Security: we define the concept of secure distribution testing and argue that it must leak at least some minimal information when the promise is not satisfied. We then provide secure versions of the above protocols with an overhead that is only polynomial in the security parameter.

∙ Lower bounds: we prove tightness of our trade-off for closeness testing, and show that independence testing requires Ω(m^1/2) communication (which is tight) even for an unbounded number of samples. These lower bounds are of independent interest as, to the best of our knowledge, they are the first two-party communication lower bounds for testing problems where the inputs represent a set of i.i.d. samples.

Negev Shekel Nosatzki
Sat 1:15 p.m. - 1:30 p.m. [iCal]

We present a framework for experimenting with secure multi-party computation directly in TensorFlow. By doing so we benefit from several properties valuable to both researchers and practitioners, including tight integration with ordinary machine learning processes, existing optimizations for distributed computation in TensorFlow, high-level abstractions for expressing complex algorithms and protocols, and an expanded set of familiar tooling. We give an open source implementation of a state-of-the-art protocol and report on concrete benchmarks using typical models from private machine learning.

Morten Dahl
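
As a framework-agnostic illustration of the secret-sharing substrate such systems build on (this is not the framework's actual API), here is additive secret sharing of integer tensors over a 32-bit ring; addition of shared values is purely local, while multiplication would require an interactive protocol.

import numpy as np

MOD = 2 ** 32   # shares live in a ring of 32-bit integers

def share(x, rng):
    """Split an integer tensor into two additive shares; each share alone is
    uniformly random and reveals nothing about x."""
    s0 = rng.integers(0, MOD, size=x.shape, dtype=np.uint64)
    s1 = (x.astype(np.uint64) - s0) % MOD
    return s0, s1

def reconstruct(s0, s1):
    return (s0 + s1) % MOD

def add_shared(a, b):
    """Addition of secret-shared values is done share-wise, with no interaction."""
    return (a[0] + b[0]) % MOD, (a[1] + b[1]) % MOD

rng = np.random.default_rng(0)
x = np.array([[1, 2], [3, 4]], dtype=np.uint64)
y = np.array([[10, 20], [30, 40]], dtype=np.uint64)
print(reconstruct(*add_shared(share(x, rng), share(y, rng))))   # equals x + y
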
Sat 1:30 p.m. - 2:00 p.m. [iCal]
  1. [Cynthia Dwork and Vitaly Feldman] Privacy-preserving Prediction (#05)
  2. [Garrett Bernstein and Daniel Sheldon] Differentially Private Bayesian Inference for Exponential Families (#03)
  3. [Di Wang, Adam Smith and Jinhui Xu] High Dimensional Sparse Linear Regression under Local Differential Privacy: Power and Limitations (#11)
  4. [Ashwin Machanavajjhala and Kamalika Chaudhuri] Capacity Bounded Differential Privacy (#25)
  5. [Aurélien Bellet, Rachid Guerraoui and Hadrien Hendrikx] Who started this gossip? Differentially private rumor spreading (#26)
  6. [Antti Koskela and Antti Honkela] Learning rate adaptation for differentially private stochastic gradient descent (#38)
  7. [Kareem Amin, Travis Dick, Alex Kulesza, Andres Medina and Sergei Vassilvitskii] Private Covariance Estimation via Iterative Eigenvector Sampling (#45)
  8. [Kareem Amin, Alex Kulesza, Andres Munoz Medina and Sergei Vassilvitskii] Bias Variance Trade-off in Differential Privacy (#53)
  9. [Nicolas Loizou, Peter Richtarik, Filip Hanzely, Jakub Konecny and Dmitry Grishchenko] A Privacy Preserving Randomized Gossip Algorithm via Controlled Noise Insertion (#57)
  10. [Brendan McMahan and Galen Andrew] A General Approach to Adding Differential Privacy to Iterative Training Procedures (#62)
  11. [Da Yu, Huishuai Zhang and Wei Chen] Improving the Gradient Perturbation Approach for Differentially Private Optimization (#70)
  12. [Alexandra Schofield, Aaron Schein, Zhiwei Steven Wu and Hanna Wallach] A Variational Inference Approach for Locally Private Inference of Poisson Factorization Models (#63)
  13. [Judy Hoffman, Mehryar Mohri and Ningshan Zhang] Algorithms and Theory for Multiple-Source Adaptation (#52)
  14. [Martin Bertran, Natalia Martinez, Afroditi Papadaki, Qiang Qiu, Miguel Rodrigues and Guillermo Sapiro] Learning Representations for Utility and Privacy: An Information-Theoretic Based Approach (#15)
  15. [Fabrice Benhamouda and Marc Joye] How to Profile Privacy-Conscious Users in Recommender Systems (#32)
  16. [Koen Lennart van der Veen, Ruben Seggers, Peter Bloem and Giorgio Patrini] Three Tools for Practical Differential Privacy (#29)
  17. [Vasyl Pihur, Aleksandra Korolova, Frederick Liu, Subhash Sankuratripati, Moti Yung, Dachuan Huang and Ruogu Zeng] Differentially Private "Draw and Discard" Machine Learning (#60)
  18. [Hsin-Pai Cheng, Patrick Yu, Haojing Hu, Feng Yan, Shiyu Li, Hai Li and Yiran Chen] LEASGD: an Efficient and Privacy-Preserving Decentralized Algorithm for Distributed Learning (#01)
  19. [Joshua Allen, Bolin Ding, Janardhan Kulkarni, Harsha Nori, Olga Ohrimenko and Sergey Yekhanin] An Algorithmic Framework For Differentially Private Data Analysis on Trusted Processors (#12)
  20. [Antoine Boutet, Théo Jourdan and Carole Frindel] Toward privacy in IoT mobile devices for activity recognition (#13)
  21. [Roshan Dathathri, Olli Saarikivi, Hao Chen, Kim Laine, Kristin Lauter, Saeed Maleki, Madanlal Musuvathi and Todd Mytkowicz] CHET: Compiler and Runtime for Homomorphic Evaluation of Tensor Programs (#14)
  22. [Theo Ryffel, Andrew Trask, Morten Dahl, Bobby Wagner, Jason Mancuso, Daniel Rueckert and Jonathan Passerat-Palmbach] A generic framework for privacy preserving deep learning (#43)
  23. [Valerie Chen, Valerio Pastro and Mariana Raykova] Secure Computation for Machine Learning With SPDZ (#44)
  24. [Phillipp Schoppmann, Adria Gascon, Mariana Raykova and Benny Pinkas] Make Some ROOM for the Zeros: Data Sparsity in Secure Distributed Machine Learning (#51)
  25. [Siddharth Garg, Zahra Ghodsi, Carmit Hazay, Yuval Ishai, Antonio Marcedone and Muthuramakrishnan Venkitasubramaniam] Outsourcing Private Machine Learning via Lightweight Secure Arithmetic Computation (#66)
  26. [Hao Chen, Ilaria Chillotti, Oxana Poburinnaya, Ilya Razenshteyn and M. Sadegh Riazi] Scaling Up Secure Nearest Neighbor Search (#33)
  27. [Yunhui Long, Vincent Bindschaedler and Carl Gunter] Towards Measuring Membership Privacy (#09) [video]
Sat 2:15 p.m. - 3:15 p.m. [iCal]
Poster Session
Phillipp Schoppmann, Patrick Yu, Valerie Chen, Travis Dick, Marc Joye, Ningshan Zhang, Frederik Harder, Olli Saarikivi, Théo Ryffel, Yunhui Long, Théo Jourdan, Di Wang, Antonio Marcedone, Negev Shekel Nosatzki, Yatharth A Dubey, Antti Koskela, Peter Bloem, Aleksandra Korolova, Martin Bertran, Hao Chen, Galen Andrew, Natalia L Martinez, Jana Kulkarni, Jonathan Passerat-Palmbach, Guillermo Sapiro, Amrita Roy Chowdhury
Sat 3:15 p.m. - 3:30 p.m. [iCal]
Wrap up

Author Information

Adria Gascon (Alan Turing Institute and Warwick University)
Aurélien Bellet (INRIA)
Niki Kilbertus (MPI Tuebingen & Cambridge)
Olga Ohrimenko (Microsoft Research)
Mariana Raykova (Yale University)
Adrian Weller (University of Cambridge)

Adrian Weller is Programme Director for AI at The Alan Turing Institute, the UK national institute for data science and AI, where he is also a Turing Fellow leading work on safe and ethical AI. He is a Senior Research Fellow in Machine Learning at the University of Cambridge, and at the Leverhulme Centre for the Future of Intelligence where he leads the project on Trust and Transparency. His interests span AI, its commercial applications and helping to ensure beneficial outcomes for society. He serves on several boards including the Centre for Data Ethics and Innovation. Previously, Adrian held senior roles in finance.
