

Poster in Workshop: Federated Learning: Recent Advances and New Challenges

Client-Private Secure Aggregation for Privacy-Preserving Federated Learning

Parker Newton · Olivia Choudhury · Bill Horne · Vidya Ravipati · Divya Bhargavi · Ujjwal Ratan


Abstract:

Privacy-preserving federated learning (PPFL) is a paradigm of distributed privacy-preserving machine learning training in which a set of clients jointly compute a shared global model under the orchestration of an aggregation server. The system has the property that no party learns any information about any client's training data beyond what can be inferred from the global model. The core cryptographic component of a PPFL scheme is the secure aggregation protocol, a secure multi-party computation protocol in which the server securely aggregates the clients' locally trained models and sends the aggregated model to the clients. However, in many applications the global model represents a trade secret of the consortium of clients, which they may not wish to reveal in the clear to the server. In this work, we propose a novel model of secure aggregation, called client-private secure aggregation, in which the server computes an encrypted global model that only the clients can decrypt. We provide an explicit construction of a client-private secure aggregation protocol, as well as a theoretical and empirical evaluation of our construction to demonstrate its practicality. Our experiments demonstrate that the client and server running times of our protocol are less than 19 s and 2 s, respectively, when scaled to support 250 clients.
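To make the client-private aggregation model concrete, the following is a minimal sketch of one way such a protocol *could* work; it is not the paper's construction. The assumption here is that all clients share a secret key unknown to the server, each client one-time-pads its (integer-quantized) update with a mask derived from that key, and the server sums the masked updates. The server's output remains masked (an "encrypted" aggregate), and only parties holding the shared key can subtract the total mask to recover the plain aggregate. All parameter names (`Q`, `DIM`, `shared_key`) are illustrative choices, not from the source.

```python
import hashlib
import numpy as np

Q = 2**32   # modulus for masked arithmetic (hypothetical parameter)
DIM = 4     # toy model dimension

def mask(shared_key: bytes, round_id: int, client_id: int, dim: int) -> np.ndarray:
    """Derive a per-client, per-round one-time mask from the client-shared key.
    The server never learns shared_key, so each masked update looks uniformly
    random to it, and so does the masked aggregate."""
    words = []
    for j in range(dim):
        h = hashlib.sha256(
            shared_key
            + round_id.to_bytes(4, "big")
            + client_id.to_bytes(4, "big")
            + j.to_bytes(4, "big")
        ).digest()
        words.append(int.from_bytes(h[:4], "big"))
    return np.array(words, dtype=np.uint64)

def client_encrypt(update: np.ndarray, shared_key: bytes,
                   round_id: int, client_id: int) -> np.ndarray:
    """Client side: add the one-time mask to the quantized update (mod Q)."""
    return (update.astype(np.uint64)
            + mask(shared_key, round_id, client_id, len(update))) % Q

def server_aggregate(ciphertexts: list) -> np.ndarray:
    """Server side: sum the masked updates. The result is still masked,
    so the server learns neither individual updates nor the aggregate."""
    agg = np.zeros(DIM, dtype=np.uint64)
    for ct in ciphertexts:
        agg = (agg + ct) % Q
    return agg

def client_decrypt(agg_ct: np.ndarray, shared_key: bytes,
                   round_id: int, n_clients: int) -> np.ndarray:
    """Client side: subtract the sum of all masks to reveal the aggregate."""
    total_mask = np.zeros(DIM, dtype=np.uint64)
    for i in range(n_clients):
        total_mask = (total_mask + mask(shared_key, round_id, i, DIM)) % Q
    return (agg_ct - total_mask) % Q
```

In practice, real-valued model updates would be quantized to fixed-point integers mod `Q` before masking, and a real construction would also need key agreement among clients and dropout handling; those pieces are omitted here for brevity.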
