Federated learning (FL) typically relies on synchronous training, which is slow due to stragglers. Asynchronous training handles stragglers efficiently, but it does not ensure privacy because it is incompatible with secure aggregation protocols. A buffered asynchronous training protocol, FedBuff, was recently proposed to bridge the gap between synchronous and asynchronous training, mitigating stragglers while also ensuring privacy. FedBuff allows users to send their updates asynchronously and preserves privacy by storing the updates in a private buffer inside a trusted execution environment (TEE). TEEs, however, have limited memory, which restricts the buffer size. Motivated by this limitation, we develop a buffered asynchronous secure aggregation (BASecAgg) protocol that does not rely on TEEs. Conventional secure aggregation protocols cannot be applied in the buffered asynchronous setting, since the buffer may contain local models from different rounds, and hence the masks that users apply to protect their models may not cancel out. BASecAgg addresses this challenge by carefully designing the masks so that they cancel out even when they correspond to different rounds. Our convergence analysis and experiments show that BASecAgg has almost the same convergence guarantees as FedBuff without relying on TEEs.
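For intuition, the sketch below illustrates the standard pairwise-masking idea underlying secure aggregation: each user perturbs its model update with masks shared with every other user, and the masks cancel in the server's sum, so only the aggregate is revealed. This is a minimal single-round illustration under our own assumptions (the function names, plain integer arithmetic, and a shared random generator standing in for key agreement are all hypothetical); it does not capture BASecAgg's actual contribution, which is designing the masks to cancel even across different training rounds.

```python
# Minimal sketch of single-round pairwise-masking secure aggregation.
# Illustrative only: names and plain-integer arithmetic are assumptions,
# and this does NOT implement BASecAgg's cross-round mask design.
import numpy as np

def shared_masks(num_users, dim, rng):
    """One random mask per user pair; in practice derived from shared keys."""
    return {(i, j): rng.integers(0, 2**16, size=dim)
            for i in range(num_users) for j in range(i + 1, num_users)}

def mask_update(i, update, masks, num_users):
    """User i adds +mask for each pair (i, j) with i < j, -mask otherwise."""
    masked = update.astype(np.int64)
    for j in range(num_users):
        if j == i:
            continue
        pair = (min(i, j), max(i, j))
        masked += masks[pair] if i < j else -masks[pair]
    return masked

rng = np.random.default_rng(0)
num_users, dim = 4, 3
updates = [rng.integers(0, 10, size=dim) for _ in range(num_users)]
masks = shared_masks(num_users, dim, rng)

# Each masked update hides the individual model, yet the masks cancel in the sum.
aggregate = sum(mask_update(i, updates[i], masks, num_users)
                for i in range(num_users))
assert np.array_equal(aggregate, sum(updates))
print(aggregate)
```

In the buffered asynchronous setting, the pairs of masks above would belong to users from different rounds and would no longer cancel; BASecAgg's mask construction restores this cancellation property.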
Author Information
Jinhyun So (University of Southern California)
Ramy Ali (University of Southern California)
Basak Guler (University of California, Riverside)
Salman Avestimehr (University of Southern California)
More from the Same Authors
- 2020: On Polynomial Approximations for Privacy-Preserving and Verifiable ReLU Networks (Salman Avestimehr)
- 2021: Basil: A Fast and Byzantine-Resilient Approach for Decentralized Training (Ahmed Elkordy · Saurav Prakash · Salman Avestimehr)
- 2021: FairFed: Enabling Group Fairness in Federated Learning (Yahya Ezzeldin · Shen Yan · Chaoyang He · Emilio Ferrara · Salman Avestimehr)
- 2020 Poster: Group Knowledge Transfer: Federated Learning of Large CNNs at the Edge (Chaoyang He · Murali Annavaram · Salman Avestimehr)
- 2020 Poster: A Scalable Approach for Privacy-Preserving Collaborative Machine Learning (Jinhyun So · Basak Guler · Salman Avestimehr)
- 2020 Poster: Minimax Lower Bounds for Transfer Learning with Linear and One-hidden Layer Neural Networks (Mohammadreza Mousavi Kalan · Zalan Fabian · Salman Avestimehr · Mahdi Soltanolkotabi)
- 2018 Poster: Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training (Youjie Li · Mingchao Yu · Songze Li · Salman Avestimehr · Nam Sung Kim · Alex Schwing)
- 2018 Poster: GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training (Mingchao Yu · Zhifeng Lin · Krishna Narra · Songze Li · Youjie Li · Nam Sung Kim · Alex Schwing · Murali Annavaram · Salman Avestimehr)
- 2017 Poster: Polynomial Codes: an Optimal Design for High-Dimensional Coded Matrix Multiplication (Qian Yu · Mohammad Maddah-Ali · Salman Avestimehr)