

Poster

Personalized Federated Learning towards Communication Efficiency, Robustness and Fairness

Shiyun Lin · Yuze Han · Xiang Li · Zhihua Zhang

Keywords: [ communication efficiency ] [ robustness ] [ fairness ] [ personalized federated learning ] [ infimal convolution ] [ low-dimensional projection ]


Abstract:

Personalized federated learning (FL) faces many challenges, such as expensive communication costs, training-time adversarial attacks, and unfair performance across devices. Recent work achieves personalization by trading off between a reference model and local models. Following this line, we propose a personalized FL method that targets all three goals. At communication time, our method projects local models into a shared, fixed low-dimensional random subspace and uses infimal convolution to control the deviation between the reference model and the projected local models. We theoretically show that our method converges for smooth objectives with squared regularizers, and that the convergence depends only mildly on the projection dimension. We also illustrate the robustness and fairness benefits on a class of linear problems. Finally, extensive experiments demonstrate the empirical superiority of our method over several state-of-the-art methods on all three aspects.
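
Below is a minimal sketch, not the authors' implementation, of the two mechanisms named in the abstract: (1) each client projects its local model into a shared-and-fixed low-dimensional random subspace before communicating, and (2) a squared, infimal-convolution-style penalty controls the deviation between the reference model and the projected local models. All names, dimensions, and the penalty weight (d, k, lam, seed) are illustrative assumptions.

```python
# Sketch of communication via a shared fixed random subspace, with a squared
# deviation penalty between the reference model and projected local models.
# Not the paper's code; names and shapes are assumed for illustration.
import numpy as np

d, k, lam, seed = 10_000, 100, 0.1, 0   # model dim, projection dim (k << d), penalty weight

# Shared-and-fixed projection: every client regenerates the same matrix from
# the common seed, so only a k-dimensional vector must cross the network.
rng = np.random.default_rng(seed)
P = rng.standard_normal((k, d)) / np.sqrt(k)   # rows span the random subspace

def communicate(v_local: np.ndarray) -> np.ndarray:
    """Client -> server: send only the k-dimensional projection of the model."""
    return P @ v_local                          # shape (k,)

def deviation_penalty(w_ref: np.ndarray, v_local: np.ndarray) -> float:
    """Squared-regularizer term (lam/2) * ||P v - w||^2 coupling the
    low-dimensional reference model to the projected local model."""
    return 0.5 * lam * float(np.sum((P @ v_local - w_ref) ** 2))

v = rng.standard_normal(d)                  # stand-in for a trained local model
w = communicate(v)                          # server aggregates such k-vectors
print(w.shape, deviation_penalty(w, v))     # (100,) and 0.0 for the client's own projection
```

Note the communication saving: each round transmits k floats per client instead of d, while the shared seed keeps the subspace identical across devices.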
