Architecture Personalization in Resource-constrained Federated Learning
Mi Luo · Fei Chen · Zhenguo Li · Jiashi Feng

Federated learning aims to collaboratively train a global model across a set of clients without data sharing among them. In most earlier studies, a global model architecture, either predefined by experts or searched automatically, is applied to all the clients. However, this convention is impractical for two reasons: 1) The clients may have heterogeneous resource constraints and only be able to handle models with particular configurations, imposing high requirements on the model's versatility; 2) Data in real-world federated systems are highly non-IID, which means a single architecture optimized for all clients may not achieve optimal performance on the personalized data of individual clients. In this work, we address the above two issues by proposing a novel framework that automatically discovers personalized model architectures tailored to clients' specific resource constraints and data, called Architecture Personalization Federated Learning (APFL). APFL first trains a sizable global architecture and slims it adaptively to meet computational budgets on edge devices. Then, APFL offers a communication-efficient federated partial aggregation (FedPA) algorithm to allow mutual learning among clients with diverse local architectures, which largely boosts the overall performance. Extensive empirical evaluations on three federated datasets clearly demonstrate that APFL provides affordable and personalized architectures for individual clients, costing fewer communication bytes and achieving higher accuracy compared with manually defined architectures under the same resource budgets.
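The abstract's partial-aggregation idea can be illustrated with a minimal sketch: each named parameter is averaged only over the clients whose (slimmed) local architecture actually contains it. This is a hypothetical toy, not the paper's exact FedPA algorithm; the function and parameter names (`partial_aggregate`, `client_states`) are illustrative assumptions.

```python
import numpy as np

def partial_aggregate(client_states):
    """Toy partial aggregation: average each named parameter only
    over the clients that hold it, so clients with different
    slimmed architectures can still share overlapping weights.
    (Illustrative sketch, not the paper's FedPA.)"""
    aggregated = {}
    all_names = set().union(*(s.keys() for s in client_states))
    for name in sorted(all_names):
        # Collect the parameter from every client whose model has it.
        tensors = [s[name] for s in client_states if name in s]
        aggregated[name] = np.mean(tensors, axis=0)
    return aggregated

# Two clients with heterogeneous local architectures:
# client_b's slimmed model has no "fc" layer.
client_a = {"conv1": np.ones((2, 2)), "fc": np.full((3,), 2.0)}
client_b = {"conv1": np.zeros((2, 2))}
global_update = partial_aggregate([client_a, client_b])
```

Here `conv1` is averaged over both clients, while `fc` is carried forward from the only client that has it; a real system would also weight clients by local data size and compress the exchanged tensors.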

Author Information

Mi Luo (National University of Singapore)
Fei Chen (Huawei Noah's Ark Lab)
Zhenguo Li (Noah's Ark Lab, Huawei Tech Investment Co Ltd)
Jiashi Feng (UC Berkeley)
