

Poster

FedFed: Feature Distillation against Data Heterogeneity in Federated Learning

Zhiqin Yang · Yonggang Zhang · Yu Zheng · Xinmei Tian · Hao Peng · Tongliang Liu · Bo Han

Great Hall & Hall B1+B2 (level 1) #1616
[ Project Page ] [ Paper ] [ Poster ] [ OpenReview ]
Thu 14 Dec 8:45 a.m. PST — 10:45 a.m. PST

Abstract:

Federated learning (FL) typically faces data heterogeneity, i.e., distribution shifts among clients. Sharing clients' information has shown great potential for mitigating data heterogeneity, yet it incurs a dilemma between preserving privacy and promoting model performance. To alleviate the dilemma, we raise a fundamental question: is it possible to share partial features in the data to tackle data heterogeneity? In this work, we give an affirmative answer to this question by proposing a novel approach called Federated Feature distillation (FedFed). Specifically, FedFed partitions data into performance-sensitive features (i.e., those contributing greatly to model performance) and performance-robust features (i.e., those contributing little to model performance). The performance-sensitive features are globally shared to mitigate data heterogeneity, while the performance-robust features are kept locally. FedFed enables clients to train models over both local and shared data. Comprehensive experiments demonstrate the efficacy of FedFed in promoting model performance.
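The data flow the abstract describes (each client splits its data into performance-sensitive and performance-robust features, shares a protected version of the former globally, keeps the latter local, and then trains on local plus shared data) can be sketched as a toy simulation. This is not the paper's implementation: the fixed index-based feature split, the Gaussian protection noise, and all names (`split_features`, `NOISE_STD`, etc.) are illustrative assumptions; FedFed itself learns the partition via feature distillation.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLIENTS, DIM, SENSITIVE_DIM = 4, 20, 5
NOISE_STD = 0.5  # assumed protection noise applied to shared features

def make_client_data(n, shift):
    """Toy heterogeneous data: each client's feature distribution is shifted differently."""
    x = rng.normal(loc=shift, scale=1.0, size=(n, DIM))
    y = (x[:, :SENSITIVE_DIM].sum(axis=1) > shift * SENSITIVE_DIM).astype(int)
    return x, y

def split_features(x):
    """Split into performance-sensitive and performance-robust parts.
    Here the split is a fixed slice; FedFed instead learns which features matter."""
    return x[:, :SENSITIVE_DIM], x[:, SENSITIVE_DIM:]

# Each client keeps its performance-robust features local and contributes
# a protected copy of its performance-sensitive features to a global pool.
shared_x, shared_y, local_data = [], [], []
for c in range(NUM_CLIENTS):
    x, y = make_client_data(n=64, shift=float(c))
    x_sens, x_rob = split_features(x)
    shared_x.append(x_sens + rng.normal(scale=NOISE_STD, size=x_sens.shape))
    shared_y.append(y)
    local_data.append((x, y))

# The server aggregates the shared pool and broadcasts it to every client.
pool_x, pool_y = np.concatenate(shared_x), np.concatenate(shared_y)

for c, (x, y) in enumerate(local_data):
    # A client would now train on its full local data plus the shared
    # sensitive-feature pool (mapped into a common input space).
    print(f"client {c}: {len(x)} local samples + {len(pool_x)} shared samples")
```

The point of the sketch is the asymmetry: only the (protected) performance-sensitive slice ever leaves a client, while the performance-robust remainder stays private, so every client's effective training set grows without exchanging raw data.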
