

Poster in Workshop on Distribution Shifts: New Frontiers with Foundation Models

HePCo: Data-Free Heterogeneous Prompt Consolidation for Continual Federated Learning

Shaunak Halbe · James S Smith · Junjiao Tian · Zsolt Kira

Keywords: [ robustness ] [ prompt tuning ] [ foundation models ] [ continual learning ] [ federated learning ]


Abstract:

In this paper, we focus on the important yet understudied problem of Continual Federated Learning (CFL), where a server communicates with a set of clients to incrementally learn new concepts over time without sharing or storing any data. The complexity of this problem is compounded by challenges from both the continual and federated learning perspectives. Specifically, models trained in a CFL setup suffer from catastrophic forgetting, which is exacerbated by data heterogeneity across clients. Existing attempts at this problem tend to impose large overheads on clients and communication channels or require access to stored data, which renders them unsuitable for real-world use due to privacy concerns. We study this problem in the context of foundation models and showcase their effectiveness in mitigating forgetting while minimizing overhead costs and without requiring access to any stored data. We achieve this by leveraging a prompting-based approach and proposing a novel, lightweight generation and distillation scheme to aggregate client models at the server. Our approach outperforms both existing methods and our own baselines by more than 7% on challenging image-classification benchmarks while significantly reducing communication and client-level computation costs.
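The abstract does not spell out implementation details, but the core idea it names, consolidating client models at the server through data-free generation and distillation, can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not the authors' implementation: the Gaussian feature generator, the additive prompt conditioning, and the `PromptedHead` module are assumptions introduced only to make the distillation loop concrete.

```python
# Minimal, hypothetical sketch of data-free prompt consolidation at the server.
# Assumptions (not from the paper): clients share a frozen backbone and differ
# in a small learnable prompt plus a classifier head; the server holds per-class
# feature statistics (mean/std) from which it samples synthetic latents instead
# of storing or replaying any real data.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptedHead(nn.Module):
    """Classifier head conditioned on a small learnable prompt (stand-in)."""
    def __init__(self, dim=64, num_classes=10, prompt_len=4):
        super().__init__()
        self.prompt = nn.Parameter(torch.zeros(prompt_len, dim))
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, z):  # z: (B, dim) latent features from a frozen backbone
        # Condition features on the pooled prompt (a simple additive scheme).
        return self.classifier(z + self.prompt.mean(dim=0))

def consolidate(client_heads, class_stats, num_classes=10, dim=64,
                steps=200, batch=64, lr=1e-2, temp=2.0):
    """Distill an ensemble of client heads into one server head, data-free."""
    server = PromptedHead(dim, num_classes)
    opt = torch.optim.Adam(server.parameters(), lr=lr)
    mu, sigma = class_stats  # each of shape (num_classes, dim)
    for _ in range(steps):
        # Data-free generation: sample latents from stored class statistics.
        y = torch.randint(num_classes, (batch,))
        z = mu[y] + sigma[y] * torch.randn(batch, dim)
        # Teacher: average the client models' softened predictive distributions.
        with torch.no_grad():
            teacher = torch.stack([F.softmax(h(z) / temp, dim=-1)
                                   for h in client_heads]).mean(0)
        student = F.log_softmax(server(z) / temp, dim=-1)
        loss = F.kl_div(student, teacher, reduction="batchmean")
        opt.zero_grad(); loss.backward(); opt.step()
    return server
```

In this sketch the server never touches client data: only lightweight prompts, heads, and class statistics cross the channel, which is consistent with the communication- and privacy-related claims in the abstract, though the paper's actual generation and distillation scheme may differ substantially.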
