

Poster

Cross-Device Collaborative Test-Time Adaptation

Guohao Chen · Shuaicheng Niu · Deyu Chen · Shuhai Zhang · Changsheng Li · Yuanqing Li · Mingkui Tan

West Ballroom A-D #6702
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Deep models often struggle to generalize when test data come from a novel domain different from the training data. Although numerous test-time adaptation (TTA) approaches have sought to enhance model robustness against potential domain shifts, they primarily focus on single-device adaptation, where each device adapts independently from scratch. This per-device approach is inefficient for multi-device domain adaptation, as it neglects the valuable knowledge learned from diverse domains by other devices. In this paper, we propose test-time Collaborative Lifelong Adaptation (CoLA), a general paradigm that can be combined with existing advanced TTA methods to boost adaptation performance and efficiency in a multi-device collaborative manner. Specifically, we maintain and store a set of device-shared domain knowledge vectors, which accumulate the knowledge learned from all devices during their lifelong adaptation process. Based on this, CoLA employs two collaboration strategies for devices with different computational resources and latency demands. 1) A knowledge reprogramming learning strategy jointly learns new domain-specific model parameters and a reweighting term to reprogram the existing shared domain knowledge vectors, termed adaptation on principal agents. 2) A similarity-based knowledge aggregation strategy solely aggregates the knowledge stored in the shared domain vectors according to domain similarities in an optimization-free manner, termed adaptation on follower agents. Experiments verify that CoLA is plug-and-play: it boosts the efficiency of TTA and demonstrates remarkable superiority in collaborative, lifelong, and single-domain TTA scenarios; e.g., on follower agents, we enhance accuracy by over 30% on ImageNet-C while maintaining nearly the same efficiency as standard inference.
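To make the optimization-free follower-agent strategy concrete, the sketch below shows one plausible way to aggregate shared domain knowledge vectors by domain similarity. It is only an illustration of the idea described in the abstract, not the authors' released code: the function name, the use of per-domain signature statistics, cosine similarity, and the softmax temperature are all assumptions.

```python
import torch
import torch.nn.functional as F


def aggregate_shared_knowledge(domain_vectors: torch.Tensor,
                               domain_signatures: torch.Tensor,
                               test_signature: torch.Tensor,
                               temperature: float = 1.0) -> torch.Tensor:
    """Illustrative sketch of similarity-based knowledge aggregation (follower agents).

    domain_vectors:    (K, D) tensor, one shared knowledge vector per stored domain
    domain_signatures: (K, S) tensor, a hypothetical summary statistic per domain
                       (e.g., a mean feature embedding recorded during adaptation)
    test_signature:    (S,) tensor, the same statistic computed on the current test batch
    Returns a single (D,) knowledge vector, obtained without any gradient steps.
    """
    # Similarity between the current test domain and each stored domain.
    sims = F.cosine_similarity(domain_signatures, test_signature.unsqueeze(0), dim=1)  # (K,)

    # Convert similarities into aggregation weights (optimization-free).
    weights = F.softmax(sims / temperature, dim=0)  # (K,)

    # Weighted combination of the shared domain knowledge vectors.
    return weights @ domain_vectors  # (D,)
```

In this reading, a follower agent only computes a batch statistic and a weighted sum, which is why its cost stays close to standard inference; principal agents would additionally learn new domain-specific parameters and a reweighting term over the same shared vectors.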
