

Poster

Mixture of Adversarial LoRAs: Boosting Robust Generalization in Meta-tuning

Xu Yang · Chen Liu · Ying Wei


Abstract:

This paper introduces AMT, an Adversarial Meta-Tuning methodology, to boost the robust generalization of pre-trained models in out-of-domain (OOD) few-shot learning. To address the challenge of transferring knowledge from source domains to unseen target domains, we construct a robust LoRAPool by meta-tuning LoRAs with dual perturbations on both the inputs and the singular values and vectors, at varying robustness levels. On top of that, we introduce a simple yet effective test-time merging mechanism that adaptively merges discriminative LoRAs for test-time task customization. Extensive evaluations demonstrate that AMT delivers substantial improvements over previous state-of-the-art methods across a range of OOD few-shot image classification tasks on three benchmarks, confirming the effectiveness of our approach in boosting the robust generalization of pre-trained models.
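The abstract outlines two mechanisms: perturbing the singular values and vectors of LoRA updates during meta-tuning, and adaptively merging a pool of LoRAs at test time. The sketch below is a minimal illustration of these ideas, not the authors' implementation; it assumes PyTorch, and all names (perturb_singular_values, merge_lora_pool, the random perturbation direction, and the uniform merging scores) are illustrative assumptions rather than details taken from the paper.

```python
# Illustrative sketch only: perturbation of a LoRA update's SVD factors and a
# weighted test-time merge of a LoRA pool. Function names and the choice of a
# random perturbation direction (standing in for an adversarial one) are assumptions.
import torch


def perturb_singular_values(lora_A: torch.Tensor,
                            lora_B: torch.Tensor,
                            eps: float) -> torch.Tensor:
    """Perturb the singular values and vectors of the low-rank update delta_W = B @ A."""
    delta_w = lora_B @ lora_A                                  # (out_dim, in_dim) update
    U, S, Vh = torch.linalg.svd(delta_w, full_matrices=False)
    S_pert = S * (1.0 + eps * torch.randn_like(S).sign())      # perturb singular values
    U_pert = U + eps * torch.randn_like(U)                     # perturb left singular vectors
    Vh_pert = Vh + eps * torch.randn_like(Vh)                  # perturb right singular vectors
    return U_pert @ torch.diag(S_pert) @ Vh_pert


def merge_lora_pool(pool: list[tuple[torch.Tensor, torch.Tensor]],
                    scores: torch.Tensor) -> torch.Tensor:
    """Test-time merge: softmax-weighted sum of low-rank updates from the pool.
    In practice the scores would come from task-specific evidence (e.g. the support set)."""
    weights = torch.softmax(scores, dim=0)
    return sum(w * (B @ A) for w, (A, B) in zip(weights, pool))


# Usage: three rank-4 LoRAs for a 768x768 weight, merged with uniform scores.
pool = [(torch.randn(4, 768), torch.randn(768, 4)) for _ in range(3)]
merged_delta = merge_lora_pool(pool, torch.zeros(3))
perturbed_delta = perturb_singular_values(*pool[0], eps=0.05)
```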
