

Poster

Text-Guided Attention is All You Need for Zero-Shot Robustness in Vision-Language Models

Lu Yu · Haiyang Zhang · Changsheng Xu


Abstract:

Owing to their impressive zero-shot capabilities, pre-trained vision-language models such as CLIP have attracted widespread attention and adoption across various domains. Nonetheless, CLIP has been observed to be susceptible to adversarial examples. Through experimental analysis, we observe that adversarial perturbations induce shifts in text-guided attention. Building on this observation, we propose a simple yet effective strategy: *Text-Guided Attention for Zero-Shot Robustness (TGA-ZSR)*. This framework incorporates two components: the Attention Refinement module and the Attention-based Model Constraint module. Our goal is to enhance both the generalization and robustness of the CLIP model. The Attention Refinement module aligns the text-guided attention obtained from the target model on adversarial examples with the text-guided attention obtained from the original model on clean examples; this alignment enhances the model's robustness. The Attention-based Model Constraint module acquires text-guided attention from both the target and original models using clean examples; its objective is to maintain model performance on clean samples while improving overall robustness. Experiments validate that our method yields a 9.45% improvement in zero-shot robust accuracy over current state-of-the-art techniques across 16 datasets.
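The two modules described above can be read as two attention-alignment regularizers. The sketch below illustrates one plausible way to combine them; the function names, the choice of L2/L1 distances, and the weights `alpha` and `beta` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def attention_refinement_loss(attn_adv_target, attn_clean_orig):
    """Align the target model's text-guided attention on adversarial
    examples with the original model's attention on clean examples.
    (Illustrative L2 distance; the paper's exact metric may differ.)"""
    return float(np.mean((attn_adv_target - attn_clean_orig) ** 2))

def attention_model_constraint_loss(attn_clean_target, attn_clean_orig):
    """Keep the target model's attention on clean examples close to the
    original model's, to preserve clean-sample performance.
    (Illustrative L1 distance; an assumption, not the paper's choice.)"""
    return float(np.mean(np.abs(attn_clean_target - attn_clean_orig)))

def tga_zsr_regularizer(attn_adv_target, attn_clean_target, attn_clean_orig,
                        alpha=1.0, beta=1.0):
    """Weighted sum of the two modules; alpha and beta are hypothetical
    trade-off weights, not values reported in the paper."""
    return (alpha * attention_refinement_loss(attn_adv_target, attn_clean_orig)
            + beta * attention_model_constraint_loss(attn_clean_target,
                                                     attn_clean_orig))

# Toy example: text-guided attention maps over a 7x7 grid of image patches.
rng = np.random.default_rng(0)
clean_attn = rng.random((7, 7))
shifted_attn = clean_attn + 0.1  # attention shifted by a perturbation
print(tga_zsr_regularizer(shifted_attn, clean_attn, clean_attn))
```

In practice this regularizer would be added to the standard adversarial training objective, so that the fine-tuned (target) CLIP is penalized whenever its attention drifts away from the frozen original model's attention.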
