

Poster in Workshop: Backdoors in Deep Learning: The Good, the Bad, and the Ugly

Analyzing And Editing Inner Mechanisms of Backdoored Language Models

Max Lamparth · Ann-Katrin Reuel

[ Project Page ]
Fri 15 Dec 1 p.m. PST — 1:45 p.m. PST

Abstract:

Data set poisoning is a potential security threat to large language models that can lead to backdoored models. The internal mechanisms of backdoored language models, and how they process trigger inputs, e.g., when switching to toxic language, have not yet been described. In this work, we study the internal representations of transformer-based backdoored language models and identify early-layer MLP modules, in combination with the initial embedding projection, as the most important components of the backdoor mechanism. We use this knowledge to remove, insert, and modify backdoor mechanisms with engineered replacements that reduce the MLP module outputs to the essentials of the backdoor mechanism. To this end, we introduce PCP ablation, where we replace transformer modules with low-rank matrices based on the principal components of their activations. We demonstrate our results on backdoored toy, backdoored large, and non-backdoored open-source models. We show that we can improve the backdoor robustness of large language models by locally constraining individual modules during fine-tuning on potentially poisoned data sets.

Trigger warning: Offensive language.
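To illustrate the PCP-ablation idea described in the abstract, here is a minimal PyTorch sketch: collect a module's output activations on a calibration set, take their top principal components, and fit a low-rank linear map that replaces the module. The function names, the `rank` parameter, and the least-squares construction are assumptions for illustration; the paper's exact procedure may differ.

```python
# Hypothetical sketch of PCP ablation; not the authors' implementation.
import torch
import torch.nn as nn


@torch.no_grad()
def pcp_ablation(module, hidden_states, rank=4):
    """Build a low-rank linear replacement for `module` from the principal
    components of its output activations.

    hidden_states: (num_tokens, d_model) calibration inputs to the module.
    Returns an nn.Linear whose weight has rank at most `rank`.
    """
    inputs = hidden_states                          # (N, d_model)
    outputs = module(inputs)                        # (N, d_model)

    # Principal components of the module's (centered) output activations.
    mean_out = outputs.mean(dim=0, keepdim=True)
    centered = outputs - mean_out
    _, _, v = torch.pca_lowrank(centered, q=rank)   # v: (d_model, rank)

    # Least-squares fit of a rank-limited map: input -> projection of the
    # output onto the top principal components.
    targets = centered @ v                          # (N, rank)
    coeffs = torch.linalg.lstsq(inputs, targets).solution  # (d_model, rank)
    low_rank_weight = coeffs @ v.T                  # (d_model, d_model)

    d_model = inputs.shape[1]
    replacement = nn.Linear(d_model, d_model, bias=True)
    replacement.weight.copy_(low_rank_weight.T)     # nn.Linear computes x @ W.T + b
    replacement.bias.copy_(mean_out.squeeze(0))
    return replacement
```

In a transformer, such a replacement would be swapped in for an early-layer MLP module (e.g., by reassigning the corresponding attribute of the block), keeping only the low-dimensional direction of its activations that, per the abstract, carries the backdoor mechanism.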
