Skip to yearly menu bar Skip to main content


Oral
in
Workshop: Backdoors in Deep Learning: The Good, the Bad, and the Ugly

BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models

Zhen Xiang · Fengqing Jiang · Zidi Xiong · Bhaskar Ramasubramanian · Radha Poovendran · Bo Li

[ ] [ Project Page ]
Fri 15 Dec 2 p.m. PST — 2:15 p.m. PST

Abstract:

Large language models (LLMs) are shown to benefit from chain-of-thought (COT) prompting, particularly when tackling tasks that require systematic reasoning processes. On the other hand, COT prompting also poses new vulnerabilities in the form of backdoor attacks, wherein the model will output unintended malicious content under specific backdoor-triggered conditions during inference. In this paper, we propose BadChain, the first backdoor attack against LLMs employing COT prompting, which does not require access to the training dataset or model parameters. These advantages allow BadChain to be launched against commercial LLMs operated via API-only access and impose low computational overhead since BadChain does not need any model fine-tuning. BadChain leverages the inherent reasoning capabilities of LLMs by inserting a backdoor reasoning step into the sequence of reasoning steps of the model output, thereby altering the final response when a backdoor trigger is embedded in the query prompt. In particular, a subset of demonstrations will be manipulated to incorporate the backdoor reasoning step in COT prompting. Consequently, given any query prompt containing the backdoor trigger, the LLM will be misled to output unintended content. Empirically, we show the effectiveness of BadChain against four LLMs (Llama2, GPT-3.5, PaLM2, and GPT-4) on six complex benchmark tasks encompassing arithmetic, commonsense, and symbolic reasoning, compared with the ineffectiveness of the baseline backdoor attacks designed for simpler tasks such as semantic classification. Moreover, we demonstrate the interpretability of BadChain by showing that the relationship between the trigger and the backdoor reasoning step can be well-explained based on the output of the backdoored model. Finally, we propose two defenses based on shuffling and demonstrate their overall ineffectiveness against BadChain. Therefore, BadChain remains a severe threat to LLMs, underscoring the urgency for the development of effective future defenses.

Chat is not available.