

Poster

Truth is Universal: Robust Detection of Lies in LLMs

Lennart Bürger · Fred Hamprecht · Boaz Nadler

East Exhibit Hall A-C #2911
[ Project Page ]
Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Large Language Models (LLMs) have revolutionised natural language processing, exhibiting impressive human-like capabilities. In particular, LLMs are capable of "lying": knowingly outputting false statements. Hence, it is of interest and importance to develop methods for LLM lie detection. Indeed, several authors trained classifiers to detect LLM lies based on their internal model activations. However, Levinstein and Herrmann [2023] showed that these classifiers may fail to generalise, for example to negated statements. In this work, we aim to develop a robust method to detect when an LLM is lying. To this end, we make the following key contributions: (i) We demonstrate the existence of a two-dimensional subspace along which the activation vectors of true and false statements can be linearly separated. This subspace contains a general truth direction, which holds for a wide variety of statements, and a second direction that separates true from false statements depending on their polarity. Notably, this finding is universal and holds for various LLMs, including Gemma-7B, LLaMA2-13B and LLaMA3-8B. Our analysis explains the generalisation failures observed in previous studies and sets the stage for more robust lie detection; (ii) We develop an accurate LLM lie detector by learning the general truth direction. The corresponding classifier can detect simple lies with 95% accuracy and more complex real-world lies with 82% accuracy, outperforming a previous approach by 12% (simple) and 11% (real-world), respectively.
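
As an illustration of the kind of activation-space analysis the abstract describes, the sketch below estimates a general truth direction and a polarity-sensitive direction from labelled statement activations, then uses the general direction as a linear classifier. This is a minimal sketch under stated assumptions, not the authors' procedure: the arrays `acts`, `is_true` and `is_negated`, and the difference-of-means estimator, are hypothetical choices for illustration.

```python
# Hypothetical sketch of a two-direction analysis of statement activations.
# Assumes activations and labels have been collected separately; the
# difference-of-means estimator is an illustrative choice, not the paper's method.
import numpy as np

def truth_directions(acts: np.ndarray, is_true: np.ndarray, is_negated: np.ndarray):
    """acts: (n_statements, d) hidden-state activations; labels are boolean arrays."""
    dirs = []
    for neg in (False, True):
        m = is_negated == neg
        # Difference of class means separates true from false within one polarity.
        dirs.append(acts[m & is_true].mean(0) - acts[m & ~is_true].mean(0))
    d_affirmative, d_negated = dirs
    t_general = (d_affirmative + d_negated) / 2    # shared truth component
    t_polarity = (d_affirmative - d_negated) / 2   # polarity-sensitive component
    return t_general, t_polarity

def classify(acts: np.ndarray, t_general: np.ndarray, threshold: float = 0.0):
    # Project onto the general truth direction; the threshold would be tuned
    # on held-out statements.
    return acts @ t_general > threshold
```

In this toy setup, separating the shared and polarity-dependent components is what would allow a detector trained on affirmative statements to still score negated ones sensibly, which is the generalisation failure the abstract highlights.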
