Semantic Probabilistic Control of Language Models
Abstract
Semantic control entails steering LM generations towards satisfying subtle non-lexical constraints, e.g., toxicity, sentiment, or politeness, attributes that can be captured by a sequence-level verifier. It can thus be viewed as sampling from the LM distribution conditioned on the target attribute, a computationally intractable problem due to the non-decomposable nature of the verifier. Existing approaches to LM control either handle only syntactic constraints, which cannot capture such attributes, or rely on sampling to explore the conditional LM distribution, an ineffective estimator for low-probability events. In this work, we leverage a verifier's gradient information to efficiently reason over all generations that satisfy the target attribute, enabling precise steering of LM generations by reweighting the next-token distribution. Starting from an initial sample, we construct a local LM distribution that favors semantically similar sentences. This approximation enables the tractable computation of an expected sentence embedding. We use this expected embedding, together with the verifier's evaluation at the initial sample, to estimate the probability of satisfying the constraint, which in turn informs the update to the next-token distribution. We evaluate our approach on controlling the toxicity, sentiment, and topic adherence of LMs, yielding generations that satisfy the constraint with high probability without degrading their quality.
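As a minimal sketch of the setup described above, in our own notation (the symbols $p$, $c$, $f$, and $\phi$ are ours, not fixed by the abstract): let $p(\mathbf{y})$ denote the LM distribution over sequences and $c(\mathbf{y}) \in \{0,1\}$ the target attribute as judged by the sequence-level verifier. Semantic control is then conditional sampling,
\[
p(\mathbf{y} \mid c = 1) \;\propto\; p(\mathbf{y})\, p(c = 1 \mid \mathbf{y}),
\qquad
p(y_t \mid \mathbf{y}_{<t},\, c = 1) \;\propto\; p(y_t \mid \mathbf{y}_{<t})\, p(c = 1 \mid \mathbf{y}_{<t}\, y_t),
\]
where the second factor marginalizes the verifier over all completions of the prefix and is therefore intractable in general. Under the assumed reading that the verifier is a differentiable function $f$ of a sentence embedding $\phi(\mathbf{y})$, a first-order expansion around an initial sample $\mathbf{y}_0$ gives
\[
p(c = 1 \mid \mathbf{y}_{<t}\, y_t) \;\approx\; f\big(\phi(\mathbf{y}_0)\big) + \nabla f\big(\phi(\mathbf{y}_0)\big)^{\top} \big(\mathbb{E}\big[\phi(\mathbf{y})\big] - \phi(\mathbf{y}_0)\big),
\]
with the expectation taken under the local LM distribution over full sequences extending the prefix; this is the sense in which the expected embedding and the verifier's value and gradient at the initial sample combine to reweight the next-token distribution.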