

Poster
in
Workshop: Foundation Models for Decision Making

Elicitation Inference Optimization for Multi-Principal-Agent Alignment

Andrew Konya · Yeping L Qiu · Michael Varga · Aviv Ovadya


Abstract: In multi-principal-agent alignment scenarios spanning governance, markets, diplomacy, and AGI, it is infeasible to elicit every principal's view on all perspectives relevant to agent decisions. Elicitation inference optimization (EIO) aims to minimize the number of elicitations $n$ needed to approximate $N$ principals' views across $K$ perspectives. In this work, we demonstrate an EIO approach where data efficiency ($NK/n$) increases with scale. We introduce STUMP: an elicitation inference model which integrates an LLM with a latent factor model to enable learning transfer across samples, contexts, and languages. We then characterize STUMP's performance on a set of elicitation primitives from which scalable elicitation (sampling) protocols can be constructed. Building on these results, we design and demonstrate two scalable elicitation protocols for STUMP in which data efficiency grows without bound, scaling as $O(n)$ in the number of elicitations $n$. This makes it possible to obtain complex, high-dimensional preference signals spanning principal populations at any scale.
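
The abstract does not include STUMP's implementation, so the following is only a minimal, hypothetical sketch of the elicitation-inference setting it describes: a rank-d latent factor model recovers an N x K principal-by-perspective preference matrix from n sampled elicitations, and data efficiency is reported as NK/n. All names and choices here (N, K, d, n, the alternating-least-squares solver, the simulated ratings) are illustrative assumptions; STUMP additionally conditions on LLM representations to transfer across samples, contexts, and languages, which is omitted below.

import numpy as np

# Hypothetical sketch, not the authors' STUMP code.
# Setting: N principals, K perspectives, but only n << N*K elicited ratings are observed.
# A latent factor model (alternating least squares on a rank-d factorization) infers
# the unobserved entries of the preference matrix.

rng = np.random.default_rng(0)
N, K, d = 200, 50, 5           # principals, perspectives, latent dimension (assumed values)
n = 2000                       # number of elicitations actually collected

# Ground-truth low-rank preference matrix, used only to simulate elicitations.
true_P = rng.normal(size=(N, d)) @ rng.normal(size=(d, K))

# Sample n distinct (principal, perspective) pairs to "elicit".
idx = rng.choice(N * K, size=n, replace=False)
rows, cols = np.unravel_index(idx, (N, K))
mask = np.zeros((N, K), dtype=bool)
mask[rows, cols] = True
observed = np.where(mask, true_P, 0.0)

# Alternating least squares with ridge regularization.
U = rng.normal(scale=0.1, size=(N, d))   # principal factors
V = rng.normal(scale=0.1, size=(K, d))   # perspective factors
lam = 0.1
for _ in range(30):
    for i in range(N):
        obs = mask[i]
        if obs.any():
            Vo = V[obs]
            U[i] = np.linalg.solve(Vo.T @ Vo + lam * np.eye(d), Vo.T @ observed[i, obs])
    for j in range(K):
        obs = mask[:, j]
        if obs.any():
            Uo = U[obs]
            V[j] = np.linalg.solve(Uo.T @ Uo + lam * np.eye(d), Uo.T @ observed[obs, j])

# Evaluate inference of the unelicited entries and report data efficiency NK/n.
pred = U @ V.T
rmse = np.sqrt(np.mean((pred[~mask] - true_P[~mask]) ** 2))
print(f"data efficiency NK/n = {N * K / n:.1f}, held-out RMSE = {rmse:.3f}")

In this toy setup the n elicitations cover only a small fraction of the N*K principal-perspective pairs, and the remaining views are inferred from shared structure; the paper's protocols aim to make that ratio (NK/n) grow with scale rather than stay fixed.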
