Workshop: Foundation Models for Decision Making

Selective Perception: Learning Concise State Descriptions for Language Model Actors

Kolby T Nottingham · Yasaman Razeghi · Kyungmin Kim · JB Lanier · Pierre Baldi · Roy Fox · Sameer Singh

presentation: Foundation Models for Decision Making
Fri 15 Dec 6:15 a.m. PST — 3:30 p.m. PST


It is increasingly common for large language models (LLMs) to be applied as actors in sequential decision making problems in embodied domains such as robotics and games, due to their general world knowledge and planning abilities. However, LLMs are not natively trained for embodied decision making problems, and expressing complex state spaces in text is non-trivial. Exhaustively describing high-dimensional states leads to prohibitive inference costs and impaired task performance due to distracting or irrelevant information. Previous LLM actors avoid the issue by relying on hand-engineered, task-specific protocols to determine which features of a state to communicate and which to leave out. In this work, we propose BLINDER (Brief Language INputs for DEcision-making Responses), a method for learning to select concise and helpful sets of state features for LLM actors. BLINDER learns a value function for task-conditioned state descriptions that approximates the likelihood that a state description will result in optimal actions. We evaluate BLINDER on the challenging video game NetHack and a real-world robotic manipulation task. Our method improves task success rate by 77% and 14% on NetHack and robotic manipulation respectively, reduces model input length by 83%, and generalizes well to LLM actors of varying size and quality.
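The selection procedure the abstract describes — scoring candidate state descriptions with a learned, task-conditioned value function and keeping only helpful features — can be sketched as a greedy search. The sketch below is illustrative only, not the authors' implementation: `toy_value` is a hypothetical stand-in for BLINDER's learned value network, and the feature strings are invented examples.

```python
# Illustrative sketch of value-guided state-feature selection.
# `value_fn(description, task)` stands in for BLINDER's learned value
# over task-conditioned state descriptions; here it is a toy heuristic.

def select_features(features, task, value_fn):
    """Greedily build a concise state description that maximizes value_fn."""
    selected = []
    best = value_fn(selected, task)
    improved = True
    while improved:
        improved = False
        for f in features:
            if f in selected:
                continue
            v = value_fn(selected + [f], task)
            if v > best:
                best, choice, improved = v, f, True
        if improved:
            selected.append(choice)
    return selected

# Toy value function (an assumption, not the learned model): reward
# word overlap with the task, penalize description length so the
# selected description stays concise.
def toy_value(description, task):
    task_words = set(task.lower().split())
    overlap = sum(len(task_words & set(f.lower().split())) for f in description)
    return overlap - 0.1 * len(description)

features = [
    "a locked door to the north",
    "your hit points are full",
    "a key lies on the floor",
    "the walls are gray",
]
print(select_features(features, "open locked door", toy_value))
# → ['a locked door to the north']
```

The length penalty is what keeps the description brief: a feature is included only if the value it adds outweighs the cost of a longer input, mirroring the trade-off between inference cost and distracting information described above.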
