Poster in Workshop: Foundation Models for Decision Making

Planning With Large Language Models Via Corrective Re-Prompting

Shreyas Sundara Raman · Vanya Cohen · Eric Rosen · Ifrah Idrees · David Paulius · Stefanie Tellex


Abstract:

Extracting knowledge from Large Language Models (LLMs) offers a path to designing intelligent, embodied agents that take advantage of the common sense knowledge present in large language datasets. Related works have queried LLMs with a wide range of contextual information, such as goals, sensor observations, and scene descriptions, to generate high-level action plans for a specific task. In this work, we propose a prompting-based strategy for extracting executable plans from an LLM that leverages a novel and readily accessible source of information: precondition errors. Our approach assumes that actions are only afforded execution in certain contexts (i.e., implicit preconditions must be met for an action to execute), and that the embodied agent can determine whether an action is executable in the current context (e.g., whether a precondition error is present). When an agent is unable to execute an action in a plan, our approach re-prompts the LLM with precondition error information to extract a useful and executable action that achieves the intended goal in the current context. We evaluate our approach in the VirtualHome simulation environment on 88 different tasks and 7 scenes. We evaluate different prompt templates and compare to methods that naively re-sample actions from the LLM. We find that our approach using precondition errors improves the executability and semantic correctness of plans, while also reducing the number of corrective re-prompts needed to query for actions.
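The corrective re-prompting loop described above can be sketched roughly as follows. This is a minimal illustration of the idea in the abstract, not the authors' implementation: query_llm and execute_action are hypothetical callables standing in for the LLM interface and the embodied agent's executor (assumed to return a success flag plus any precondition error text), and the prompt format shown is purely illustrative.

def corrective_replanning(goal, scene, query_llm, execute_action, max_reprompts=3):
    """Build a plan action by action, re-prompting the LLM on precondition errors."""
    plan = []
    prompt = f"Goal: {goal}\nScene: {scene}\nPlan:"
    while True:
        action = query_llm(prompt)                # propose the next high-level action
        if action is None or action == "DONE":
            break
        success, error = execute_action(action)   # agent attempts the action
        attempts = 0
        # On failure, feed the precondition error back to the LLM and ask for a
        # corrected action that is executable in the current context.
        while not success and attempts < max_reprompts:
            prompt += f"\n{action} failed because {error}. Corrected action:"
            action = query_llm(prompt)
            success, error = execute_action(action)
            attempts += 1
        if success:
            plan.append(action)
            prompt += f"\n{action}"
        else:
            break   # give up on this step after exhausting re-prompts
    return plan

In this sketch, naive re-sampling would correspond to re-querying the LLM without appending the error text; the comparison in the paper is between that baseline and prompts augmented with precondition error information.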
