

Poster in Workshop: Foundation Models for Decision Making

Large Language Models as Commonsense Knowledge for Large-Scale Task Planning

Zirui Zhao · Wee Sun Lee · David Hsu


Abstract:

Real-world environments often have large-scale domains that make classical planning intractable. Large language models (LLMs) have been used as few-shot planning policies, drawing on their commonsense knowledge to solve everyday problems. However, the potential of LLMs for planning complex tasks remains largely untapped. This paper shows that LLMs can serve as both a commonsense world model and a heuristic policy within search algorithms such as Monte Carlo Tree Search (MCTS). The LLM's world model provides MCTS with a commonsense prior belief over states, enabling efficient, reasoned decision-making. The LLM's heuristic policy guides the search toward relevant parts of the tree, substantially reducing the search complexity. We demonstrate the effectiveness of our method in daily task-planning experiments and highlight its advantages over using LLMs solely as policies.
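To illustrate the general idea of biasing MCTS with a policy prior, the sketch below runs a PUCT-style tree search on a toy number-line task. This is not the authors' implementation: `llm_policy` is a hypothetical stand-in for an LLM heuristic (here it just favors moves toward the goal), and the task, constants, and function names are invented for illustration.

```python
import math
import random

ACTIONS = ["+1", "-1"]
GOAL = 5  # toy task: walk the number line from 0 to 5

def llm_policy(state, actions):
    """Hypothetical stand-in for an LLM heuristic policy.

    Returns a prior probability over actions; a real system would
    query an LLM here. This stub simply favors moving toward the goal.
    """
    scores = [1.0 if a == "+1" else 0.1 for a in actions]
    total = sum(scores)
    return [s / total for s in scores]

class Node:
    def __init__(self, state, prior=1.0):
        self.state = state
        self.prior = prior      # prior probability from the policy
        self.visits = 0
        self.value = 0.0        # sum of rollout rewards
        self.children = {}      # action -> Node

def step(state, action):
    return state + (1 if action == "+1" else -1)

def rollout(state, depth=10):
    # Random rollout; reward 1 if the goal is reached.
    for _ in range(depth):
        if state == GOAL:
            return 1.0
        state = step(state, random.choice(ACTIONS))
    return 1.0 if state == GOAL else 0.0

def puct(parent, child, c=1.5):
    # PUCT: exploitation term plus a prior-weighted exploration bonus,
    # so high-prior actions are searched first.
    q = child.value / child.visits if child.visits else 0.0
    u = c * child.prior * math.sqrt(parent.visits) / (1 + child.visits)
    return q + u

def search(root_state, iterations=200):
    root = Node(root_state)
    for _ in range(iterations):
        node, path = root, [root]
        # Selection: descend via PUCT until reaching a leaf.
        while node.children:
            _, node = max(node.children.items(),
                          key=lambda kv: puct(path[-1], kv[1]))
            path.append(node)
        # Expansion: the policy prior biases which children look promising.
        if node.state != GOAL:
            priors = llm_policy(node.state, ACTIONS)
            for a, p in zip(ACTIONS, priors):
                node.children[a] = Node(step(node.state, a), prior=p)
        # Simulation and backpropagation.
        reward = rollout(node.state)
        for n in path:
            n.visits += 1
            n.value += reward
    # Act greedily with respect to visit counts at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

Because the prior concentrates exploration on a few promising branches, the search visits far fewer nodes than uniform MCTS would need; swapping the stub for a real LLM-backed prior (and an LLM-predicted state model) is the paper's core proposal.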
