Poster in Workshop: Generalization in Planning (GenPlan '23)

Learning Discrete Models for Classical Planning Problems

Forest Agostinelli · Misagh Soltani

Keywords: [ Planning ] [ Deep Neural Networks ] [ learned models ] [ heuristic search ]


Abstract:

For many sequential decision making domains, planning is often necessary to solve problems. However, for domains such as those encountered in robotics, the transition function, also known as the model, is often unknown, and coding such a model by hand is often impractical. While planning could be done with a model trained from observed transitions, such approaches are limited by errors that accumulate when the model is applied across many timesteps, as well as by the inability to reidentify states. Furthermore, even given an accurate model, domain-independent planning methods may not be able to reliably solve problems, while domain-specific information, such as informative heuristics, may not be available. While domain-independent methods exist that can learn domain-specific heuristic functions, such as DeepCubeA, these methods may assume a pre-determined goal. To solve these problems, we introduce DeepCubeAI, a domain-independent algorithm that learns a model that operates in a discrete latent space, learns a heuristic function that generalizes over start and goal states using this learned model, and combines the learned model and learned heuristic function with search to solve problems. Since the latent space is discrete, we can prevent the accumulation of small errors by rounding, and we can reidentify states by simply comparing two binary vectors. In our experiments on a pixel representation of the Rubik's cube and Sokoban, we find that DeepCubeAI is able to apply the model for thousands of steps without accumulating any error. Furthermore, DeepCubeAI solves over 99% of test instances in all domains and generalizes across goal states.
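To illustrate the idea of a discrete latent model, the sketch below shows how a binary latent state might be rolled forward with rounding at every step and how a goal state could be reidentified by exact comparison of binary vectors. This is a minimal illustration, not the implementation from the paper: the Encoder and LatentModel architectures, the dimensions, and the use of PyTorch are all assumptions made for the example.

import torch
import torch.nn as nn

# Hypothetical stand-ins for the learned encoder and latent transition model;
# the architectures here are illustrative only.
class Encoder(nn.Module):
    def __init__(self, obs_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Round the sigmoid output so every latent state is a binary vector.
        return torch.round(torch.sigmoid(self.net(obs)))

class LatentModel(nn.Module):
    def __init__(self, latent_dim: int, num_actions: int):
        super().__init__()
        self.num_actions = num_actions
        self.net = nn.Sequential(
            nn.Linear(latent_dim + num_actions, 256), nn.ReLU(), nn.Linear(256, latent_dim)
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        a_onehot = nn.functional.one_hot(action, self.num_actions).float()
        logits = self.net(torch.cat([state, a_onehot], dim=-1))
        # Rounding after every step keeps the latent state binary, so small
        # prediction errors cannot accumulate across timesteps.
        return torch.round(torch.sigmoid(logits))

obs_dim, latent_dim, num_actions = 32 * 32 * 3, 128, 12  # assumed sizes
encoder, model = Encoder(obs_dim, latent_dim), LatentModel(latent_dim, num_actions)

start_obs = torch.rand(1, obs_dim)  # e.g., a flattened pixel observation
goal_obs = torch.rand(1, obs_dim)

state = encoder(start_obs)
goal = encoder(goal_obs)

# Roll the learned model forward with random actions; reidentify the goal
# by exact comparison of the two binary vectors.
for _ in range(1000):
    action = torch.randint(num_actions, (1,))
    state = model(state, action)
    if torch.equal(state, goal):
        break

Because each latent state is an exact binary vector after rounding, equality checks of this kind would suffice for goal detection and duplicate detection during search.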
