

Oral Poster

SeeA*: Efficient Exploration-Enhanced A* Search by Selective Sampling

Dengwei Zhao · Shikui Tu · Lei Xu

West Ballroom A-D #6503
[ Project Page ]
Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST
 
Oral presentation: Oral Session 2C: Reinforcement Learning
Wed 11 Dec 3:30 p.m. PST — 4:30 p.m. PST

Abstract: In recent years, neural network-guided heuristic search algorithms, such as Monte-Carlo tree search (MCTS) and A$^*$ search, have demonstrated remarkable performance across diverse practical applications. However, the effectiveness of these algorithms heavily relies on the quality of the guiding functions. The optimality of MCTS is not guaranteed when actual rewards are replaced with heuristic estimations, and A$^*$ search may suffer from computational inefficiency in practice due to its limited exploration capability. In this paper, Sampling-exploration enhanced A$^*$ (SeeA$^*$) search is proposed to integrate exploration into A$^*$ search and improve problem-solving efficiency. A dynamic subset of open nodes is constructed through a selective sampling process, and SeeA$^*$ expands the node with the best heuristic value in this subset. The open node with the globally best heuristic value, which A$^*$ search would select, may not be included in this subset, enabling SeeA$^*$ to explore other promising branches. Three sampling techniques are presented for comparative investigation. Furthermore, we theoretically establish the superior efficiency of SeeA$^*$ over A$^*$ search, particularly when the accuracy of the guiding heuristics is insufficient. Experimental results on retrosynthetic planning in organic chemistry, logic synthesis in integrated circuit design, and the classical Sokoban game empirically demonstrate the efficiency of SeeA$^*$, with superior performance compared to state-of-the-art heuristic search algorithms.
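The abstract's selection rule — sample a subset of the open list, then expand the subset's best node rather than the global best — can be sketched in a few lines. This is an illustrative sketch only: the uniform-sampling variant, the `(f_value, state)` node layout, and the function name are assumptions for illustration, not the paper's exact implementation (the paper compares three sampling techniques).

```python
import random

def seea_star_expand(open_list, sample_size, rng=random):
    """One SeeA*-style expansion step (illustrative sketch).

    Unlike A*, which always pops the node with the globally best
    (lowest) heuristic value, SeeA* draws a subset of the open list
    and expands the best node *within that subset*. The global best
    may fall outside the sample, so other branches get explored.
    Uniform sampling is assumed here for simplicity.
    """
    if len(open_list) <= sample_size:
        subset = list(open_list)          # small frontier: behaves like A*
    else:
        subset = rng.sample(open_list, sample_size)
    best = min(subset, key=lambda node: node[0])  # node = (f_value, state)
    open_list.remove(best)
    return best

# Usage: with the whole frontier sampled, this reduces to plain A*.
frontier = [(5, 'a'), (2, 'b'), (9, 'c')]
seea_star_expand(frontier, 3)  # returns (2, 'b'), the global best
```

When `sample_size` is smaller than the frontier, expansion becomes stochastic, which is where the exploration benefit the abstract describes comes from.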
