
Adaptive Online Packing-guided Search for POMDPs
Chenyang Wu · Guoyu Yang · Zongzhang Zhang · Yang Yu · Dong Li · Wulong Liu · Jianye Hao

Wed Dec 08 12:30 AM -- 02:00 AM (PST)
The partially observable Markov decision process (POMDP) provides a general framework for modeling an agent's decision process under state uncertainty, and online planning plays a pivotal role in solving it. A belief is a distribution over states that represents state uncertainty. Methods for large-scale POMDP problems rely on the same idea of sampling both states and observations: instead of exact belief updating, a collection of sampled states is used to approximate the belief, and instead of considering all possible observations, only a set of sampled observations is considered. Inspired by this, we take one step further and propose an online planning algorithm, Adaptive Online Packing-guided Search (AdaOPS), which better approximates beliefs with an adaptive particle filter technique and balances estimation bias and variance by fusing similar observation branches. Theoretically, our algorithm is guaranteed to find an $\epsilon$-optimal policy with high probability given enough planning time under some mild assumptions. We evaluate our algorithm on several tricky POMDP domains, and it outperforms the state of the art in all of them.
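The sampling idea the abstract describes can be illustrated with a generic particle-filter belief update. This is only a minimal sketch of the standard technique, not the AdaOPS algorithm itself; the `transition_sample` and `observation_likelihood` functions are hypothetical placeholders for a domain's generative model.

```python
import random

def particle_filter_update(particles, action, observation,
                           transition_sample, observation_likelihood):
    """Approximate a belief update with sampled states (particles).

    transition_sample(s, a) -> s'      : samples a next state (assumed model)
    observation_likelihood(s', a, o)   : probability of observing o (assumed model)
    """
    # Propagate each sampled state through the (possibly stochastic) transition model.
    propagated = [transition_sample(s, action) for s in particles]
    # Weight each particle by how likely the received observation is in that state.
    weights = [observation_likelihood(s, action, observation) for s in propagated]
    total = sum(weights)
    if total == 0:
        # Particle depletion: no particle explains the observation; fall back to uniform.
        weights = [1.0] * len(propagated)
        total = float(len(propagated))
    probs = [w / total for w in weights]
    # Resample with replacement to obtain the new approximate belief.
    return random.choices(propagated, weights=probs, k=len(particles))
```

AdaOPS extends this basic scheme by adapting the number of particles to the belief's complexity and by fusing observation branches whose beliefs are similar, trading estimation bias against variance.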

Author Information

Chenyang Wu (Nanjing University)
Guoyu Yang (Nanjing University)
Zongzhang Zhang (Nanjing University)
Yang Yu (Nanjing University)
Dong Li (Huawei Noah's Ark Lab)
Wulong Liu (Huawei Noah's Ark Lab)
Jianye Hao (Tianjin University)
