What makes some POMDP problems easy to approximate?
David Hsu · Wee Sun Lee · Nan Rong

Tue Dec 04 05:20 PM -- 05:30 PM (PST)

Point-based algorithms have been surprisingly successful in computing approximately optimal policies for partially observable Markov decision processes (POMDPs) with high-dimensional belief spaces. In this work, we seek to understand the belief-space properties that allow some POMDP problems to be approximated efficiently, and thus help explain the success of point-based algorithms observed in experiments. We show that an approximately optimal POMDP solution can be computed in time polynomial in the covering number of a reachable belief space, the subset of the belief space reachable from a given belief point. We also show that under the weaker condition of a small covering number for an optimal reachable space, the subset of the belief space reachable under an optimal policy, computing an approximately optimal solution is NP-hard. However, given a set of points from an optimal reachable space that covers it well, an approximate solution can be computed in polynomial time. The covering number highlights several properties that help reduce the complexity of POMDP problems in practice, such as fully observed state variables, beliefs with sparse support, smooth beliefs, and circulant state-transition matrices.
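To make the central quantity concrete, here is a minimal sketch (not from the paper) of estimating the covering number of a sampled belief set: the fewest δ-balls, here under L1 distance, needed to cover all sampled beliefs. The greedy center-selection below is a standard heuristic and yields an upper bound on the true covering number; the function name and the toy beliefs are illustrative assumptions.

```python
def greedy_cover(beliefs, delta):
    """Greedily choose centers whose delta-balls (L1 distance) cover `beliefs`.

    beliefs: list of belief points, each a tuple of state probabilities.
    delta: covering radius.
    Returns the chosen centers; their count upper-bounds the covering number.
    """
    centers = []
    uncovered = list(beliefs)
    while uncovered:
        b = uncovered[0]          # pick any uncovered belief as a new center
        centers.append(b)
        # keep only beliefs farther than delta from the new center
        uncovered = [u for u in uncovered
                     if sum(abs(x - y) for x, y in zip(u, b)) > delta]
    return centers

# Beliefs with sparse support cluster near low-dimensional faces of the
# simplex, so few centers suffice -- one property the abstract highlights.
beliefs = [(1.0, 0.0, 0.0), (0.9, 0.1, 0.0), (0.0, 1.0, 0.0), (0.0, 0.95, 0.05)]
print(len(greedy_cover(beliefs, 0.3)))  # → 2
```

A small cover means a point-based algorithm can represent the value function well with few belief points, which is the intuition behind the polynomial-time result.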

Author Information

David Hsu (National University of Singapore)
Wee Sun Lee (National University of Singapore)
Nan Rong (Cornell University)