

Poster

Provably Efficient Reinforcement Learning for Infinite-Horizon Average-Reward Linear MDPs

Kihyuk Hong · Yufan Zhang · Ambuj Tewari

East Exhibit Hall A-C #4802
Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: We resolve the open problem of designing a computationally efficient algorithm for infinite-horizon average-reward linear Markov Decision Processes (MDPs) with $\widetilde{\mathcal{O}}(\sqrt{T})$ regret. Previous approaches with $\widetilde{\mathcal{O}}(\sqrt{T})$ regret either suffer from computational inefficiency or require strong assumptions on the dynamics, such as ergodicity. In this paper, we approximate the average-reward setting by the discounted setting and show that running an optimistic value iteration-based algorithm for the discounted setting achieves $\widetilde{\mathcal{O}}(\sqrt{T})$ regret when the discount factor $\gamma$ is tuned appropriately. The challenge in this approximation approach is obtaining a regret bound with a sharp dependence on the effective horizon $1 / (1 - \gamma)$. We use a computationally efficient clipping operator that constrains the span of the optimistic state value function estimate, yielding a sharp regret bound in terms of the effective horizon and hence $\widetilde{\mathcal{O}}(\sqrt{T})$ regret.
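
For intuition, here is a minimal sketch of what a span-clipping step of this kind can look like (an illustrative form written over a finite set of evaluated states; the paper's exact operator and its choice of span parameter may differ). Given an optimistic value estimate $V$ and an assumed span parameter $H$ (e.g., an upper bound related to the span of the optimal bias function, taken as an input here), truncate $V$ pointwise at its minimum plus $H$:

$$\operatorname{clip}_H(V)(s) \;:=\; \min\Big\{ V(s),\ \min_{s'} V(s') + H \Big\}.$$

By construction, $\operatorname{sp}(\operatorname{clip}_H(V)) := \max_s \operatorname{clip}_H(V)(s) - \min_s \operatorname{clip}_H(V)(s) \le H$, and the operation is computable in time linear in the number of evaluated states. Keeping the span bounded by a quantity that does not grow with the full effective horizon $1/(1-\gamma)$ is what enables the sharper horizon dependence referred to in the abstract.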
