

Poster in Workshop: 3rd Offline Reinforcement Learning Workshop: Offline RL as a "Launchpad"

Matrix Estimation for Offline Evaluation in Reinforcement Learning with Low-Rank Structure

Xumei Xi · Christina Yu · Yudong Chen


Abstract:

We consider offline Reinforcement Learning (RL), where the agent does not interact with the environment and must rely on offline data collected using a behavior policy. Previous works provide policy evaluation guarantees when the target policy to be evaluated is covered by the behavior policy, that is, state-action pairs visited by the target policy must also be visited by the behavior policy. We show that when the MDP has a latent low-rank structure, this coverage condition can be relaxed. Building on the connection to weighted matrix completion with non-uniform observations, we propose an offline policy evaluation algorithm that leverages the low-rank structure to estimate the values of uncovered state-action pairs. Our algorithm does not require a known feature representation, and our finite-sample error bound involves a novel measure quantifying the discrepancy between the behavior and target policies in the spectral space. We provide concrete examples where our algorithm achieves accurate estimation while existing coverage conditions are not satisfied.
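To illustrate the matrix-completion viewpoint in the abstract, the sketch below fills in a partially observed Q-value matrix and then forms a plug-in estimate of the target policy's value. This is a minimal sketch under an assumed low-rank structure, not the authors' weighted algorithm: the function names (`complete_low_rank`, `plug_in_value`), the synthetic data, and the use of a standard truncated-SVD imputation heuristic in place of the paper's weighted formulation are all illustrative assumptions.

```python
import numpy as np

def complete_low_rank(observed, mask, rank, n_iters=200, tol=1e-6):
    """Fill in a partially observed matrix by iterative truncated-SVD imputation.

    observed: (S, A) array; entries are trusted only where mask is True.
    mask:     (S, A) boolean array marking covered state-action pairs.
    rank:     assumed rank of the underlying value matrix.
    """
    X = np.where(mask, observed, 0.0)                    # start missing entries at zero
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_new = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]  # best rank-r approximation
        X_new = np.where(mask, observed, X_new)          # keep observed entries fixed
        if np.linalg.norm(X_new - X) <= tol * max(np.linalg.norm(X), 1.0):
            return X_new
        X = X_new
    return X

def plug_in_value(q_hat, target_policy, initial_dist):
    """Plug-in value estimate from a completed Q matrix:
    V(s) = sum_a pi(a|s) * Q_hat(s, a), averaged over the initial state distribution."""
    v = (target_policy * q_hat).sum(axis=1)
    return float(initial_dist @ v)

# Toy usage: a synthetic rank-2 Q matrix, partially observed under the behavior policy.
rng = np.random.default_rng(0)
S, A, r = 50, 10, 2
Q_true = rng.normal(size=(S, r)) @ rng.normal(size=(r, A))
mask = rng.random((S, A)) < 0.5                          # coverage pattern of the behavior policy
Q_hat = complete_low_rank(np.where(mask, Q_true, 0.0), mask, rank=r)

pi = rng.random((S, A)); pi /= pi.sum(axis=1, keepdims=True)  # target policy pi(a|s)
rho0 = np.full(S, 1.0 / S)                                    # initial state distribution
print(plug_in_value(Q_hat, pi, rho0), plug_in_value(Q_true, pi, rho0))
```

The point of the toy example is only that uncovered (masked-out) state-action pairs can still receive value estimates once low-rank structure is exploited; how well this works under realistic, non-uniform coverage is exactly what the paper's weighted analysis and discrepancy measure address.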
