Poster
Corruption-Robust Offline Reinforcement Learning with General Function Approximation
Chenlu Ye · Rui Yang · Quanquan Gu · Tong Zhang
Great Hall & Hall B1+B2 (level 1) #2014
Abstract:
We investigate the problem of corruption robustness in offline reinforcement learning (RL) with general function approximation, where an adversary can corrupt each sample in the offline dataset, and the corruption level $\zeta \ge 0$ quantifies the cumulative corruption amount over $n$ episodes and $H$ steps. Our goal is to find a policy that is robust to such corruption and minimizes the suboptimality gap with respect to the optimal policy for the uncorrupted Markov decision processes (MDPs). Drawing inspiration from the uncertainty-weighting technique from the robust online RL setting \citep{he2022nearly,ye2022corruptionrobust}, we design a new uncertainty weight iteration procedure to efficiently compute uncertainty weights on batched samples and propose a corruption-robust algorithm for offline RL. Notably, under the assumption of single policy coverage and the knowledge of $\zeta$, our proposed algorithm achieves a suboptimality bound that is worsened by an additive factor of $\mathcal{O}\big(\zeta \cdot (\mathrm{CC}(\lambda, \hat{\mathcal{F}}, \mathcal{Z}_H^n))^{1/2} (C(\hat{\mathcal{F}}, \mu))^{-1} n^{-1}\big)$ due to the corruption. Here $\mathrm{CC}(\lambda, \hat{\mathcal{F}}, \mathcal{Z}_H^n)$ is the coverage coefficient that depends on the regularization parameter $\lambda$, the confidence set $\hat{\mathcal{F}}$, and the dataset $\mathcal{Z}_H^n$, and $C(\hat{\mathcal{F}}, \mu)$ is a coefficient that depends on $\hat{\mathcal{F}}$ and the underlying data distribution $\mu$. When specialized to linear MDPs, the corruption-dependent error term reduces to $\mathcal{O}(\zeta d n^{-1})$ with $d$ being the dimension of the feature map, which matches the existing lower bound for corrupted linear MDPs. This suggests that our analysis is tight in terms of the corruption-dependent term.
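Based only on the abstract, one way to picture an uncertainty weight iteration on batched samples is as a fixed-point computation of per-sample weights from the batch's own covariance. The sketch below is a hypothetical illustration for linear features, not the paper's procedure; the function name `uncertainty_weights`, the parameters `alpha`, `lam`, `n_iters`, and the specific fixed-point update are assumptions made for exposition.

```python
# Hypothetical sketch of a batched uncertainty-weight iteration for linear
# features. This is an illustrative assumption, not the algorithm from the paper.
import numpy as np

def uncertainty_weights(Phi, lam=1.0, alpha=1.0, n_iters=10):
    """Iterate per-sample weights w_i >= 1 toward the fixed point
    w_i = max(1, ||phi_i||_{Sigma_w^{-1}} / alpha), where Sigma_w is the
    weight-adjusted, regularized covariance built from the same batch."""
    n, d = Phi.shape
    w = np.ones(n)
    for _ in range(n_iters):
        # Weighted, regularized covariance: lam * I + sum_i phi_i phi_i^T / w_i^2
        Sigma = lam * np.eye(d) + (Phi / w[:, None] ** 2).T @ Phi
        Sigma_inv = np.linalg.inv(Sigma)
        # Elliptical-norm uncertainty of each sample under the current weights
        bonus = np.sqrt(np.einsum("ij,jk,ik->i", Phi, Sigma_inv, Phi))
        # Down-weight samples whose uncertainty exceeds the threshold alpha
        w = np.maximum(1.0, bonus / alpha)
    return w

# Example: weights for a random batch of 500 feature vectors in dimension 8
rng = np.random.default_rng(0)
Phi = rng.normal(size=(500, 8))
w = uncertainty_weights(Phi)
```

In a sketch like this, the resulting weights would be used to down-weight high-uncertainty samples (where corruption can do the most damage) in a subsequent weighted regression or pessimistic value estimate; the exact way the weights enter the offline RL algorithm is specified in the paper, not here.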