Poster
Policy Finetuning: Bridging Sample-Efficient Offline and Online Reinforcement Learning
Tengyang Xie · Nan Jiang · Huan Wang · Caiming Xiong · Yu Bai

Tue Dec 07 08:30 AM -- 10:00 AM (PST)
Recent theoretical work studies sample-efficient reinforcement learning (RL) extensively in two settings: learning interactively in the environment (online RL), or learning from an offline dataset (offline RL). However, existing algorithms and theories for learning near-optimal policies in these two settings are rather different and disconnected. Towards bridging this gap, this paper initiates the theoretical study of *policy finetuning*, that is, online RL where the learner has additional access to a "reference policy" $\mu$ close to the optimal policy $\pi_\star$ in a certain sense. We consider the policy finetuning problem in episodic Markov Decision Processes (MDPs) with $S$ states, $A$ actions, and horizon length $H$. We first design a sharp *offline reduction* algorithm---which simply executes $\mu$ and runs offline policy optimization on the collected dataset---that finds an $\varepsilon$-near-optimal policy within $\widetilde{O}(H^3SC^\star/\varepsilon^2)$ episodes, where $C^\star$ is the single-policy concentrability coefficient between $\mu$ and $\pi_\star$. This offline result is the first that matches the sample complexity lower bound in this setting, and resolves a recent open question in offline RL. We then establish an $\Omega(H^3S\min\{C^\star, A\}/\varepsilon^2)$ sample complexity lower bound for *any* policy finetuning algorithm, including those that can adaptively explore the environment. This implies that---perhaps surprisingly---the optimal policy finetuning algorithm is either offline reduction or a purely online RL algorithm that does not use $\mu$. Finally, we design a new hybrid offline/online algorithm for policy finetuning that achieves better sample complexity than both vanilla offline reduction and purely online RL algorithms, in a relaxed setting where $\mu$ only satisfies concentrability partially up to a certain time step. Overall, our results offer a quantitative understanding of the benefit of a good reference policy, and take a step towards bridging offline and online RL.
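The offline reduction described in the abstract (execute $\mu$ to collect episodes, then run offline policy optimization on that dataset) can be illustrated with a minimal tabular sketch. The sketch below is an illustrative assumption, not the paper's exact algorithm: it uses a simple Hoeffding-style lower-confidence-bound (pessimism) bonus inside value iteration, a randomly generated MDP, and made-up names such as `offline_reduction` and `c_bonus`; the paper's sharp $\widetilde{O}(H^3SC^\star/\varepsilon^2)$ rate relies on a more careful analysis and bonus design.

```python
import numpy as np

def offline_reduction(P, R, mu, N, H, S, A, c_bonus=1.0, rng=None):
    """Sketch of the offline-reduction idea: roll out the reference policy mu
    for N episodes, then run a pessimistic (lower-confidence-bound)
    value-iteration pass on the resulting empirical model. Illustrative only."""
    rng = rng or np.random.default_rng(0)
    counts = np.zeros((H, S, A))            # visit counts n_h(s, a)
    rew_sum = np.zeros((H, S, A))           # summed observed rewards
    trans = np.zeros((H, S, A, S))          # empirical transition counts

    # Phase 1: collect an offline dataset by executing mu.
    for _ in range(N):
        s = 0                               # fixed initial state (illustrative)
        for h in range(H):
            a = mu[h, s]
            s_next = rng.choice(S, p=P[h, s, a])
            counts[h, s, a] += 1
            rew_sum[h, s, a] += R[h, s, a]
            trans[h, s, a, s_next] += 1
            s = s_next

    # Phase 2: pessimistic value iteration on the empirical MDP.
    V = np.zeros((H + 1, S))
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        n = np.maximum(counts[h], 1)
        r_hat = rew_sum[h] / n
        P_hat = trans[h] / n[..., None]
        bonus = c_bonus * np.sqrt(np.log(2 * S * A * H * N) / n)   # LCB penalty
        Q = np.clip(r_hat + P_hat @ V[h + 1] - bonus, 0.0, H - h)  # pessimism
        pi[h] = Q.argmax(axis=1)
        V[h] = Q.max(axis=1)
    return pi

if __name__ == "__main__":
    H, S, A, N = 5, 4, 3, 2000
    rng = np.random.default_rng(1)
    P = rng.dirichlet(np.ones(S), size=(H, S, A))   # random transition kernel
    R = rng.random((H, S, A))                       # rewards in [0, 1]
    mu = rng.integers(A, size=(H, S))               # a fixed reference policy
    print(offline_reduction(P, R, mu, N, H, S, A))
```

In this line of work, $C^\star$ is typically the worst-case ratio between the state-action occupancy measures of $\pi_\star$ and $\mu$, so the sketch's sample requirement grows with how poorly $\mu$ covers the states and actions visited by $\pi_\star$.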

Author Information

Tengyang Xie (University of Illinois at Urbana-Champaign)
Nan Jiang (University of Illinois at Urbana-Champaign)
Huan Wang (Salesforce Research)

Huan Wang is a senior research scientist at Salesforce Research. His research interests include machine learning, big data analytics, computer vision, and NLP. He was previously a research scientist at Microsoft AI Research and Yahoo's New York Labs, and an adjunct professor at the engineering school of New York University. He received his Ph.D. in Computer Science from Yale University in 2013. Before that, he received an M.Phil. from The Chinese University of Hong Kong and a B.Eng. from Zhejiang University, both in information engineering.

Caiming Xiong (State University of New York at Buffalo)
Yu Bai (Salesforce Research)
