We introduce the technique of adaptive discretization to design an efficient model-based episodic reinforcement learning algorithm in large (potentially continuous) state-action spaces. Our algorithm is based on optimistic one-step value iteration, extended to maintain an adaptive discretization of the space. From a theoretical perspective, we provide worst-case regret bounds for our algorithm that are competitive with those of state-of-the-art model-based algorithms. Moreover, our bounds are obtained via a modular proof technique, which can potentially be extended to incorporate additional structure in the problem.
From an implementation standpoint, our algorithm has much lower storage and computational requirements, owing to the more efficient partition of the state and action spaces that it maintains. We illustrate this via experiments on several canonical control problems, which show that our algorithm empirically performs significantly better than fixed discretization in terms of both faster convergence and lower memory usage. Interestingly, we observe empirically that while fixed-discretization model-based algorithms vastly outperform their model-free counterparts, the two achieve comparable performance with adaptive discretization.
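To make the idea concrete, below is a minimal sketch in Python of the kind of tree-based adaptive partition such an algorithm maintains: a cell is refined only once it has been visited often enough relative to its size, so fine resolution is allocated only where the algorithm actually collects data. The class names, the 2D domain, and the specific splitting threshold are illustrative assumptions, not the paper's exact specification.

```python
# Minimal sketch of a tree-based adaptive partition over a 2D
# (state, action) space normalized to [0, 1]^2. All names and the
# splitting threshold are illustrative, not the paper's exact rule.

class Cell:
    def __init__(self, center, radius):
        self.center = center    # midpoint of the cell
        self.radius = radius    # half the side length
        self.count = 0          # number of visits to this cell
        self.children = []      # populated once the cell is split

    def contains(self, point):
        return all(abs(p - c) <= self.radius
                   for p, c in zip(point, self.center))

    def split(self):
        # Refine into four sub-cells, halving the radius in each dimension.
        r = self.radius / 2
        self.children = [
            Cell((self.center[0] + dx, self.center[1] + dy), r)
            for dx in (-r, r) for dy in (-r, r)
        ]


class AdaptivePartition:
    def __init__(self):
        self.root = Cell(center=(0.5, 0.5), radius=0.5)

    def select(self, point):
        """Descend the tree to the active (leaf) cell containing `point`."""
        cell = self.root
        while cell.children:
            cell = next(c for c in cell.children if c.contains(point))
        return cell

    def update(self, point):
        """Record a visit; refine the cell once it has enough samples."""
        cell = self.select(point)
        cell.count += 1
        # Illustrative rule: split once visits reach (1 / radius)^2, so
        # finer cells appear only in frequently visited regions.
        if cell.count >= (1.0 / cell.radius) ** 2:
            cell.split()


partition = AdaptivePartition()
for sample in [(0.3, 0.7), (0.31, 0.69), (0.32, 0.71), (0.29, 0.68)]:
    partition.update(sample)   # repeated nearby visits trigger refinement
```

In contrast, a fixed discretization allocates the same resolution everywhere up front, which is where the storage and computational savings of the adaptive scheme come from.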
Author Information
Sean Sinclair (Cornell University)
I am a second-year PhD student in Operations Research and Information Engineering at Cornell University. I completed a BSc in Mathematics and Computer Science at McGill University, where I worked on a project with Tony Humphries. Before returning to graduate school, I spent two and a half years teaching mathematics, science, and English in a small community in rural Ghana with the Peace Corps, and afterwards worked at National Life as a financial analyst. In general, I am interested in machine learning, statistics, and differential equations. My current work is on the theoretical underpinnings of reinforcement learning (RL) in metric spaces. These are natural models for systems involving real-time sequential decision making over continuous spaces. Facilitating RL's use on memory-constrained devices presents several challenges. The first is learning an "optimal" discretization, trading off memory requirements against algorithmic performance. The second is learning the metric itself when it is unclear which metric is best suited to the problem. Balancing these addresses the two fundamental requirements of implementable RL: approximating the optimal policy, and controlling the statistical complexity, i.e. the number of samples required to learn a near-optimal policy.
Tianyu Wang (Duke University)
Gauri Jain (Cornell University)
Siddhartha Banerjee (Cornell University)
Christina Yu (Cornell University)
More from the Same Authors
- 2021 : Pseudo-Competitive Games and Algorithmic Pricing »
  Chamsi Hssaine · Vijay Kamble · Siddhartha Banerjee
- 2022 : Matrix Estimation for Offline Evaluation in Reinforcement Learning with Low-Rank Structure »
  Xumei Xi · Christina Yu · Yudong Chen
- 2022 : A Causal Inference Framework for Network Interference with Panel Data »
  Sarah Cen · Anish Agarwal · Christina Yu · Devavrat Shah
- 2022 : Exploiting Neighborhood Interference with Low Order Interactions under Unit Randomized Design »
  Mayleen Cortez · Matthew Eichhorn · Christina Yu
- 2022 Poster: Staggered Rollout Designs Enable Causal Inference Under Interference Without Network Knowledge »
  Mayleen Cortez · Matthew Eichhorn · Christina Yu
- 2019 : Poster and Coffee Break 2 »
  Karol Hausman · Kefan Dong · Ken Goldberg · Lihong Li · Lin Yang · Lingxiao Wang · Lior Shani · Liwei Wang · Loren Amdahl-Culleton · Lucas Cassano · Marc Dymetman · Marc Bellemare · Marcin Tomczak · Margarita Castro · Marius Kloft · Marius-Constantin Dinu · Markus Holzleitner · Martha White · Mengdi Wang · Michael Jordan · Mihailo Jovanovic · Ming Yu · Minshuo Chen · Moonkyung Ryu · Muhammad Zaheer · Naman Agarwal · Nan Jiang · Niao He · Nikolaus Yasui · Nikos Karampatziakis · Nino Vieillard · Ofir Nachum · Olivier Pietquin · Ozan Sener · Pan Xu · Parameswaran Kamalaruban · Paul Mineiro · Paul Rolland · Philip Amortila · Pierre-Luc Bacon · Prakash Panangaden · Qi Cai · Qiang Liu · Quanquan Gu · Raihan Seraj · Richard Sutton · Rick Valenzano · Robert Dadashi · Rodrigo Toro Icarte · Roshan Shariff · Roy Fox · Ruosong Wang · Saeed Ghadimi · Samuel Sokota · Sean Sinclair · Sepp Hochreiter · Sergey Levine · Sergio Valcarcel Macua · Sham Kakade · Shangtong Zhang · Sheila McIlraith · Shie Mannor · Shimon Whiteson · Shuai Li · Shuang Qiu · Wai Lok Li · Siddhartha Banerjee · Sitao Luan · Tamer Basar · Thinh Doan · Tianhe Yu · Tianyi Liu · Tom Zahavy · Toryn Klassen · Tuo Zhao · Vicenç Gómez · Vincent Liu · Volkan Cevher · Wesley Suttle · Xiao-Wen Chang · Xiaohan Wei · Xiaotong Liu · Xingguo Li · Xinyi Chen · Xingyou Song · Yao Liu · YiDing Jiang · Yihao Feng · Yilun Du · Yinlam Chow · Yinyu Ye · Yishay Mansour · Yonathan Efroni · Yongxin Chen · Yuanhao Wang · Bo Dai · Chen-Yu Wei · Harsh Shrivastava · Hongyang Zhang · Qinqing Zheng · SIDDHARTHA SATPATHI · Xueqing Liu · Andreu Vall
- 2019 : Poster and Coffee Break 1 »
  Aaron Sidford · Aditya Mahajan · Alejandro Ribeiro · Alex Lewandowski · Ali H Sayed · Ambuj Tewari · Angelika Steger · Anima Anandkumar · Asier Mujika · Hilbert J Kappen · Bolei Zhou · Byron Boots · Chelsea Finn · Chen-Yu Wei · Chi Jin · Ching-An Cheng · Christina Yu · Clement Gehring · Craig Boutilier · Dahua Lin · Daniel McNamee · Daniel Russo · David Brandfonbrener · Denny Zhou · Devesh Jha · Diego Romeres · Doina Precup · Dominik Thalmeier · Eduard Gorbunov · Elad Hazan · Elena Smirnova · Elvis Dohmatob · Emma Brunskill · Enrique Munoz de Cote · Ethan Waldie · Florian Meier · Florian Schaefer · Ge Liu · Gergely Neu · Haim Kaplan · Hao Sun · Hengshuai Yao · Jalaj Bhandari · James A Preiss · Jayakumar Subramanian · Jiajin Li · Jieping Ye · Jimmy Smith · Joan Bas Serrano · Joan Bruna · John Langford · Jonathan Lee · Jose A. Arjona-Medina · Kaiqing Zhang · Karan Singh · Yuping Luo · Zafarali Ahmed · Zaiwei Chen · Zhaoran Wang · Zhizhong Li · Zhuoran Yang · Ziping Xu · Ziyang Tang · Yi Mao · David Brandfonbrener · Shirli Di-Castro · Riashat Islam · Zuyue Fu · Abhishek Naik · Saurabh Kumar · Benjamin Petit · Angeliki Kamoutsi · Simone Totaro · Arvind Raghunathan · Rui Wu · Donghwan Lee · Dongsheng Ding · Alec Koppel · Hao Sun · Christian Tjandraatmadja · Mahdi Karami · Jincheng Mei · Chenjun Xiao · Junfeng Wen · Zichen Zhang · Ross Goroshin · Mohammad Pezeshki · Jiaqi Zhai · Philip Amortila · Shuo Huang · Mariya Vasileva · El houcine Bergou · Adel Ahmadyan · Haoran Sun · Sheng Zhang · Lukas Gruber · Yuanhao Wang · Tetiana Parshakova
- 2019 Poster: Nonparametric Contextual Bandits in Metric Spaces with Unknown Metric »
  Nirandika Wanigasekara · Christina Yu
- 2017 : Iterative Collaborative Filtering for Sparse Matrix Estimation »
  Christina Lee
- 2017 Workshop: Nearest Neighbors for Modern Applications with Massive Data: An Age-old Solution with New Challenges »
  George H Chen · Devavrat Shah · Christina Lee
- 2017 Poster: Thy Friend is My Friend: Iterative Collaborative Filtering for Sparse Matrix Estimation »
  Christian Borgs · Jennifer Chayes · Christina Lee · Devavrat Shah
- 2016 Poster: Blind Regression: Nonparametric Regression for Latent Variable Models via Collaborative Filtering »
  Dogyoon Song · Christina Lee · Yihua Li · Devavrat Shah
- 2015 Poster: Fast Bidirectional Probability Estimation in Markov Models »
  Siddhartha Banerjee · Peter Lofgren
- 2013 Poster: Computing the Stationary Distribution Locally »
  Christina Lee · Asuman Ozdaglar · Devavrat Shah