
Learning the Linear Quadratic Regulator from Nonlinear Observations
Zakaria Mhammedi · Dylan Foster · Max Simchowitz · Dipendra Misra · Wen Sun · Akshay Krishnamurthy · Alexander Rakhlin · John Langford

Tue Dec 08 09:00 AM -- 11:00 AM (PST) @ Poster Session 1 #423

We introduce a new problem setting for continuous control called the LQR with Rich Observations, or RichLQR. In our setting, the environment is summarized by a low-dimensional continuous latent state with linear dynamics and quadratic costs, but the agent operates on high-dimensional, nonlinear observations such as images from a camera. To enable sample-efficient learning, we assume that the learner has access to a class of decoder functions (e.g., neural networks) that is flexible enough to capture the mapping from observations to latent states. We introduce a new algorithm, RichID, which learns a near-optimal policy for the RichLQR with sample complexity scaling only with the dimension of the latent state space and the capacity of the decoder function class. RichID is oracle-efficient and accesses the decoder class only through calls to a least-squares regression oracle. To our knowledge, our results constitute the first provable sample complexity guarantee for continuous control with an unknown nonlinearity in the system model.
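The setting above can be illustrated with a toy numerical sketch. This is a hypothetical instance for intuition only, not the paper's RichID algorithm: a low-dimensional latent state evolves with linear dynamics, the learner sees only a fixed nonlinear lifting of that state, and a decoder is fit by least-squares regression from observations back to latent states (here a linear decoder stands in for the flexible decoder class; the map `q`, the matrices, and all dimensions are made up for the example).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance of the RichLQR setting (illustration only):
# a 2-D latent state with stable linear dynamics, observed through a
# fixed nonlinear "camera" map q.
d_x, d_o, T = 2, 8, 500
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # latent dynamics matrix
B = np.array([[0.0], [1.0]])             # control matrix
W = rng.standard_normal((d_o, d_x))      # random lifting used inside q

def q(x):
    """Nonlinear observation map: random linear lift followed by tanh."""
    return np.tanh(W @ x)

# Roll out the latent system under random exploratory inputs.
xs, obs = [], []
x = np.zeros(d_x)
for _ in range(T):
    xs.append(x)
    obs.append(q(x))
    u = rng.standard_normal(1)
    x = A @ x + B @ u + 0.01 * rng.standard_normal(d_x)

X = np.array(xs)    # latent states (hidden from the learner)
O = np.array(obs)   # high-dimensional nonlinear observations

# Least-squares "decoder" fit: a linear map from observations to latent
# states, standing in for a regression oracle over a decoder class.
F, *_ = np.linalg.lstsq(O, X, rcond=None)
err = np.linalg.norm(O @ F - X) / np.linalg.norm(X)
print(f"relative decoding error: {err:.3f}")
```

Because the latent state here is recoverable from the observations, even this crude decoder achieves low reconstruction error; the paper's contribution is doing this provably and sample-efficiently with a general decoder class accessed only through a least-squares oracle.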

Author Information

Zakaria Mhammedi (The Australian National University and Data61)
Dylan Foster (MIT)
Max Simchowitz (Berkeley)
Dipendra Misra (Microsoft Research, NY)
Wen Sun (Microsoft Research NYC)
Akshay Krishnamurthy (Microsoft)
Alexander Rakhlin (MIT)
John Langford (Microsoft Research New York)
