Poster
TaSIL: Taylor Series Imitation Learning
Daniel Pfrommer · Thomas Zhang · Stephen Tu · Nikolai Matni
We propose Taylor Series Imitation Learning (TaSIL), a simple augmentation to standard behavior cloning losses in the context of continuous control. TaSIL penalizes deviations in the higher-order Taylor series terms between the learned and expert policies. We show that experts satisfying a notion of incremental input-to-state stability are easy to learn, in the sense that a small TaSIL-augmented imitation loss over expert trajectories guarantees a small imitation loss over trajectories generated by the learned policy. We provide sample-complexity bounds for TaSIL that scale as $\tilde{\mathcal{O}}(1/n)$ in the realizable setting, for $n$ the number of expert demonstrations. Finally, we demonstrate experimentally the relationship between the robustness of the expert policy and the order of Taylor expansion required in TaSIL, and compare standard Behavior Cloning, DART, and DAgger with TaSIL-loss-augmented variants. In all cases, we show significant improvement over baselines across a variety of MuJoCo tasks.
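The core idea above can be sketched in a few lines: augment the usual behavior-cloning L2 loss with a term penalizing the mismatch between the Jacobians of the learned and expert policies with respect to the state (the first-order Taylor term). The sketch below, using JAX automatic differentiation, is a minimal illustration of this idea, not the paper's implementation; the function name `tasil_loss` and the hyperparameter `jac_weight` are our own.

```python
import jax
import jax.numpy as jnp

def tasil_loss(learner, expert, states, jac_weight=1.0):
    """First-order TaSIL-style loss sketch (illustrative, not the authors' code).

    `learner` and `expert` map a state vector to an action vector.
    The zeroth-order term is the standard behavior-cloning L2 loss; the
    first-order term penalizes mismatch between the policies' Jacobians
    with respect to the state. `jac_weight` is an assumed hyperparameter.
    """
    def per_state(x):
        l0 = jnp.sum((learner(x) - expert(x)) ** 2)  # action mismatch
        jac_diff = jax.jacobian(learner)(x) - jax.jacobian(expert)(x)
        l1 = jnp.sum(jac_diff ** 2)                  # Jacobian mismatch
        return l0 + jac_weight * l1
    return jnp.mean(jax.vmap(per_state)(states))
```

Higher-order Taylor terms, which the experiments relate to the expert's robustness, could in principle be matched the same way by nesting `jax.jacobian` calls, at additional computational cost.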
Author Information
Daniel Pfrommer (Massachusetts Institute of Technology)
Thomas Zhang (University of Pennsylvania)
Stephen Tu (Google)
Nikolai Matni (University of Pennsylvania)
More from the Same Authors
- 2022 : Visual Backtracking Teleoperation: A Data Collection Protocol for Offline Image-Based RL »
  David Brandfonbrener · Stephen Tu · Avi Singh · Stefan Welker · Chad Boodoo · Nikolai Matni · Jacob Varley
- 2022 Poster: Learning with little mixing »
  Ingvar Ziemann · Stephen Tu
- 2019 Poster: Finite-time Analysis of Approximate Policy Iteration for the Linear Quadratic Regulator »
  Karl Krauth · Stephen Tu · Benjamin Recht
- 2019 Poster: Certainty Equivalence is Efficient for Linear Quadratic Control »
  Horia Mania · Stephen Tu · Benjamin Recht
- 2018 Poster: Regret Bounds for Robust Adaptive Control of the Linear Quadratic Regulator »
  Sarah Dean · Horia Mania · Nikolai Matni · Benjamin Recht · Stephen Tu