Poster
Provably and Practically Efficient Adversarial Imitation Learning with General Function Approximation
Tian Xu · Zhilong Zhang · Ruishuo Chen · Yihao Sun · Yang Yu
West Ballroom A-D #6612
As a prominent category of imitation learning methods, adversarial imitation learning (AIL) has achieved significant practical success, aided by neural network approximation. However, existing theoretical studies of AIL are largely limited to simplified settings such as tabular and linear function approximation, and they involve complex algorithmic designs that hinder practical implementation, highlighting a gap between theory and practice. In this paper, we explore the theoretical underpinnings of online AIL with general function approximation. We introduce a new method called optimization-based AIL (OPT-AIL), which centers on performing online optimization for rewards and optimism-regularized Bellman error minimization for Q-value functions. Theoretically, we prove that OPT-AIL achieves polynomial expert sample complexity and interaction complexity for learning near-expert policies. To our knowledge, OPT-AIL is the first provably efficient AIL method with general function approximation. Practically, OPT-AIL requires only the approximate optimization of two objectives, thereby facilitating practical implementation. Empirical studies demonstrate that OPT-AIL outperforms previous state-of-the-art deep AIL methods on several challenging tasks.
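The abstract describes OPT-AIL as alternating two approximate optimizations: an online update of the reward and an optimism-regularized Bellman error minimization for the Q-value function. Below is a minimal numpy sketch of that two-objective loop on a toy problem. The linear parameterization, the feature map `phi`, the fixed batches, and the optimism weight `lam` are all illustrative assumptions for this sketch, not the paper's implementation, which uses general (e.g., neural network) function approximation.

```python
import numpy as np

# Toy setup (assumed for illustration): linear reward r_w(s,a) = w . phi(s,a)
# and linear Q_theta(s,a) = theta . phi(s,a) over a hand-made feature map.
dim, gamma, lr = 8, 0.99, 0.1
w = np.zeros(dim)      # reward parameters
theta = np.zeros(dim)  # Q-value parameters

def phi(s, a):
    # Hypothetical feature map for a toy continuous-state, binary-action problem.
    v = np.cos(np.arange(1, dim + 1) * (s + 2.0 * a + 1.0))
    return v / np.linalg.norm(v)

# Fake (state, action) batches standing in for expert demonstrations and
# trajectories collected by the current learner policy.
expert = [(0.1, 0), (0.2, 1), (0.3, 0)]
learner = [(0.9, 1), (0.8, 0), (0.7, 1)]

for _ in range(50):
    # (1) Online reward optimization: a gradient step that pushes the reward
    # up on expert state-actions and down on learner state-actions.
    grad_w = (sum(phi(s, a) for s, a in expert) / len(expert)
              - sum(phi(s, a) for s, a in learner) / len(learner))
    w += lr * grad_w

    # (2) Optimism-regularized Bellman error minimization for Q: a semi-gradient
    # step on the squared Bellman residual under the current reward, minus a
    # small optimism bonus at the initial state (lam is an assumed weight).
    lam = 0.1
    grad_theta = np.zeros(dim)
    for (s, a), (s2, a2) in zip(learner, learner[1:] + learner[:1]):
        residual = theta @ phi(s, a) - (w @ phi(s, a) + gamma * theta @ phi(s2, a2))
        grad_theta += residual * phi(s, a)
    grad_theta -= lam * phi(0.0, 0)  # optimism: favor larger Q at the start state
    theta -= lr * grad_theta
```

After training, the learned reward separates the two batches (expert state-actions score higher than learner ones), which is the signal the policy-update step of an AIL method would then exploit.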