Tutorial
Imitation Learning and its Application to Natural Language Generation
Kyunghyun Cho · Hal Daumé III
West Exhibition Hall C, B3
Imitation learning is a learning paradigm that interpolates between reinforcement learning on one extreme and supervised learning on the other. In the specific case of generating structured outputs, as in natural language generation, imitation learning allows us to train generation policies without the strong supervision of the detailed generation procedure required in supervised learning, and without relying only on a sparse reward signal as in reinforcement learning. Imitation learning accomplishes this by exploiting the availability of potentially suboptimal "experts" that provide supervision along an execution trajectory of the policy. In the first part of this tutorial, we give an overview of the imitation learning paradigm and a suite of practical imitation learning algorithms. We then consider the specific application of natural language generation, framing this problem as a sequential decision-making process. Under this view, we demonstrate how imitation learning can be successfully applied to natural language generation, opening the door to a range of ways to learn policies that generate natural language sentences beyond naive left-to-right autoregressive generation.
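To make the idea of an expert supervising the learner along its own trajectories concrete, below is a minimal, hypothetical sketch in the style of DAgger (one well-known imitation learning algorithm; the tutorial covers a broader suite). The toy vocabulary, the naive expert_action oracle, and the count-based CountPolicy are illustrative placeholders, not material from the tutorial: the learner rolls out its own policy, the (suboptimal) expert labels every state it visits, and the policy is retrained on the aggregated data.

```python
import random

# Toy DAgger-style imitation learning for token-by-token generation.
# Everything here (vocabulary, expert, policy) is an illustrative stand-in.

VOCAB = list("abc") + ["<eos>"]
REFERENCES = ["abcabc", "aabbcc", "cba"]


def expert_action(prefix, reference):
    """Suboptimal 'expert': suggest the reference token at the current
    position of the learner's prefix, or <eos> once the reference is spent."""
    return reference[len(prefix)] if len(prefix) < len(reference) else "<eos>"


class CountPolicy:
    """Stand-in for a neural generation policy: predict the action most often
    demonstrated by the expert after a given previous token."""

    def __init__(self):
        self.counts = {}  # (prev_token, action) -> count

    def train(self, dataset):
        self.counts.clear()
        for prev, action in dataset:
            self.counts[(prev, action)] = self.counts.get((prev, action), 0) + 1

    def act(self, prev):
        scored = [(self.counts.get((prev, a), 0), a) for a in VOCAB]
        count, action = max(scored)
        return action if count > 0 else random.choice(VOCAB)


def collect(policy, reference, max_len=10):
    """Roll out the *learner's* policy and have the expert label every state
    it visits: the data-aggregation step of a DAgger-style algorithm."""
    data, prefix, prev = [], "", "<bos>"
    for _ in range(max_len):
        data.append((prev, expert_action(prefix, reference)))
        nxt = policy.act(prev)  # the learner's own action drives the trajectory
        if nxt == "<eos>":
            break
        prefix, prev = prefix + nxt, nxt
    return data


policy, dataset = CountPolicy(), []
for _ in range(5):  # a few imitation learning iterations
    for ref in REFERENCES:
        dataset.extend(collect(policy, ref))  # aggregate expert-labeled states
    policy.train(dataset)                     # retrain on the aggregated data

# Generate greedily with the learned policy.
prev, output = "<bos>", []
while len(output) < 10:
    nxt = policy.act(prev)
    if nxt == "<eos>":
        break
    output.append(nxt)
    prev = nxt
print("sample generation:", "".join(output))
```

Because the expert is queried on states the learner actually reaches rather than only on reference prefixes, the same loop structure accommodates generation orders other than strict left-to-right, which is the direction the tutorial explores for natural language generation.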