
TaylorGAN: Neighbor-Augmented Policy Update Towards Sample-Efficient Natural Language Generation

Chun-Hsing Lin, Siang-Ruei Wu, Hung-yi Lee, Yun-Nung (Vivian) Chen

Poster Session 3
December 8, 2020, 21:00–23:00 (UTC-08:00)
Abstract: Score function-based natural language generation (NLG) approaches such as REINFORCE, in general, suffer from low sample efficiency and training instability. This is mainly due to the non-differentiable nature of discrete-space sampling, which forces these methods to treat the discriminator as a black box and ignore its gradient information. To improve the sample efficiency and reduce the variance of REINFORCE, we propose a novel approach, TaylorGAN, which augments the gradient estimation by off-policy update and first-order Taylor expansion. This approach enables us to train NLG models from scratch with a smaller batch size and without maximum likelihood pre-training, and to outperform existing GAN-based methods on multiple metrics of quality and diversity.
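To make the idea concrete, below is a minimal sketch (in PyTorch) of the Taylor-expansion reward estimate described in the abstract. It is not the authors' released implementation; the names (taylor_augmented_rewards, policy_update_loss, embed, discriminator_score, sampled_ids, logits) are illustrative. The sketch approximates the discriminator reward of every vocabulary token from one sampled sentence via a first-order Taylor expansion around the sampled token embeddings, then uses those neighbor rewards in a simplified expectation-form policy update; the paper's actual estimator additionally involves an off-policy (importance-weighted) correction.

```python
# Minimal sketch of the Taylor-expansion reward estimate (assumed names, not the paper's code).
import torch
import torch.nn.functional as F

def taylor_augmented_rewards(embed, discriminator_score, sampled_ids):
    """Estimate a reward for every vocabulary token from a single sampled sentence.

    embed:               the discriminator's word embedding (nn.Embedding, weight shape (V, d))
    discriminator_score: callable mapping embeddings (B, T, d) -> sentence scores (B,)
    sampled_ids:         LongTensor (B, T) of tokens sampled from the generator
    """
    e = embed(sampled_ids).detach().requires_grad_(True)       # (B, T, d)
    r = discriminator_score(e)                                  # (B,)
    grad, = torch.autograd.grad(r.sum(), e)                     # (B, T, d)

    # First-order Taylor expansion around each sampled token's embedding:
    #   R(y') ~= R(y) + grad . (e(y') - e(y))   for every candidate token y'
    all_e = embed.weight                                        # (V, d)
    neighbor_r = (r[:, None, None]
                  + torch.einsum('btd,vd->btv', grad, all_e)    # grad . e(y')
                  - (grad * e).sum(-1, keepdim=True))           # grad . e(y)
    return neighbor_r.detach()                                  # (B, T, V)

def policy_update_loss(logits, neighbor_r):
    """Simplified policy update: weight every token's log-probability by its
    Taylor-estimated advantage, using the expected reward under the policy as baseline."""
    log_p = F.log_softmax(logits, dim=-1)                       # (B, T, V)
    probs = log_p.exp().detach()
    baseline = (probs * neighbor_r).sum(-1, keepdim=True)
    advantage = neighbor_r - baseline
    return -(probs * advantage * log_p).sum(-1).mean()
```

Because a single discriminator evaluation yields a reward estimate for the whole vocabulary at every position, the update averages over all candidate tokens rather than relying only on the one token that happened to be sampled, which is where the variance reduction and improved sample efficiency come from.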
