

Poster in Workshop: Meta-Learning

MAster of PuPpets: Model-Agnostic Meta-Learning via Pre-trained Parameters for Natural Language Generation

ChienFu Lin


Abstract:

Pre-trained Transformer-based language models have achieved enormous success in generating realistic natural language. However, how to adapt these models effectively to specific domains remains an open problem. On the other hand, Model-Agnostic Meta-Learning (MAML) has been an influential framework for few-shot learning, yet how to choose the initial parameters of MAML is still under-explored. In this paper, we fuse the information from the pre-training stage with meta-learning to learn how to adapt a pre-trained generative model to a new domain. In particular, we find that using the pre-trained parameters as the initial state of meta-learning helps the model adapt to new tasks efficiently and yields results competitive with the state of the art on evaluation metrics for the Persona dataset. Moreover, in few-shot experiments, we show that the proposed model converges significantly faster than naive transfer learning baselines.
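To illustrate the core idea of initializing MAML from pre-trained parameters, the following is a minimal first-order MAML sketch under simplified assumptions: a toy linear model and synthetic regression tasks stand in for the paper's pre-trained Transformer and dialogue-generation tasks, and the checkpoint path `pretrained.pt` is hypothetical. It is not the authors' implementation, only a sketch of the general mechanism.

```python
# Minimal first-order MAML sketch (assumptions: toy linear model,
# synthetic tasks; the paper's actual model is a pre-trained Transformer).
import copy
import torch
import torch.nn as nn

def inner_adapt(model, support_x, support_y, lr=1e-2, steps=3):
    """Clone the meta-model and take a few gradient steps on one task's support set."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(adapted(support_x), support_y).backward()
        opt.step()
    return adapted

def meta_train(model, tasks, meta_lr=1e-3, epochs=50):
    """First-order MAML outer loop: accumulate query-set gradients of adapted copies."""
    meta_opt = torch.optim.SGD(model.parameters(), lr=meta_lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        meta_opt.zero_grad()
        for support_x, support_y, query_x, query_y in tasks:
            adapted = inner_adapt(model, support_x, support_y)
            query_loss = loss_fn(adapted(query_x), query_y)
            grads = torch.autograd.grad(query_loss, adapted.parameters())
            # First-order approximation: apply the adapted model's gradients
            # directly to the meta-parameters.
            for p, g in zip(model.parameters(), grads):
                p.grad = g if p.grad is None else p.grad + g
        meta_opt.step()
    return model

# Key idea from the abstract: start meta-learning from pre-trained
# parameters instead of a random initialization.
model = nn.Linear(4, 1)
# model.load_state_dict(torch.load("pretrained.pt"))  # hypothetical checkpoint

# Synthetic tasks purely for illustration: each task is a random linear map.
tasks = []
for _ in range(8):
    w = torch.randn(4, 1)
    xs, xq = torch.randn(10, 4), torch.randn(10, 4)
    tasks.append((xs, xs @ w, xq, xq @ w))

meta_train(model, tasks)
```

In this sketch, swapping the random initialization for a loaded checkpoint is the only change needed to realize the "pre-trained parameters as meta-initialization" idea; the inner and outer loops are standard first-order MAML.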
