Bilevel Optimization to Learn Training Distributions for Language Modeling under Domain Shift
David Grangier · Pierre Ablin · Awni Hannun
Abstract
Language models trained on very large web corpora have become a centerpiece of modern language processing. In this paradigm, the large, heterogeneous training set rarely matches the distribution of the application domain. This work considers modifying the training distribution when one can observe a small sample of data reflecting the test conditions. We propose an algorithm based on a recent formulation of this problem as an online bilevel optimization problem. We show that this approach compares favorably with alternative strategies from the domain adaptation literature.
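A sketch of the kind of bilevel objective the abstract describes (the exact formulation is not stated here; the notation $\alpha$, $\theta$, $\ell$, $p_\alpha$, and $\mathcal{L}_{\mathrm{tgt}}$ below is assumed for illustration):

\[
\min_{\alpha}\; \mathcal{L}_{\mathrm{tgt}}\big(\theta^{\star}(\alpha)\big)
\qquad \text{subject to} \qquad
\theta^{\star}(\alpha) \in \arg\min_{\theta}\; \mathbb{E}_{x \sim p_{\alpha}}\big[\ell(\theta; x)\big],
\]

where $\alpha$ parameterizes a reweighted training distribution $p_\alpha$ over the large web corpus, the inner problem trains the language-model parameters $\theta$ under $p_\alpha$, and the outer problem adjusts $\alpha$ so that the trained model minimizes the loss $\mathcal{L}_{\mathrm{tgt}}$ on the small sample reflecting the test conditions. "Online" plausibly refers to interleaving stochastic updates of $\theta$ and $\alpha$, rather than solving the inner training problem to convergence for each candidate $\alpha$.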