Poster
in
Workshop: Compositional Learning: Perspectives, Methods, and Paths Forward
Can Models Learn Skill Composition from Examples?
Haoyu Zhao · Simran Kaur · Dingli Yu · Anirudh Goyal · Sanjeev Arora
Keywords: [ Large Language Model ] [ Skill Composition ]
Abstract:
As large language models (LLMs) become increasingly capable, their ability to exhibit *compositional generalization* of skills has garnered significant attention. Yu et al. (2023) recently introduced the SKILL-MIX evaluation, where models are tasked with composing a short paragraph demonstrating the use of a specified $k$-tuple of language skills. While small models struggled with even $k=3$, larger models like GPT-4 showed reasonable performance with $k=5$ and $6$. In this paper, we employ a setup akin to SKILL-MIX to evaluate the capacity of smaller models to learn compositional generalization from examples. Utilizing a diverse set of language skills---including rhetorical, literary, reasoning, and theory of mind---GPT-4 was used to generate text samples that exhibit random subsets of $k$ skills. Subsequent fine-tuning of 7B and 13B parameter models on these combined-skill texts, for increasing values of $k$, revealed the following findings: 1) Training on combinations of $k=2$ and $3$ skills results in noticeable improvements in the ability to compose texts with $k=4$ and $5$ skills, despite models never having seen such examples during training. 2) When skill categories are split into training and held-out groups, models significantly improve at composing texts with held-out skills despite having only seen training skills during fine-tuning, illustrating the efficacy of the training approach even with previously unseen skills.
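As a rough illustration of the data-generation step described in the abstract, here is a minimal sketch of sampling a random $k$-tuple of skills and assembling a SKILL-MIX-style generation prompt. The skill names, topic, and prompt wording below are placeholders for illustration, not the authors' actual pipeline:

```python
import random

# Hypothetical skill pool; the paper draws from rhetorical, literary,
# reasoning, and theory-of-mind categories (the entries here are illustrative).
SKILLS = {
    "rhetorical": ["metaphor", "hyperbole", "rhetorical question"],
    "literary": ["alliteration", "foreshadowing"],
    "reasoning": ["modus ponens", "proof by contradiction"],
    "theory of mind": ["false belief", "deception detection"],
}

def sample_skill_tuple(k: int, rng: random.Random) -> list[str]:
    """Sample a random k-tuple of skills from the flattened pool."""
    pool = [s for skills in SKILLS.values() for s in skills]
    return rng.sample(pool, k)

def build_generation_prompt(skills: list[str], topic: str) -> str:
    """Assemble a prompt asking for a short paragraph that
    demonstrates every sampled skill on the given topic."""
    skill_list = ", ".join(skills)
    return (
        f"Write a short paragraph about {topic} that naturally "
        f"demonstrates all of the following language skills: {skill_list}. "
        "Each skill should be clearly identifiable in the text."
    )

rng = random.Random(0)
for k in (2, 3):  # fine-tuning data uses k=2 and 3; evaluation probes k=4 and 5
    prompt = build_generation_prompt(sample_skill_tuple(k, rng), "gardening")
    print(prompt)  # such a prompt would be sent to GPT-4 to produce one training text
```

In the paper's setup, the resulting GPT-4 outputs serve as fine-tuning data for the 7B and 13B parameter models, which are then evaluated on larger values of $k$ than they were trained on.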