Poster
Grad2Task: Improved Few-shot Text Classification Using Gradients for Task Representation
Jixuan Wang · Kuan-Chieh Wang · Frank Rudzicz · Michael Brudno

Tue Dec 07 08:30 AM -- 10:00 AM (PST) @ Virtual

Large pretrained language models (LMs) like BERT have improved performance on many disparate natural language processing (NLP) tasks. However, fine-tuning such models requires a large number of training examples for each target task. Simultaneously, many realistic NLP problems are "few-shot", without a sufficiently large training set. In this work, we propose a novel conditional neural process-based approach for few-shot text classification that learns to transfer from other diverse tasks with rich annotation. Our key idea is to represent each task using gradient information from a base model and to train an adaptation network that modulates a text classifier conditioned on the task representation. While previous task-aware few-shot learners represent tasks by input encoding, our novel task representation is more powerful, as the gradient captures input-output relationships of a task. Experimental results show that our approach outperforms traditional fine-tuning, sequential transfer learning, and state-of-the-art meta-learning approaches on a collection of diverse few-shot tasks. We further conduct analyses and ablations to justify our design choices.
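The core mechanism described in the abstract can be illustrated with a short sketch. The PyTorch snippet below is a minimal, hypothetical rendering of the idea, not the authors' implementation: a task is encoded as the gradient of the support-set loss with respect to a base model's parameters, and an adaptation network maps that gradient to scale and shift parameters (FiLM-style) that modulate the classifier's hidden features. The module names, dimensions, toy encoder, and the simple linear adaptation network are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BaseEncoder(nn.Module):
    """Stand-in for a pretrained text encoder such as BERT (illustrative)."""
    def __init__(self, in_dim=32, hid_dim=64, n_classes=2):
        super().__init__()
        self.body = nn.Linear(in_dim, hid_dim)
        self.head = nn.Linear(hid_dim, n_classes)

    def forward(self, x, scale=None, shift=None):
        h = torch.relu(self.body(x))
        if scale is not None:  # task-conditioned modulation of hidden features
            h = scale * h + shift
        return self.head(h)

def grad_task_repr(model, x_sup, y_sup):
    """Represent a task by the flattened gradient of the support-set loss
    w.r.t. the base model's parameters (the paper's key idea, simplified)."""
    loss = F.cross_entropy(model(x_sup), y_sup)
    grads = torch.autograd.grad(loss, tuple(model.parameters()))
    return torch.cat([g.flatten() for g in grads]).detach()

model = BaseEncoder()
grad_dim = sum(p.numel() for p in model.parameters())
adapt = nn.Linear(grad_dim, 2 * 64)  # gradient -> (scale, shift); assumed form

# Toy support/query data for one few-shot episode.
x_sup, y_sup = torch.randn(8, 32), torch.randint(0, 2, (8,))
x_qry, y_qry = torch.randn(8, 32), torch.randint(0, 2, (8,))

task = grad_task_repr(model, x_sup, y_sup)
scale, shift = adapt(task).chunk(2, dim=-1)

# Classify query examples with the task-modulated model; in meta-training,
# the query loss would update the adaptation network (and possibly the base).
query_loss = F.cross_entropy(model(x_qry, scale, shift), y_qry)
query_loss.backward()
```

In this sketch the gradient is flattened and fed to a single linear layer for brevity; the actual adaptation network and the choice of which parameters' gradients to use follow the paper's design, which this example does not reproduce.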

Author Information

Jixuan Wang (University of Toronto)
Kuan-Chieh Wang (University of Toronto)
Frank Rudzicz (University of Toronto)
Michael Brudno (University of Toronto)
