Can you label less by using out-of-domain data? Active & Transfer Learning with Few-shot Instructions
Rafal Kocielnik · Sara Kangaslahti · Shrimai Prabhumoye · Meena Hari · Michael Alvarez · Anima Anandkumar
Event URL: https://openreview.net/forum?id=WGFDlUVOe2

Labeling social-media data for custom dimensions of toxicity and social bias is challenging and labor-intensive. Existing transfer and active learning approaches meant to reduce annotation effort require fine-tuning, which suffers from overfitting to noise and can cause domain shift with small sample sizes. In this work, we propose a novel Active Transfer Few-shot Instructions (ATF) approach which requires no fine-tuning. ATF leverages the internal linguistic knowledge of pre-trained language models (PLMs) to transfer information from existing pre-labeled datasets (source-domain task) with minimal labeling effort on unlabeled target data (target-domain task). We demonstrate that our strategy can yield positive transfer, achieving a mean AUC gain of 13.20% compared to no transfer with a large 22B-parameter PLM. We further show that the impact of transfer from the pre-labeled source-domain task decreases with more annotation effort on the target-domain task (a 26% drop in gain between 100 and 2000 annotated examples). Finally, we find that not all transfer scenarios yield a positive gain, which seems related to the PLM's initial performance on the target-domain task.
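As a rough illustration of the prompting setup the abstract describes, the sketch below shows how a few-shot instruction prompt might combine pre-labeled source-domain demonstrations with a handful of annotated target-domain examples, so that a frozen PLM can classify a new target example with no fine-tuning. This is not the authors' code: the function name, prompt layout, and label strings are all assumptions made for illustration.

```python
def build_atf_prompt(instruction, source_examples, target_examples, query):
    """Assemble a few-shot instruction prompt: a task instruction, then
    labeled demonstrations (source-domain first, then target-domain),
    then the unlabeled query for the frozen PLM to complete."""
    lines = [instruction, ""]
    for text, label in source_examples + target_examples:
        lines.append(f"Text: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Text: {query}")
    lines.append("Label:")  # the PLM's completion of this line is the prediction
    return "\n".join(lines)

# Hypothetical examples for illustration only.
prompt = build_atf_prompt(
    "Classify each text as toxic or not toxic.",
    source_examples=[("You are an idiot.", "toxic")],         # pre-labeled source domain
    target_examples=[("Great point, thanks!", "not toxic")],  # few annotated target examples
    query="Nobody asked for your opinion.",
)
print(prompt)
```

In this framing, the active learning component would choose *which* target examples to annotate and include as demonstrations, while transfer comes for free from the source-domain demonstrations already in the prompt.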

Author Information

Rafal Kocielnik (California Institute of Technology)
Sara Kangaslahti (California Institute of Technology)
Shrimai Prabhumoye (NVIDIA)
Meena Hari (California Institute of Technology)
Michael Alvarez
Anima Anandkumar (NVIDIA / Caltech)