Workshop

Workshop on robustness of zero/few-shot learning in foundation models (R0-FoMo)

Ananth Balashankar · Saurabh Garg · Jindong Gu · Amrith Setlur · Yao Qin · Aditi Raghunathan · Ahmad Beirami

La Nouvelle Orleans Ballroom A+B (level 2)

Fri 15 Dec, 6:50 a.m. PST

Recent advances in the capabilities of large foundation models have been catalyzed by repurposing pretrained models for domain-specific use cases through few-shot learning methods such as prompt-tuning and in-context learning, and through zero-shot learning based on task descriptions. Given a few labeled examples that outline a new task [T5, GPT2, T0, DALL-E, CLIP], these large foundation models have demonstrably improved upon previous few-shot learning benchmarks [T-few, LAION]. We are closer than ever to learning from very few examples, and recent works [Frozen, Flamingo] have proposed methods that apply large language and vision transformer models directly to these few examples, instead of relying on human annotation to build large fine-tuning datasets. Lessons learned from past work in counterfactual reasoning, domain adaptation, meta-learning, continual learning, and adversarial training must be revisited with a new lens: improving the robustness of few-shot learning methods, and of learning without supervision (i.e., from unlabeled data), in ways that scale to multiple tasks safely and responsibly.

Beyond few-shot learning with labeled examples, there is also significant potential in harnessing unlabeled data. When labeled and unlabeled data come from the same distribution, semi-supervised learning methods can be adapted to leverage large foundation models, further boosting performance over purely few-shot algorithms. Similar ideas need to be explored for unsupervised domain adaptation, to improve the robustness of fine-tuned models to distribution shifts when the unlabeled data distribution is much broader than the distribution from which the labeled examples were collected.
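The abstract contrasts zero-shot prompting (a bare task description) with few-shot in-context learning (a handful of labeled examples placed in the prompt, with no weight updates). As a minimal sketch of that distinction, not part of the workshop materials, the snippet below uses the Hugging Face transformers pipeline with GPT-2 as a small stand-in for a larger foundation model; the review texts and labels are illustrative assumptions.

```python
# Minimal sketch: zero-shot vs. few-shot (in-context) prompting.
# GPT-2 is used only as a lightweight stand-in; a larger foundation
# model would be needed for reliable behavior on this task.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Zero-shot: the task is specified only by a natural-language description.
zero_shot_prompt = (
    "Classify the sentiment of this review as positive or negative.\n"
    "Review: The movie was a delight.\n"
    "Sentiment:"
)

# Few-shot in-context learning: a few labeled examples precede the query;
# the model's parameters are never updated.
few_shot_prompt = (
    "Review: I loved every minute.\nSentiment: positive\n\n"
    "Review: A tedious, joyless slog.\nSentiment: negative\n\n"
    "Review: The movie was a delight.\nSentiment:"
)

for prompt in (zero_shot_prompt, few_shot_prompt):
    out = generator(prompt, max_new_tokens=3, do_sample=False)
    # Print only the model's continuation, i.e., the predicted label.
    print(out[0]["generated_text"][len(prompt):].strip())
```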
