Poster
Flamingo: a Visual Language Model for Few-Shot Learning
Jean-Baptiste Alayrac · Jeff Donahue · Pauline Luc · Antoine Miech · Iain Barr · Yana Hasson · Karel Lenc · Arthur Mensch · Katherine Millican · Malcolm Reynolds · Roman Ring · Eliza Rutherford · Serkan Cabi · Tengda Han · Zhitao Gong · Sina Samangooei · Marianne Monteiro · Jacob L Menick · Sebastian Borgeaud · Andy Brock · Aida Nematzadeh · Sahand Sharifzadeh · Mikołaj Bińkowski · Ricardo Barreira · Oriol Vinyals · Andrew Zisserman · Karén Simonyan

Wed Nov 30 09:00 AM -- 11:00 AM (PST) @ Hall J #215

Building models that can be rapidly adapted to novel tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo, a family of Visual Language Models (VLM) with this ability. We propose key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to their flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endow them with in-context few-shot learning capabilities. We perform a thorough evaluation of our models, exploring and measuring their ability to rapidly adapt to a variety of image and video tasks. These include open-ended tasks such as visual question-answering, where the model is prompted with a question which it has to answer, captioning tasks, which evaluate the ability to describe a scene or an event, and close-ended tasks such as multiple-choice visual question-answering. For tasks lying anywhere on this spectrum, a single Flamingo model can achieve a new state of the art with few-shot learning, simply by prompting the model with task-specific examples. On numerous benchmarks, Flamingo outperforms models fine-tuned on thousands of times more task-specific data.
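The abstract describes few-shot adaptation purely through prompting: the model is conditioned on a handful of interleaved image-text examples followed by a query, with no weight updates. Below is a minimal illustrative sketch (not from the paper) of what assembling such an interleaved few-shot prompt might look like; the ImageToken placeholder, the prompt format, and the commented-out vlm.generate call are hypothetical stand-ins, not Flamingo's actual interface.

    # Illustrative sketch only: the prompt format and model call below are
    # hypothetical stand-ins, not Flamingo's actual API.
    from dataclasses import dataclass
    from typing import List, Tuple, Union

    @dataclass
    class ImageToken:
        """Placeholder marking where an image is interleaved in the prompt."""
        path: str

    Prompt = List[Union[str, ImageToken]]

    def build_few_shot_prompt(examples: List[Tuple[str, str]], query_image: str) -> Prompt:
        """Interleave (image, target text) support examples, then append the query image."""
        prompt: Prompt = []
        for image_path, target_text in examples:
            prompt += [ImageToken(image_path), f" Output: {target_text}"]
        prompt += [ImageToken(query_image), " Output:"]
        return prompt

    # Usage: a few annotated examples define the task in-context; the model is
    # then asked to complete the output for the new image, with no fine-tuning.
    examples = [
        ("dog.jpg", "A dog catching a frisbee in a park."),
        ("cat.jpg", "A cat sleeping on a windowsill."),
    ]
    prompt = build_few_shot_prompt(examples, "query.jpg")
    # completion = vlm.generate(prompt)  # hypothetical call on a pretrained VLM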

Author Information

Jean-Baptiste Alayrac (DeepMind)
Jeff Donahue (DeepMind)
Pauline Luc (DeepMind)
Antoine Miech (DeepMind)
Iain Barr (DeepMind)
Yana Hasson (DeepMind)
Karel Lenc (DeepMind)
Arthur Mensch (DeepMind)
Katherine Millican (DeepMind)
Malcolm Reynolds (DeepMind)
Roman Ring (DeepMind)
Eliza Rutherford (University of Oxford)
Serkan Cabi (DeepMind)
Tengda Han (University of Oxford)
Zhitao Gong (Auburn University)
Sina Samangooei (Five)
Marianne Monteiro (Universidade Federal de Campina Grande)
Jacob L Menick (Google DeepMind)
Sebastian Borgeaud (DeepMind)
Andy Brock (DeepMind)
Aida Nematzadeh (DeepMind)
Sahand Sharifzadeh (DeepMind)

Sahand is a research scientist at DeepMind. He completed his Ph.D. at LMU Munich under Prof. Volker Tresp, focusing on developing a model of semantic memory and studying its role in perception. In particular, he studied the role of top-down processes and symbol grounding in computer vision.

Mikołaj Bińkowski (DeepMind S2)
Ricardo Barreira
Oriol Vinyals (DeepMind)

Oriol Vinyals is a Research Scientist at Google. He works in deep learning with the Google Brain team. Oriol holds a Ph.D. in EECS from the University of California, Berkeley, and a Master's degree from the University of California, San Diego. He is a recipient of the 2011 Microsoft Research PhD Fellowship. He was an early adopter of the new deep learning wave at Berkeley, and in his thesis he focused on non-convex optimization and recurrent neural networks. At Google Brain he continues working on his areas of interest, which include artificial intelligence, with particular emphasis on machine learning, language, and vision.

Andrew Zisserman (DeepMind & University of Oxford)
Karén Simonyan (Inflection AI)
