Contextual Squeeze-and-Excitation
Massimiliano Patacchiola · John Bronskill · Aliaksandra Shysheya · Katja Hofmann · Sebastian Nowozin · Richard Turner
Event URL: https://openreview.net/forum?id=0KTEHivEy1

Several applications require effective knowledge transfer across tasks in the low-data regime. For instance, in personalization a pretrained system is adapted by learning on small amounts of labeled data belonging to a specific user (the context). This setting requires high accuracy under low computational complexity, meaning a low memory footprint in terms of parameter storage and adaptation cost. Meta-learning methods based on Feature-wise Linear Modulation (FiLM) generators satisfy these constraints, as they can adapt a backbone without expensive fine-tuning. However, there has been limited research on viable alternatives to FiLM generators. In this paper we focus on this area of research and propose a new adaptive block called Contextual Squeeze-and-Excitation (CaSE). CaSE is more efficient than FiLM generators for a variety of reasons: it does not require a separate set encoder, has fewer learnable parameters, and only uses a scale vector (no shift) to modulate activations. We empirically show that CaSE outperforms FiLM generators in terms of parameter efficiency (a 75% reduction in the number of adaptation parameters) and classification accuracy (a 1.5% average improvement on the 26 datasets of the VTAB+MD benchmark).
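The abstract describes CaSE as a squeeze-and-excitation-style block whose channel-wise scale vector is conditioned on the context set rather than on the current input, with no additive shift. The snippet below is a minimal sketch of that idea in PyTorch, based only on the abstract: the class name, the reduction ratio, and the pooling scheme are assumptions for illustration, not the authors' reference implementation.

```python
# Minimal sketch of a CaSE-style adaptive block, assuming context-set pooling
# followed by a small MLP that emits a per-channel scale (no shift).
import torch
import torch.nn as nn


class CaSEBlock(nn.Module):
    """Scales backbone channel activations with a context-conditioned vector."""

    def __init__(self, channels: int, reduction: int = 16):  # reduction ratio is an assumption
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # scale vector only; no additive shift term
        )
        self.gamma = None  # cached scale vector produced during adaptation

    def adapt(self, context_features: torch.Tensor) -> None:
        # context_features: (N, C, H, W) activations for the context (support) set.
        # "Squeeze": average over spatial dimensions and over the whole context set.
        pooled = context_features.mean(dim=(0, 2, 3))  # (C,)
        self.gamma = self.mlp(pooled)                  # (C,)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) target activations; reuse the cached context scale.
        if self.gamma is None:
            return x  # unadapted: behave as the identity
        return x * self.gamma.view(1, -1, 1, 1)
```

In this reading, a block like this would sit after each stage of a frozen backbone: `adapt` is run once per task on the context-set activations, and `forward` then reuses the cached scale for every query image, which is what keeps the adaptation cost low relative to fine-tuning.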

Author Information

Massimiliano Patacchiola (University of Cambridge)

Massimiliano (Max) Patacchiola is a postdoctoral researcher at the University of Cambridge (Machine Learning Group) working under the supervision of Prof. Richard Turner in collaboration with Microsoft Research. Before that, he was a postdoctoral researcher at the University of Edinburgh and an intern at Snapchat. Max is interested in meta-learning, few-shot learning, and reinforcement learning.

John Bronskill (University of Cambridge)
Aliaksandra Shysheya (University of Cambridge)
Katja Hofmann (Microsoft Research)

Dr. Katja Hofmann is a Principal Researcher at the [Game Intelligence](http://aka.ms/gameintelligence/) group at [Microsoft Research Cambridge, UK](https://www.microsoft.com/en-us/research/lab/microsoft-research-cambridge/). There, she leads a research team that focuses on reinforcement learning with applications in modern video games. She and her team strongly believe that modern video games will drive a transformation of how we interact with AI technology. One of the projects developed by her team is [Project Malmo](https://www.microsoft.com/en-us/research/project/project-malmo/), which uses the popular game Minecraft as an experimentation platform for developing intelligent technology. Katja's long-term goal is to develop AI systems that learn to collaborate with people, to empower their users and help solve complex real-world problems. Before joining Microsoft Research, Katja completed her PhD in Computer Science as part of the [ILPS](https://ilps.science.uva.nl/) group at the [University of Amsterdam](https://www.uva.nl/en). She worked with Maarten de Rijke and Shimon Whiteson on interactive machine learning algorithms for search engines.

Sebastian Nowozin (DeepMind)
Richard Turner (University of Cambridge)
