Spotlight
Learning multiple visual domains with residual adapters
Sylvestre-Alvise Rebuffi · Hakan Bilen · Andrea Vedaldi

Wed Dec 06 05:20 PM -- 05:25 PM (PST) @ Hall A

There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on networks that predict the parameters of another network, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to simultaneously capture ten very different visual domains and measures their ability to recognize uniformly well across all of them.
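To make the idea of adapter residual modules concrete, below is a minimal sketch, assuming PyTorch and a ResNet-style backbone. The class names (ResidualAdapter, AdaptedConv), the use of a 1x1 convolution with batch normalization, and the per-domain module list are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Minimal sketch of a residual adapter: a small, domain-specific correction
# added on top of a shared convolution (assumed structure, not the paper's code).
import torch
import torch.nn as nn


class ResidualAdapter(nn.Module):
    """Domain-specific 1x1 convolution applied in residual form."""

    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        # Identity plus a small learned domain-specific correction.
        return x + self.bn(self.conv(x))


class AdaptedConv(nn.Module):
    """A shared 3x3 convolution followed by one adapter per visual domain."""

    def __init__(self, in_ch, out_ch, num_domains):
        super().__init__()
        self.shared = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.adapters = nn.ModuleList(
            [ResidualAdapter(out_ch) for _ in range(num_domains)]
        )

    def forward(self, x, domain):
        # Steer the shared representation to the requested domain on the fly.
        return self.adapters[domain](self.shared(x))


# Example usage: one layer shared across ten domains, evaluated on domain 3.
layer = AdaptedConv(64, 64, num_domains=10)
y = layer(torch.randn(1, 64, 32, 32), domain=3)
```

Because each adapter is only a 1x1 convolution, the domain-specific parameters are a small fraction of the whole layer, which is one way to obtain the high degree of parameter sharing the abstract describes.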

Author Information

Sylvestre-Alvise Rebuffi (University of Oxford)
Hakan Bilen (University of Edinburgh)
Andrea Vedaldi (Facebook AI Research and University of Oxford)
