
Learning feed-forward one-shot learners
Luca Bertinetto · João Henriques · Jack Valmadre · Philip Torr · Andrea Vedaldi

Wed Dec 07 09:00 AM -- 12:30 PM (PST) @ Area 5+6+7+8 #62

One-shot learning is usually tackled by using generative models or discriminative embeddings. Discriminative methods based on deep learning, which are very effective in other learning scenarios, are ill-suited for one-shot learning as they need large amounts of training data. In this paper, we propose a method to learn the parameters of a deep model in one shot. We construct the learner as a second deep network, called a learnet, which predicts the parameters of a pupil network from a single exemplar. In this manner we obtain an efficient feed-forward one-shot learner, trained end-to-end by minimizing a one-shot classification objective in a learning-to-learn formulation. In order to make the construction feasible, we propose a number of factorizations of the parameters of the pupil network. We demonstrate encouraging results by learning characters from single exemplars in Omniglot, and by tracking visual objects from a single initial exemplar in the Visual Object Tracking benchmark.
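The core idea — a learnet that maps one exemplar to the parameters of a pupil layer, made tractable by factorizing those parameters — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the single linear-plus-tanh learnet, the dimensionality, and all variable names here are illustrative assumptions; the paper uses deep networks trained end-to-end. The factorization shown (predicting only a diagonal factor between two fixed projections) follows the form described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # feature dimensionality (illustrative choice)

# Projection matrices shared across all exemplars; in the paper these
# would be learned end-to-end together with the learnet.
M_in = rng.standard_normal((d, d)) / np.sqrt(d)
M_out = rng.standard_normal((d, d)) / np.sqrt(d)

# Toy learnet: a single linear map + nonlinearity stands in for the
# deep network that predicts pupil parameters from the exemplar.
W_learnet = rng.standard_normal((d, d)) / np.sqrt(d)

def learnet(z):
    # Predict the exemplar-specific diagonal factor of the pupil layer.
    return np.tanh(W_learnet @ z)

def pupil_forward(x, z):
    # Factorized pupil layer: W(z) = M_out @ diag(learnet(z)) @ M_in,
    # so only d parameters (the diagonal) depend on the exemplar,
    # instead of the full d*d weight matrix.
    return M_out @ (learnet(z) * (M_in @ x))

z = rng.standard_normal(d)  # single exemplar
x = rng.standard_normal(d)  # query input
y = pupil_forward(x, z)
print(y.shape)
```

The point of the factorization is parameter efficiency: the learnet emits a d-vector rather than a d-by-d matrix, which keeps the construction feasible when the pupil layers are large.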

Author Information

Luca Bertinetto (University of Oxford)

Luca Bertinetto is a PhD candidate in the Torr Vision Group at the University of Oxford. The main focus of his doctorate is the problem of agnostic object tracking, which he likes to tackle using simple and effective approaches. Before getting lost among the spires of Oxford, he obtained a joint MSc in Computer Engineering between the Polytechnic University of Turin and Télécom ParisTech. He has published at CVPR and NIPS and reviewed for PAMI.

João Henriques (University of Oxford)
Jack Valmadre (University of Oxford)
Philip Torr (University of Oxford)
Andrea Vedaldi (Facebook AI Research and University of Oxford)
