Spotlight
Delta-encoder: an effective sample synthesis method for few-shot object recognition
Eli Schwartz · Leonid Karlinsky · Joseph Shtok · Sivan Harary · Mattias Marder · Abhishek Kumar · Rogerio Feris · Raja Giryes · Alex Bronstein

Wed Dec 05 06:50 AM -- 06:55 AM (PST) @ Room 220 E

Learning to classify new categories based on just one or a few examples is a long-standing challenge in modern computer vision. In this work, we propose a simple yet effective method for few-shot (and one-shot) object recognition. Our approach is based on a modified auto-encoder, denoted the delta-encoder, that learns to synthesize new samples for an unseen category from just a few examples of it. The synthesized samples are then used to train a classifier. The proposed approach learns both to extract transferable intra-class deformations, or "deltas", between same-class pairs of training examples, and to apply those deltas to the few provided examples of a novel class (unseen during training) in order to efficiently synthesize samples from that new class. The proposed method improves the state of the art in one-shot object recognition and performs comparably in the few-shot case.
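The abstract's core idea can be illustrated with a highly simplified sketch. Note this is not the paper's method (the delta-encoder is a learned, non-linear auto-encoder operating on pre-trained features); the sketch below replaces it with a linear analogue, where a "delta" is simply the feature-space difference between two random same-class examples, transferred onto a single novel-class anchor. All names and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature space: two "seen" classes with some within-class spread.
seen_a = rng.normal(loc=0.0, scale=1.0, size=(20, 8))
seen_b = rng.normal(loc=5.0, scale=1.0, size=(20, 8))

def intra_class_deltas(samples, n_pairs, rng):
    """Sample 'deltas' as differences between random same-class pairs.

    This stands in for the encoder, which in the actual paper learns a
    low-dimensional representation of such intra-class deformations.
    """
    i = rng.integers(0, len(samples), size=n_pairs)
    j = rng.integers(0, len(samples), size=n_pairs)
    return samples[i] - samples[j]

# A single example from a novel class (unseen while collecting deltas).
novel_anchor = rng.normal(loc=-4.0, scale=1.0, size=(8,))

# Synthesize: transfer seen-class deformations onto the novel anchor
# (the paper's decoder does this with a learned, non-linear mapping).
deltas = np.vstack([
    intra_class_deltas(seen_a, 50, rng),
    intra_class_deltas(seen_b, 50, rng),
])
synthesized = novel_anchor + deltas  # 100 synthetic novel-class samples

print(synthesized.shape)
```

The synthesized samples inherit the seen classes' within-class variability while staying centered on the novel anchor, and could then be fed to any off-the-shelf classifier, mirroring the train-a-classifier-on-synthesized-samples step described above.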

Author Information

Eli Schwartz (IBM-Research)
Leonid Karlinsky (IBM-Research)
Joseph Shtok (IBM-Research)
Sivan Harary (IBM-Research)
Mattias Marder (IBM-Research)
Abhishek Kumar (Google)
Rogerio Feris (IBM Research AI)
Raja Giryes (Tel Aviv University)
Alex Bronstein (Technion)
