Adaptive Cross-Modal Few-shot Learning
Chen Xing · Negar Rostamzadeh · Boris Oreshkin · Pedro O. Pinheiro

Wed Dec 11 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #18

Metric-based meta-learning techniques have successfully been applied to few-shot classification problems. In this paper, we propose to leverage cross-modal information to enhance metric-based few-shot learning methods. Visual and semantic feature spaces have different structures by definition. For certain concepts, visual features might be richer and more discriminative than text ones; for others, the inverse might be true. Moreover, when the support from visual information is limited in image classification, semantic representations (learned from unsupervised text corpora) can provide strong prior knowledge and context to help learning. Based on these two intuitions, we propose a mechanism that can adaptively combine information from both modalities according to the new image categories to be learned. Through a series of experiments, we show that with this adaptive combination of the two modalities, our model outperforms current uni-modality few-shot learning methods and modality-alignment methods by a large margin on all benchmarks and few-shot scenarios tested. Experiments also show that our model can effectively adjust its focus between the two modalities. The improvement in performance is particularly large when the number of shots is very small.
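The adaptive combination described above can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the authors' implementation: it assumes the per-class visual prototype and the semantic (word) embedding live in the same feature space, and that a scalar mixing coefficient in (0, 1) is predicted from the semantic embedding via a small learned gate (here a single linear layer plus sigmoid), so the final class prototype is a convex combination of the two modalities.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_prototype(visual_proto, semantic_emb, w, b):
    """Convexly combine a visual prototype with a semantic embedding.

    The mixing coefficient `lam` is predicted from the semantic side
    (hypothetical gate: a single linear layer + sigmoid). With more
    support images, training would push `lam` toward the visual modality;
    with very few shots, the semantic prior can dominate.
    """
    lam = sigmoid(semantic_emb @ w + b)  # scalar gate in (0, 1)
    return lam * visual_proto + (1.0 - lam) * semantic_emb

# Toy example: 4-dim features for a single class.
rng = np.random.default_rng(0)
visual_proto = rng.normal(size=4)   # mean of support-image embeddings
semantic_emb = rng.normal(size=4)   # word embedding projected to same space
w, b = rng.normal(size=4), 0.0      # gate parameters (learned in practice)
proto = adaptive_prototype(visual_proto, semantic_emb, w, b)
```

Because the gate output is a convex weight, each coordinate of the combined prototype lies between the corresponding visual and semantic coordinates, so the model can interpolate smoothly between relying on images and relying on text.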

Author Information

Chen Xing (Montreal Institute of Learning Algorithms)
Negar Rostamzadeh (Element AI)
Boris Oreshkin (Element AI)
Pedro O. Pinheiro (Element AI)