

Poster in Workshop: Meta-Learning

Similarity of classification tasks

Cuong C Nguyen


Abstract:

Recent advances in meta-learning have led to remarkable performance on several benchmarks. Such success depends not only on the meta-learning algorithm, but also on the similarity between training and testing tasks. However, task similarity is often ignored when evaluating meta-learning methods, potentially biasing the classification results on the testing tasks. For instance, recent studies have found a large variance in classification results across testing tasks, suggesting that not all testing tasks are equally related to the training tasks. This motivates the need to analyse task similarity in order to optimise and better understand the performance of meta-learning. Despite some successes in investigating task similarity, most studies in the literature rely on task-specific models or on external models pre-trained on large data sets. We therefore propose a generative approach based on a variant of Latent Dirichlet Allocation to model classification tasks without depending on any particular model or external pre-trained network. The proposed modelling approach represents any classification task in a latent "topic" space, so that we can analyse task similarity or select the tasks most similar to a novel task to facilitate its meta-learning. We demonstrate that the proposed method provides an insightful evaluation of meta-learning algorithms on two few-shot classification benchmarks. We also show that the proposed task-selection strategy produces more accurate classification results on a new testing task than a strategy that selects training tasks at random.
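To make the idea concrete, below is a minimal Python sketch of topic-based task similarity. It assumes each task has already been summarised as a bag-of-"visual-words" count vector (a hypothetical preprocessing step; the names, shapes, and random data are illustrative), and it uses scikit-learn's standard LDA as a stand-in for the paper's LDA variant. Tasks are then compared via the Hellinger distance between their inferred topic mixtures.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical preprocessing: each classification task is summarised as a
# count vector over a quantised feature vocabulary ("visual words").
rng = np.random.default_rng(0)
n_tasks, vocab_size, n_topics = 20, 500, 8
task_counts = rng.poisson(lam=2.0, size=(n_tasks, vocab_size))

# Fit vanilla LDA (standing in for the paper's LDA variant); each row of
# task_topics is a task's distribution over latent topics.
lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
task_topics = lda.fit_transform(task_counts)

def hellinger(p, q):
    """Hellinger distance between two topic distributions (0 = identical)."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Task selection: given a novel task, rank training tasks by similarity
# in topic space and keep the closest few for meta-learning.
novel = task_topics[0]
dists = np.array([hellinger(novel, t) for t in task_topics[1:]])
most_similar = np.argsort(dists)[:5] + 1  # indices of the 5 closest tasks
print("closest training tasks:", most_similar)
```

Because every task lives in the same low-dimensional topic simplex, this comparison needs no task-specific model or external pre-trained network, which matches the motivation stated in the abstract; the specific distance (Hellinger here) is one reasonable choice among several.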
