

Poster in Workshop: Meta-Learning

Uniform Priors for Meta-Learning

Samarth Sinha


Abstract:

Deep neural networks have shown great promise on a variety of downstream applications, but their ability to adapt and generalize to new data and tasks remains challenging. Yet the ability to perform few-shot adaptation to novel tasks is important for the scalability and deployment of machine learning models. It is therefore crucial to understand what makes for good, transferable features in deep networks that best allow for such adaptation. In this paper, we shed light on this by showing that the most transferable features have high uniformity in the embedding space, and we propose a uniformity regularization scheme that encourages better transfer and feature reuse for few-shot learning. We evaluate our regularization on few-shot meta-learning benchmarks and show that it consistently improves over baseline methods while also achieving state-of-the-art results on Meta-Dataset.
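The abstract does not specify the exact form of the uniformity regularizer. A minimal sketch is given below, assuming the common pairwise Gaussian-potential uniformity loss of Wang & Isola (2020), which penalizes embeddings that cluster together on the unit hypersphere; the function name `uniformity_loss`, the temperature `t`, and the weight `lam` are illustrative assumptions, not the authors' stated method.

```python
import torch
import torch.nn.functional as F

def uniformity_loss(features: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Pairwise Gaussian-potential uniformity loss (Wang & Isola, 2020).

    Lower values indicate embeddings spread more uniformly over the
    unit hypersphere. `features` has shape (batch, dim), batch >= 2.
    """
    z = F.normalize(features, dim=-1)           # project onto the unit sphere
    sq_dists = torch.pdist(z, p=2).pow(2)       # all pairwise squared distances
    return sq_dists.mul(-t).exp().mean().log()  # log of mean Gaussian potential

# Hypothetical usage inside a meta-learning training step:
#   total_loss = task_loss + lam * uniformity_loss(encoder(support_images))
```

Adding such a term to the task loss nudges the encoder toward uniformly distributed features, which is the property the paper argues correlates with transferability; the exact weighting and where in the meta-learning loop it is applied would follow the paper's own setup.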
