Spotlight
Conditional Meta-Learning of Linear Representations
Giulia Denevi · Massimiliano Pontil · Carlo Ciliberto

Wed Dec 07 09:00 AM -- 11:00 AM (PST)

Standard meta-learning for representation learning aims to find a common representation to be shared across multiple tasks. The effectiveness of these methods is often limited when the nuances of the tasks’ distribution cannot be captured by a single representation. In this work we overcome this issue by inferring a conditioning function that maps the tasks’ side information (such as the tasks’ training dataset itself) into a representation tailored to the task at hand. We study environments in which our conditional strategy outperforms standard meta-learning, such as those in which tasks can be organized in separate clusters according to the representation they share. We then propose a meta-algorithm capable of leveraging this advantage in practice. In the unconditional setting, our method yields a new estimator enjoying faster learning rates and requiring fewer hyper-parameters to tune than current state-of-the-art methods. Our results are supported by preliminary experiments.
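
The sketch below is only an illustration of the idea described in the abstract, not the authors' estimator. In a toy environment where tasks fall into two clusters, each sharing its own linear representation, it compares a single shared representation against a conditional one selected from the task's side information (here simplified to the cluster label; the paper conditions on the task's training dataset itself). The per-cluster conditioning rule, the SVD-based fitting, and all names are assumptions made for this example.

# Illustrative sketch only -- not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)
d, k = 20, 3  # ambient dimension / representation dimension

# Two clusters of tasks; each cluster shares its own k-dimensional representation.
B_true = [np.linalg.qr(rng.standard_normal((d, k)))[0] for _ in range(2)]

def sample_task(cluster, n=30):
    w = rng.standard_normal(k)                      # task-specific weights
    X = rng.standard_normal((n, d))
    y = X @ B_true[cluster] @ w + 0.1 * rng.standard_normal(n)
    return X, y

def ridge(X, y, lam=0.1):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Meta-training tasks drawn from both clusters, each solved independently with ridge.
clusters = rng.integers(0, 2, size=40)
train_tasks = [(c, *sample_task(c)) for c in clusters]
W = np.stack([ridge(X, y) for _, X, y in train_tasks])

# Unconditional meta-learning: one representation fitted on all tasks
# (top-k left singular subspace of the stacked per-task solutions).
B_shared = np.linalg.svd(W.T, full_matrices=False)[0][:, :k]

# Conditional meta-learning: the side information (here the cluster label,
# standing in for the task's own training data) is mapped to a representation
# tailored to the task at hand.
def condition(cluster):
    Wc = np.stack([w for (c, _, _), w in zip(train_tasks, W) if c == cluster])
    return np.linalg.svd(Wc.T, full_matrices=False)[0][:, :k]

def eval_repr(B, cluster, n_train=10, n_test=200):
    X, y = sample_task(cluster, n_train + n_test)
    v = ridge(X[:n_train] @ B, y[:n_train])         # learn weights inside the representation
    return np.mean((X[n_train:] @ B @ v - y[n_train:]) ** 2)

c = 0
print("shared representation MSE:     ", eval_repr(B_shared, c))
print("conditional representation MSE:", eval_repr(condition(c), c))

On this toy data the conditional representation typically aligns better with the cluster's true subspace and attains lower few-shot test error, which is the intuition behind the clustered environments discussed in the abstract.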

Author Information

Giulia Denevi (Leonardo Labs)
Massimiliano Pontil (IIT & UCL)
Carlo Ciliberto (University College London)
