
New Directions in Transfer and Multi-Task: Learning Across Domains and Tasks
Urun Dogan · Marius Kloft · Tatiana Tommasi · Francesco Orabona · Massimiliano Pontil · Sinno Jialin Pan · Shai Ben-David · Arthur Gretton · Fei Sha · Marco Signoretto · Rajhans Samdani · Yun-Qian Miao · Mohammad Gheshlaghi azar · Ruth Urner · Christoph Lampert · Jonathan How

Tue Dec 10 07:30 AM -- 06:30 PM (PST) @ Harrah's Fallen+Marla
Event URL: https://sites.google.com/site/learningacross/

The main objective of the workshop is to document and discuss the recent rise of new research questions on the general problem of learning across domains and tasks. This includes the main topics of transfer [1,2,3] and multi-task learning [4], together with several related variants such as domain adaptation [5,6] and dataset bias [7].

In recent years there has been a surge of activity in these areas, much of it driven by practical applications such as object categorization. Different solutions have been studied for these topics, mostly separately and without a joint theoretical framework. On the other hand, most of the existing theoretical formulations model regimes that are rarely used in practice (e.g. adaptive methods that store all the source samples).

The workshop will focus on closing this gap by providing an opportunity for theoreticians and practitioners to get together in one place, to share and debate over current theories and empirical results. The goal is to promote a fruitful exchange of ideas and methods between the different communities, leading to a global advancement of the field.

Transfer Learning - Transfer Learning (TL) refers to the problem of retaining and applying the knowledge available from one or more source tasks to efficiently develop a hypothesis for a new target task. The tasks may share the same label set (domain adaptation) or have different label sets (across-category transfer). Most of the effort has been devoted to binary classification, while the most interesting practical transfer problems are intrinsically multi-class and the number of classes can often grow over time. Hence, it is natural to ask:
- How can knowledge transfer across multi-class tasks be formalized, and what theoretical guarantees can be provided in this setting?
- Moreover, can interclass transfer and incremental class learning be properly integrated?
- Can learning guarantees be provided when the adaptation relies only on pre-trained source hypotheses, without explicit access to the source samples, as is often the case in real-world scenarios?
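The last question can be made concrete with a biased-regularization sketch in the spirit of hypothesis transfer learning [1]. The example below is a minimal, hypothetical illustration (the function name and toy data are invented, not taken from any of the cited papers), assuming a linear model and squared loss:

```python
import numpy as np

def biased_ridge(X, y, w_src, lam=1.0):
    """Ridge regression biased toward a pre-trained source hypothesis.

    Minimizes ||X w - y||^2 + lam * ||w - w_src||^2; the closed-form
    solution is w = (X^T X + lam I)^{-1} (X^T y + lam w_src).
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d),
                           X.T @ y + lam * w_src)

# Toy usage: with only a handful of target samples, a hypothesis from a
# related source task gives a better bias than the zero vector.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])         # target task generator
w_src = w_true + 0.05 * rng.normal(size=3)  # pre-trained source hypothesis
X = rng.normal(size=(5, 3))                 # few target samples
y = X @ w_true
w_transfer = biased_ridge(X, y, w_src)
w_scratch = biased_ridge(X, y, np.zeros(3))  # plain ridge, no transfer
```

Note that only `w_src` is needed at adaptation time; the source training samples never appear, which is exactly the regime the question above asks guarantees for.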

Multi-task Learning - Learning over multiple related tasks can outperform learning each task in isolation. This is the principal assertion of Multi-task Learning (MTL) and implies that the learning process may benefit from common information shared across the tasks. In the simplest case, the transfer process is symmetric and all the tasks are considered equally related and appropriate for joint training.
- What happens when this condition does not hold, e.g., how to avoid negative transfer?
- Moreover, can RKHS embeddings be adequately integrated into the learning process to estimate and compare the distributions underlying the multiple tasks?
- How may embedding probability distributions help learning from data clouds?
- Can recent methods, such as deep learning or multiple kernel learning, bring us a step closer to the complete automation of multi-task learning?
- How can notions from reinforcement learning such as source task selection be connected to notions from convex multi-task learning such as the task similarity matrix?
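The symmetric joint-training assumption mentioned above can be illustrated with a minimal sketch of mean-regularized multi-task learning, in which every task's weight vector is shrunk toward the shared task mean. This is a hypothetical example (the function name and toy data are invented), assuming linear models and squared loss:

```python
import numpy as np

def mtl_mean_regularized(tasks, lam=1.0, iters=20):
    """Alternating minimization for mean-regularized multi-task learning.

    Each task t minimizes ||X_t w_t - y_t||^2 + lam * ||w_t - w_bar||^2,
    where w_bar is the mean weight vector across tasks: all tasks are
    treated as equally related (the symmetric setting).
    """
    d = tasks[0][0].shape[1]
    W = np.zeros((len(tasks), d))
    for _ in range(iters):
        w_bar = W.mean(axis=0)  # shared information across tasks
        for t, (X, y) in enumerate(tasks):
            # Per-task closed-form update: ridge biased toward w_bar.
            W[t] = np.linalg.solve(X.T @ X + lam * np.eye(d),
                                   X.T @ y + lam * w_bar)
    return W

# Toy usage: four tasks generated by one common weight vector are
# jointly recovered through the shared mean.
rng = np.random.default_rng(1)
w_shared = np.array([1.0, -1.0, 2.0])
tasks = [(X, X @ w_shared)
         for X in (rng.normal(size=(20, 3)) for _ in range(4))]
W = mtl_mean_regularized(tasks)
```

When the tasks are not in fact equally related, this symmetric shrinkage can hurt performance, which is precisely the negative-transfer question raised above.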


[1] I. Kuzborskij and F. Orabona. Stability and Hypothesis Transfer Learning. ICML 2013.
[2] T. Tommasi, F. Orabona, B. Caputo. Safety in Numbers: Learning Categories from Few Examples with Multi Model Knowledge Transfer. CVPR 2010.
[3] U. Rückert, M. Kloft. Transfer Learning with Adaptive Regularizers. ECML 2011.
[4] A. Maurer, M. Pontil, B. Romera-Paredes. Sparse coding for multitask and transfer learning. ICML 2013.
[5] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, J. Wortman Vaughan. A theory of learning from different domains. Machine Learning 2010.
[6] K. Saenko, B. Kulis, M. Fritz, T. Darrell. Adapting Visual Category Models to New Domains. ECCV 2010.
[7] A. Torralba, A. Efros. Unbiased Look at Dataset Bias. CVPR 2011.

Author Information

Urun Dogan (Microsoft)
Marius Kloft (TU Kaiserslautern)
Tatiana Tommasi (KUL)
Francesco Orabona (Stony Brook University)
Massimiliano Pontil (IIT & UCL)
Sinno Jialin Pan (The Chinese University of Hong Kong)
Shai Ben-David (University of Waterloo)
Arthur Gretton (Google Deepmind / UCL)

Arthur Gretton is a Professor with the Gatsby Computational Neuroscience Unit at UCL. He received degrees in Physics and Systems Engineering from the Australian National University, and a PhD with Microsoft Research and the Signal Processing and Communications Laboratory at the University of Cambridge. He previously worked at the MPI for Biological Cybernetics, and at the Machine Learning Department, Carnegie Mellon University. Arthur's recent research interests in machine learning include the design and training of generative models, both implicit (e.g. GANs) and explicit (high/infinite dimensional exponential family models), nonparametric hypothesis testing, and kernel methods. He was an associate editor at IEEE Transactions on Pattern Analysis and Machine Intelligence from 2009 to 2013, and has been an Action Editor for JMLR since April 2013; he served as an Area Chair for NeurIPS in 2008 and 2009, a Senior Area Chair for NeurIPS in 2018, an Area Chair for ICML in 2011 and 2012, and a member of the COLT Program Committee in 2013. Arthur was program chair for AISTATS in 2016 (with Christian Robert), tutorials chair for ICML 2018 (with Ruslan Salakhutdinov), workshops chair for ICML 2019 (with Honglak Lee), program chair for the DALI workshop in 2019 (with Krikamol Muandet and Shakir Mohamed), and co-organiser of the Machine Learning Summer School 2019 in London (with Marc Deisenroth).

Fei Sha (University of Southern California (USC))
Marco Signoretto (KULeuven)
Rajhans Samdani (Google Inc.)
Yun-Qian Miao (University of Waterloo)
Mohammad Gheshlaghi Azar (CMU)
Ruth Urner (York University)
Christoph Lampert (Institute of Science and Technology Austria (ISTA))

Christoph Lampert received the PhD degree in mathematics from the University of Bonn in 2003. In 2010 he joined the Institute of Science and Technology Austria (ISTA), first as an Assistant Professor and, since 2015, as a Professor. There, he leads the research group for Machine Learning and Computer Vision, and since 2019 he has also been the head of ISTA's ELLIS unit.

Jonathan How (MIT)
