Workshop
New Directions in Transfer and Multi-Task: Learning Across Domains and Tasks
Urun Dogan · Marius Kloft · Tatiana Tommasi · Francesco Orabona · Massimiliano Pontil · Sinno Jialin Pan · Shai Ben-David · Arthur Gretton · Fei Sha · Marco Signoretto · Rajhans Samdani · Yun-Qian Miao · Mohammad Gheshlaghi Azar · Ruth Urner · Christoph Lampert · Jonathan How
Harrah's Fallen+Marla
Tue 10 Dec, 7:30 a.m. PST
The main objective of the workshop is to document and discuss the recent rise of new research questions on the general problem of learning across domains and tasks. This includes the main topics of transfer learning [1,2,3] and multi-task learning [4], together with several related variants such as domain adaptation [5,6] and dataset bias [7].
In recent years there has been a surge of activity in these areas, much of it driven by practical applications such as object categorization. Different solutions have been studied for these topics, mostly separately and without a joint theoretical framework. On the other hand, most of the existing theoretical formulations model regimes that are rarely used in practice (e.g., adaptive methods that store all the source samples).
The workshop will focus on closing this gap by providing an opportunity for theoreticians and practitioners to get together in one place and to share and debate current theories and empirical results. The goal is to promote a fruitful exchange of ideas and methods between the different communities, leading to a global advancement of the field.
Transfer Learning - Transfer Learning (TL) refers to the problem of retaining the knowledge available from one or more source tasks and applying it to efficiently develop a hypothesis for a new target task. Each task may share the same label set (domain adaptation) or involve different label sets (cross-category transfer). Most of the effort has been devoted to binary classification, while the most interesting practical transfer problems are intrinsically multi-class and the number of classes can often grow over time. Hence, it is natural to ask:
- How can knowledge transfer across multi-class tasks be formalized, and what theoretical guarantees can be provided in this setting?
- Moreover, can interclass transfer and incremental class learning be properly integrated?
- Can learning guarantees be provided when the adaptation relies only on pre-trained source hypotheses, without explicit access to the source samples, as is often the case in real-world scenarios? (A minimal sketch of this setting follows below.)
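As a concrete illustration of the last question, the following minimal sketch (in Python, with purely hypothetical data and parameter choices) shows hypothesis transfer via biased regularization: the target model is a ridge regressor whose regularizer shrinks toward a pre-trained source weight vector rather than toward zero, so adaptation never needs the source samples, in the spirit of [1].

```python
import numpy as np

def transfer_ridge(X, y, w_src, lam=1.0):
    """Fit a target linear model biased toward a pre-trained source hypothesis.

    Solves  min_w ||X w - y||^2 + lam * ||w - w_src||^2,
    i.e. ridge regression whose regularizer shrinks toward w_src instead of
    toward zero; only the source hypothesis w_src is needed, not source data.
    """
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    b = X.T @ y + lam * w_src
    return np.linalg.solve(A, b)

# Toy usage (hypothetical data): few target samples, a source hypothesis
# coming from a related task.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
w_src = w_true + 0.1 * rng.normal(size=3)   # source task close to the target
X = rng.normal(size=(10, 3))                # only 10 target samples
y = X @ w_true + 0.01 * rng.normal(size=10)
w_hat = transfer_ridge(X, y, w_src, lam=5.0)
print(w_hat)
```

With few target samples, the solution stays close to the source hypothesis; as lam decreases (or target data grows), it relies more on the target data alone.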
Multi-task Learning - Learning over multiple related tasks can outperform learning each task in isolation. This is the principal assertion of multi-task learning (MTL) and implies that the learning process may benefit from common information shared across the tasks. In the simplest case, the transfer process is symmetric and all the tasks are treated as equally related and appropriate for joint training (a minimal sketch of this setting follows the questions below).
- What happens when this condition does not hold, e.g., how to avoid negative transfer?
- Moreover, can RKHS embeddings be adequately integrated into the learning process to estimate and compare the distributions underlying the multiple tasks?
- How may embedding probability distributions help learning from data clouds?
- Can recent methods, such as deep learning or multiple kernel learning, bring us a step closer to the full automation of multi-task learning?
- How can notions from reinforcement learning such as source task selection be connected to notions from convex multi-task learning such as the task similarity matrix?
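To make the symmetric joint-training setting concrete, here is a minimal sketch (again with hypothetical data and parameters) that fits several linear tasks at once while a quadratic penalty pulls each task's weights toward the mean across tasks, a simple form of mean-regularized MTL; forcing this coupling between unrelated tasks is exactly where negative transfer can arise.

```python
import numpy as np

def mtl_mean_regularized(Xs, ys, lam=1.0, lr=0.01, n_iter=2000):
    """Jointly fit T linear tasks, pulling every task toward the task mean.

    Objective: sum_t ||X_t w_t - y_t||^2 + lam * sum_t ||w_t - w_bar||^2,
    where w_bar is the mean of the task weight vectors.  Plain gradient
    descent on this jointly convex objective.
    """
    T, d = len(Xs), Xs[0].shape[1]
    W = np.zeros((T, d))
    for _ in range(n_iter):
        w_bar = W.mean(axis=0)
        grads = np.stack([
            2 * Xs[t].T @ (Xs[t] @ W[t] - ys[t]) + 2 * lam * (W[t] - w_bar)
            for t in range(T)
        ])
        W -= lr * grads
    return W

# Toy usage (hypothetical data): three related tasks whose true weights are
# small perturbations of a common vector.
rng = np.random.default_rng(0)
w_common = rng.normal(size=4)
Xs, ys = [], []
for _ in range(3):
    X = rng.normal(size=(15, 4))
    w_t = w_common + 0.1 * rng.normal(size=4)
    Xs.append(X)
    ys.append(X @ w_t + 0.01 * rng.normal(size=15))
W = mtl_mean_regularized(Xs, ys, lam=5.0)
print(W)
```

The coupling term treats all tasks as equally related; relaxing that assumption (e.g., learning a task similarity matrix instead of a uniform mean penalty) is one way the questions above connect to negative transfer.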
References
[1] I. Kuzborskij, F. Orabona. Stability and Hypothesis Transfer Learning. ICML 2013.
[2] T. Tommasi, F. Orabona, B. Caputo. Safety in Numbers: Learning Categories from Few Examples with Multi Model Knowledge Transfer. CVPR 2010.
[3] U. Rückert, M. Kloft. Transfer Learning with Adaptive Regularizers. ECML 2011.
[4] A. Maurer, M. Pontil, B. Romera-Paredes. Sparse coding for multitask and transfer learning. ICML 2013.
[5] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, J. Wortman Vaughan. A theory of learning from different domains. Machine Learning 2010.
[6] K. Saenko, B. Kulis, M. Fritz, T. Darrell. Adapting Visual Category Models to New Domains. ECCV 2010.
[7] A. Torralba, A. Efros. Unbiased Look at Dataset Bias. CVPR 2011.