Representation and Learning Methods for Complex Outputs
Richard Zemel · Dale Schuurmans · Kilian Q Weinberger · Yuhong Guo · Jia Deng · Francesco Dinuzzo · Hal Daumé III · Honglak Lee · Noah A Smith · Richard Sutton · Jiaqian YU · Vitaly Kuznetsov · Luke Vilnis · Hanchen Xiong · Calvin Murdock · Thomas Unterthiner · Jean-Francis Roy · Martin Renqiang Min · Hichem SAHBI · Fabio Massimo Zanzotto

Sat Dec 05:30 AM -- 03:30 PM PST @ Level 5; room 512 b, f
Event URL: https://sites.google.com/site/complexoutputs2014/

Learning problems that involve complex outputs are becoming increasingly prevalent in machine learning research. For example, work on image and document tagging now considers thousands of labels chosen from an open vocabulary, with only partially labeled instances available for training. Given limited labeled data, these settings also create zero-shot learning problems with respect to omitted tags, leading to the challenge of inducing semantic label representations. Furthermore, prediction targets are often abstractions that are difficult to predict from raw input data, but can be better predicted from learned latent representations. Finally, when labels exhibit complex inter-relationships it is imperative to capture latent label relatedness to improve generalization.

This workshop will bring together separate communities that have been working on novel representation and learning methods for problems with complex outputs. Although representation learning has already achieved state-of-the-art results in standard settings, recent research has begun to explore learned representations in more complex scenarios, such as structured output prediction, multi-modal co-embedding, multi-label prediction, and zero-shot learning. Unfortunately, these lines of research have been pursued in separate sub-areas, without proper connections drawn to similar ideas elsewhere, so general methods and understanding have not yet emerged from the disconnected pursuits. The aim of this workshop is to identify fundamental strategies, highlight differences, and assess the prospects for developing a systematic body of theory and methods for learning problems with complex outputs. The target communities include researchers working on image tagging, document categorization, natural language processing, large-vocabulary speech recognition, deep learning, latent variable modeling, and large-scale multi-label learning.

Relevant topics include:
- Multi-label learning with large and/or incomplete output spaces
- Zero-shot learning
- Label embedding and co-embedding
- Learning output kernels
- Output structure learning
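
To make the label-embedding and zero-shot setting concrete, here is a minimal sketch. All labels, attribute vectors, and data below are hypothetical, and the linear-map-plus-nearest-embedding recipe is just one simple instantiation of these ideas, not a method proposed by the workshop:

```python
import numpy as np

# Hypothetical attribute vectors serving as label embeddings.
# "truck" is withheld from training (the zero-shot label); note that it
# lies in the span of the seen embeddings (truck = dog + car).
label_emb = {
    "cat":   np.array([1.0, 0.0, 1.0, 0.0]),
    "dog":   np.array([1.0, 0.0, 0.0, 1.0]),
    "car":   np.array([0.0, 1.0, 0.0, 0.0]),
    "truck": np.array([1.0, 1.0, 0.0, 1.0]),
}
seen = ["cat", "dog", "car"]

# Synthetic training data: instance features are noisy copies of the
# label's embedding (a stand-in for real image/document features).
rng = np.random.default_rng(0)
X = np.vstack([label_emb[l] + 0.1 * rng.standard_normal(4)
               for l in seen for _ in range(20)])
S = np.vstack([label_emb[l] for l in seen for _ in range(20)])

# Ridge regression for a linear map W from feature space into the
# label-embedding space: W = (X^T X + lam*I)^{-1} X^T S.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ S)

def predict(x, candidates):
    """Project features into embedding space; return the candidate label
    whose embedding has the highest cosine similarity to the projection."""
    z = x @ W
    return max(candidates,
               key=lambda l: z @ label_emb[l]
               / (np.linalg.norm(z) * np.linalg.norm(label_emb[l])))

# Zero-shot prediction: a "truck"-like instance, never seen during
# training, is matched to the unseen label via its embedding.
x_new = label_emb["truck"] + 0.1 * rng.standard_normal(4)
print(predict(x_new, list(label_emb)))
```

Because prediction happens in the shared embedding space, any label with a known attribute vector can be scored at test time, seen or not; this is the core trick behind much of the zero-shot and co-embedding work the topics above refer to.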

Author Information

Richard Zemel (Vector Institute/University of Toronto)
Dale Schuurmans (University of Alberta & Google Brain)
Kilian Q Weinberger (Washington University in St. Louis)
Yuhong Guo (Carleton University)
Jia Deng (University of Michigan)
Francesco Dinuzzo (Expedia Group)
Hal Daumé III (Univ of Maryland / Microsoft Research)
Honglak Lee (Google / U. Michigan)
Noah A Smith (Carnegie Mellon University)
Rich Sutton (DeepMind & Univ of Alberta)

Richard S. Sutton is a professor and iCORE chair in the department of computing science at the University of Alberta. He is a fellow of the Association for the Advancement of Artificial Intelligence and co-author of the textbook "Reinforcement Learning: An Introduction" from MIT Press. Before joining the University of Alberta in 2003, he worked in industry at AT&T and GTE Labs, and in academia at the University of Massachusetts. He received a PhD in computer science from the University of Massachusetts in 1984 and a BA in psychology from Stanford University in 1978. Rich's research interests center on the learning problems facing a decision-maker interacting with its environment, which he sees as central to artificial intelligence. He is also interested in animal learning psychology, in connectionist networks, and generally in systems that continually improve their representations and models of the world.

Jiaqian YU (Ecole Centrale Paris)
Vitaly Kuznetsov (HRT)

Vitaly Kuznetsov is a research scientist at Google. Prior to joining Google Research, Vitaly received his Ph.D. in mathematics from the Courant Institute of Mathematical Sciences at New York University. Vitaly has contributed to a number of different areas in machine learning, in particular the development of the theory and algorithms for forecasting non-stationary time series. At Google, his work is focused on the design and implementation of large-scale machine learning tools and algorithms for time series modeling, forecasting and anomaly detection. His current research interests include all aspects of applied and theoretical time series analysis, in particular, in non-stationary environments.

Luke Vilnis (University of Massachusetts, Amherst)
Hanchen Xiong (University of Innsbruck)
Calvin Murdock (Carnegie Mellon University)
Tom Unterthiner (LIT AI Lab / University Linz)
Jean-Francis Roy (Université Laval)
Martin Renqiang Min (NEC Labs America)
Hichem SAHBI (CNRS, TELECOM ParisTech)
Fabio Massimo Zanzotto (University of Rome "Tor Vergata")
