

Invited Talk

Probabilistic Machine Learning: Foundations and Frontiers

Dec 8, 6:00 AM - 6:50 AM Level 2 room 210 AB
Probabilistic modelling provides a mathematical framework for understanding what learning is, and has therefore emerged as one of the principal approaches for designing computer algorithms that learn from data acquired through experience. I will review the foundations of this field, from basics to Bayesian nonparametric models and scalable inference. I will then highlight some current areas of research at the frontiers of machine learning, leading up to topics such as probabilistic programming, Bayesian optimisation, the rational allocation of computational resources, and the Automatic Statistician.
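A minimal sketch of the Bayesian learning loop the abstract refers to: a conjugate Beta-Bernoulli model whose posterior over a coin's bias sharpens as observations arrive. The model and data below are illustrative assumptions added for this writeup, not material from the talk.

```python
import numpy as np

# Beta(a, b) prior over a coin's bias, updated to a Beta posterior as
# binary observations arrive (the Beta prior is conjugate to the
# Bernoulli likelihood, so the update is just counting).
rng = np.random.default_rng(0)
true_bias = 0.7
a, b = 1.0, 1.0                        # Beta(1, 1): uniform prior on [0, 1]
for flip in rng.random(50) < true_bias:
    a, b = a + flip, b + (1 - flip)    # conjugate update: count heads/tails

posterior_mean = a / (a + b)
print(f"posterior mean {posterior_mean:.2f} vs true bias {true_bias}")
```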
Speaker
Zoubin Ghahramani

Zoubin Ghahramani is Professor of Information Engineering at the University of Cambridge, where he leads the Machine Learning Group. He studied computer science and cognitive science at the University of Pennsylvania, obtained his PhD from MIT in 1995, and was a postdoctoral fellow at the University of Toronto. His academic career includes concurrent appointments as one of the founding members of the Gatsby Computational Neuroscience Unit in London, and as a faculty member of CMU's Machine Learning Department for over 10 years. His current research interests include statistical machine learning, Bayesian nonparametrics, scalable inference, probabilistic programming, and building an automatic statistician. He has held a number of leadership roles as programme and general chair of the leading international conferences in machine learning, including AISTATS (2005), ICML (2007, 2011), and NIPS (2013, 2014). In 2015 he was elected a Fellow of the Royal Society.
Invited Talk

Incremental Methods for Additive Cost Convex Optimization

Dec 8, 11:00 AM - 11:50 AM Level 2 room 210 AB

Motivated by machine learning problems over large data sets and distributed optimization over networks, we consider the problem of minimizing the sum of a large number of convex component functions. We study incremental gradient methods for solving such problems, which use information about a single component function at each iteration. We provide new convergence rate results under some assumptions. We also consider incremental aggregated gradient methods, which compute a single component function gradient at each iteration while using outdated gradients of all component functions to approximate the entire global cost function, and provide new linear rate results.

This is joint work with Mert Gurbuzbalaban and Pablo Parrilo.
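A minimal numpy sketch of the incremental gradient idea described above, on a least-squares instance (the problem data and step-size schedule are illustrative assumptions, not results from the talk): each inner iteration updates the iterate using the gradient of a single component function.

```python
import numpy as np

# Minimize f(x) = sum_i (a_i^T x - b_i)^2 with the incremental gradient
# method: each iteration uses the gradient of one component f_i only.
rng = np.random.default_rng(0)
m, n = 200, 5
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true + 0.01 * rng.normal(size=m)

x = np.zeros(n)
for k in range(100):                    # passes over the components
    step = 0.5 / (k + 1)                # diminishing step size
    for i in range(m):                  # one component gradient per iteration
        grad_i = 2.0 * (A[i] @ x - b[i]) * A[i]
        x -= (step / m) * grad_i

print("distance to x_true:", np.linalg.norm(x - x_true))
```

The aggregated variant mentioned in the abstract would instead keep a table of the most recently computed gradient for every component and move along their (possibly outdated) sum at each iteration.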

Speaker
Asuman Ozdaglar

Asu Ozdaglar received the B.S. degree in electrical engineering from the Middle East Technical University, Ankara, Turkey, in 1996, and the S.M. and Ph.D. degrees in electrical engineering and computer science from the Massachusetts Institute of Technology, Cambridge, in 1998 and 2003, respectively. She is currently a professor in the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology, and the director of the Laboratory for Information and Decision Systems. Her research expertise includes optimization theory, with emphasis on nonlinear programming and convex analysis; game theory, with applications in communication, social, and economic networks; distributed optimization and control; and network analysis, with special emphasis on contagious processes, systemic risk, and dynamic control. Professor Ozdaglar is the recipient of a Microsoft fellowship, the MIT Graduate Student Council Teaching Award, the NSF CAREER Award, the 2008 Donald P. Eckman Award of the American Automatic Control Council, the Class of 1943 Career Development Chair, the inaugural Steven and Renee Innovation Fellowship, and the 2014 Spira Teaching Award. She served on the Board of Governors of the Control System Society in 2010 and was an associate editor for IEEE Transactions on Automatic Control. She is currently the area co-editor for a new area of the journal Operations Research, entitled "Games, Information and Networks." She is the co-author of the book "Convex Analysis and Optimization" (Athena Scientific, 2003).
Invited Talk

Post-selection Inference for Forward Stepwise Regression, Lasso and Other Adaptive Statistical Procedures

Dec 9, 6:00 AM - 6:50 AM Level 2 room 210 AB
Talk Slides

In this talk I will present new inference tools for adaptive statistical procedures. These tools provide p-values and confidence intervals that have correct "post-selection" properties: they account for the selection that has already been carried out on the same data. I will discuss applications of these ideas to a wide variety of problems, including Forward Stepwise Regression, the Lasso, PCA, and graphical models. I will also discuss computational issues and software for implementing these ideas.

This talk represents work (some joint) with many people including Jonathan Taylor, Richard Lockhart, Ryan Tibshirani, Will Fithian, Jason Lee, Dennis Sun, Yuekai Sun and Yunjin Choi.
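To see why the "post-selection" adjustment described above matters, here is a small simulation (an illustration added for this writeup, not the talk's method): under a global null, picking the predictor most correlated with the response and then computing a classical p-value on the same data rejects far more often than the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p, reps = 100, 20, 1000
rejections = 0
for _ in range(reps):
    X = rng.normal(size=(n, p))
    y = rng.normal(size=n)                 # global null: y is independent of X
    j = np.argmax(np.abs(X.T @ y))         # selection: first forward-stepwise step
    _, pval = stats.pearsonr(X[:, j], y)   # naive p-value, ignoring selection
    rejections += pval < 0.05
print("naive false-rejection rate:", rejections / reps)   # far above 0.05
```

The tools described in the talk produce p-values and intervals that remain valid after exactly this kind of data-driven selection.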

Speaker
Robert Tibshirani

Robert Tibshirani is a Professor in the Departments of Statistics and Health Research and Policy at Stanford University. He received a B.Math. from the University of Waterloo, an M.Sc. from the University of Toronto, and a Ph.D. from Stanford University. He was a Professor at the University of Toronto from 1985 to 1998. In his work he has made important contributions to the analysis of complex datasets, most recently in genomics and proteomics. Some of his best-known contributions are the lasso, which uses L1 penalization in regression and related problems, generalized additive models, and Significance Analysis of Microarrays (SAM). He has also co-authored four books: "Generalized Additive Models", "An Introduction to the Bootstrap", "The Elements of Statistical Learning" (now in its second edition), and "Statistical Learning with Sparsity".
Invited Talk

Diagnosis and Therapy of Psychiatric Disorders Based on Brain Dynamics

Dec 9, 11:00 AM - 11:50 AM Level 2 room 210 AB

Arthur Winfree was one of the pioneers who postulated that several diseases are actually disorders of the dynamics of biological systems. Following this path, many now believe that psychiatric diseases are disorders of brain dynamics. The combination of noninvasive brain measurement techniques, brain decoding and neurofeedback, and machine learning algorithms has opened up a revolutionary pathway to quantitative diagnosis and therapy of neuropsychiatric disorders.

Speaker
Mitsuo Kawato

Mitsuo Kawato received the B.S. degree in physics from Tokyo University in 1976 and the M.E. and Ph.D. degrees in biophysical engineering from Osaka University in 1978 and 1981, respectively. From 1981 to 1988, he was a faculty member and lecturer at Osaka University. From 1988, he was a senior researcher and then a supervisor at ATR Auditory and Visual Perception Research Laboratories. In 1992, he became head of Department 3 of ATR Human Information Processing Research Laboratories. Since 2003, he has been Director of ATR Computational Neuroscience Laboratories. For the last 30 years, he has been working in computational neuroscience.
Invited Talk

Computational Principles for Deep Neuronal Architectures

Dec 9, 1:30 PM - 2:20 PM Level 2 room 210 AB
Recent progress in machine learning applications of deep neural networks has highlighted the need for a theoretical understanding of the capacity and limitations of these architectures. I will review our understanding of sensory processing in such architectures in the context of the hierarchies of processing stages observed in many brain systems. I will also address the possible roles of recurrent and top-down connections, which are prominent features of brain information processing circuits.
Speaker
Haim Sompolinsky

Invited Talk

Learning with Intelligent Teacher: Similarity Control and Knowledge Transfer

Dec 10, 6:00 AM - 6:50 AM Level 2 room 210 AB

In the talk, I will introduce a model of learning with an Intelligent Teacher. In this model, the Intelligent Teacher supplies (some) training examples $(x_i, y_i),\ i = 1, \dots, l$, $x_i \in X$, $y_i \in \{-1, 1\}$, with additional (privileged) information $x_i^* \in X^*$, forming training triplets $(x_i, x_i^*, y_i),\ i = 1, \dots, l$. Privileged information is available only for the training examples and not available for test examples. Using privileged information, it is required to find training processes that are better than the classical ones: processes that use fewer examples, or are more accurate with the same number of examples. In this lecture, I will present two additional mechanisms that exist in learning with an Intelligent Teacher:

* the mechanism to control the Student's concept of similarity between examples, and
* the mechanism to transfer knowledge obtained in the space of privileged information to the desired space of decision rules.

Privileged information exists for many inference problems, and Student-Teacher interaction can be considered a basic element of intelligent behavior.
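A sketch of the knowledge-transfer mechanism in the spirit of LUPI and generalized distillation, not the SVM+ machinery from the talk itself: a teacher fit in the privileged space $X^*$ produces real-valued scores that supervise a student operating in $X$. The synthetic data and model choices are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.model_selection import train_test_split

# X* is a clean view available only at training time;
# X is a noisy view available at both training and test time.
rng = np.random.default_rng(0)
n = 1000
x_star = rng.normal(size=(n, 2))                   # privileged space X*
y = (x_star.sum(axis=1) > 0).astype(int)           # labels are clean in X*
x = x_star + 2.0 * rng.normal(size=(n, 2))         # decision space X (noisy)

x_tr, x_te, xs_tr, _, y_tr, y_te = train_test_split(
    x, x_star, y, test_size=0.5, random_state=0)

# Teacher learns in the privileged space and emits real-valued scores.
teacher = LogisticRegression().fit(xs_tr, y_tr)
scores = teacher.decision_function(xs_tr)

# Student in X regresses onto the teacher's scores (knowledge transfer),
# then classifies by the sign of its prediction.
student = Ridge(alpha=1.0).fit(x_tr, scores)
acc_transfer = ((student.predict(x_te) > 0).astype(int) == y_te).mean()

# Baseline: a student trained on the labels alone, without the teacher.
plain = LogisticRegression().fit(x_tr, y_tr)
acc_plain = (plain.predict(x_te) == y_te).mean()
print(f"student with transfer: {acc_transfer:.2f}, without: {acc_plain:.2f}")
```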

Speaker
Vladimir Vapnik

Vladimir Vapnik is a leading scientist in the field of machine learning. He laid the foundations of the general learning theory called Vapnik-Chervonenkis (VC) theory, introduced one of the most effective machine learning methods, the Support Vector Machine (SVM), and introduced a new learning paradigm called Learning with Intelligent Teacher.

Vladimir is a member of the National Academy of Engineering (2006) and the winner of many international awards, including the Humboldt Award (2003), the Gabor Award (2005), the Paris Kanellakis Award (2008), the Neural Networks Pioneer Award (2010), the Frank Rosenblatt Award (2012), the Benjamin Franklin Medal in Computer and Cognitive Science (2012), the C&C Prize (Japan), and the Kampé de Fériet Award (2014).