Invited Talks
Lise Getoor

[ Hall A ]

Our ability to collect, manipulate, analyze, and act on vast amounts of data is having a profound impact on all aspects of society. Much of this data is heterogeneous in nature and interlinked in a myriad of complex ways. From information integration to scientific discovery to computational social science, we need machine learning methods that are able to exploit both the inherent uncertainty and the innate structure in a domain. Statistical relational learning (SRL) is a subfield that builds on principles from probability theory and statistics to address uncertainty while incorporating tools from knowledge representation and logic to represent structure. In this talk, I will give a brief introduction to SRL, present templates for common structured prediction problems, and describe modeling approaches that mix logic, probabilistic inference, and latent variables. I’ll overview our recent work on probabilistic soft logic (PSL), an SRL framework for large-scale collective, probabilistic reasoning in relational domains. I’ll close by highlighting emerging opportunities (and challenges!) in realizing the effectiveness of data and structure for knowledge discovery.
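As a concrete illustration of the kind of logic-plus-probability modeling described above, here is a minimal Python sketch of the Lukasiewicz relaxation that underlies PSL-style hinge-loss rules; the rule, predicates, constants, and weight are hypothetical examples, not taken from the talk.

```python
# Minimal sketch of the Lukasiewicz soft-logic relaxation behind PSL-style
# rules. The rule, predicates, and weight below are illustrative examples,
# not taken from the talk.

def soft_and(a, b):
    # Lukasiewicz t-norm: conjunction over soft truth values in [0, 1].
    return max(0.0, a + b - 1.0)

def rule_distance(body, head):
    # Distance to satisfaction of the implication body -> head:
    # zero when the head is at least as true as the body.
    return max(0.0, body - head)

# Hypothetical grounding of: Friends(A, B) & Votes(A, P) -> Votes(B, P)
friends_ab = 0.9   # soft truth value of Friends(alice, bob)
votes_ap   = 0.8   # soft truth value of Votes(alice, party)
votes_bp   = 0.3   # soft truth value of Votes(bob, party)

weight = 1.5  # illustrative rule weight
penalty = weight * rule_distance(soft_and(friends_ab, votes_ap), votes_bp)
print(f"hinge-loss penalty for this grounding: {penalty:.2f}")  # 0.60

# MAP inference in PSL minimizes the weighted sum of such penalties,
# which is a convex optimization over the unobserved truth values.
```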

Yael Niv

[ Hall A ]

On the face of it, most real-world tasks are hopelessly complex from the point of view of reinforcement learning mechanisms. In particular, due to the "curse of dimensionality", even the simple task of crossing the street should, in principle, take thousands of trials to learn to master. But we are better than that. How does our brain do it? In this talk, I will argue that the hardest part of learning is not assigning values or learning policies, but rather deciding on the boundaries of similarity between experiences, which define the "states" that we learn about. I will show behavioral evidence that humans and animals are constantly engaged in this representation learning process, and suggest that in the not-too-distant future we may be able to read out these representations from the brain, and thereby find out how the brain has mastered this complex problem. I will formalize the problem of learning a state representation in terms of Bayesian inference with infinite-capacity models, and suggest that an understanding of the computational problem of representation learning can lead to insights into the machine learning problem of transfer learning, as well as psychological and neuroscientific questions about the interplay between memory and learning.
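As an illustrative sketch of what "Bayesian inference with infinite-capacity models" can look like, the following toy Python code draws state assignments from a Chinese restaurant process prior, one canonical infinite-capacity model; the concentration parameter and the prior-only sampling (no observation likelihood) are simplifying assumptions, not details from the talk.

```python
import random

# Toy sketch of infinite-capacity state inference: sequentially assign
# observations to latent "states" under a Chinese restaurant process (CRP)
# prior. A real model would combine this prior with a likelihood over
# observations; here we sample from the prior alone for illustration.

def crp_assignments(n_obs, alpha=1.0, seed=0):
    rng = random.Random(seed)
    counts = []  # number of observations assigned to each existing state
    labels = []
    for _ in range(n_obs):
        total = sum(counts) + alpha
        # Probability of joining state k is counts[k] / total;
        # probability of opening a brand-new state is alpha / total.
        r = rng.random() * total
        for k, c in enumerate(counts):
            if r < c:
                counts[k] += 1
                labels.append(k)
                break
            r -= c
        else:
            counts.append(1)              # open a new state
            labels.append(len(counts) - 1)
    return labels

print(crp_assignments(15))  # e.g. a clustering like [0, 0, 1, 0, 2, ...]
```

The number of states is unbounded a priori but grows only as the data demand it, which is exactly the flexibility the abstract appeals to.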

Pieter Abbeel

[ Hall A ]

Kate Crawford

[ Hall A ]

Computer scientists are increasingly concerned about the many ways that machine learning can reproduce and reinforce forms of bias. When ML systems are incorporated into core social institutions, like healthcare, criminal justice, and education, issues of bias and discrimination can be extremely serious. But what can be done about it? Part of the trouble with bias in machine learning in high-stakes decision making is that it can be the result of one or many factors: the training data, the model, the system goals, and whether the system works less well for some populations, among several others. Given the difficulty of understanding how a machine learning system produced a particular result, bias is often discovered only after a system has been producing unfair results in the wild. But there is another problem as well: the definition of bias changes significantly depending on your discipline, and there are exciting approaches from other fields that have not yet been taken up by computer science. This talk will look at the recent literature on bias in machine learning, consider how we can incorporate approaches from the social sciences, and offer new strategies to address bias.
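One concrete, if simplified, reading of "works less well for some populations" is a per-group error-rate comparison; the sketch below is a generic illustration with hypothetical group names and data, not a method proposed in the talk.

```python
# Hedged illustration of one bias check mentioned in the abstract: whether
# a system "works less well for some populations". Group names, labels,
# and predictions below are hypothetical.

from collections import defaultdict

def error_rates_by_group(groups, y_true, y_pred):
    errors = defaultdict(int)
    totals = defaultdict(int)
    for g, yt, yp in zip(groups, y_true, y_pred):
        totals[g] += 1
        errors[g] += int(yt != yp)
    return {g: errors[g] / totals[g] for g in totals}

groups = ["a", "a", "a", "b", "b", "b"]
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 0]
print(error_rates_by_group(groups, y_true, y_pred))
# {'a': 0.0, 'b': 1.0}: a gap like this is one symptom of disparity, but
# which fairness definition to apply is exactly the cross-disciplinary
# question the talk raises.
```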

Brendan J Frey

[ Hall A ]

We have figured out how to write to the genome using DNA editing, but we don't know what the outcomes of genetic modifications will be. This is called the "genotype-phenotype gap". To close the gap, we need to reverse-engineer the genetic code, which is very hard because biology is too complicated and noisy for human interpretation. Machine learning and AI are needed. The data? Six billion letters per genome, hundreds of thousands of types of biomolecules, hundreds of cell types, over seven billion people on the planet. A new generation of "Bio-AI" researchers is poised to crack the problem, but we face extraordinary challenges. I'll discuss these challenges, focusing on which branches of AI and machine learning will have the most impact and why.
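As a minimal example of what turning "six billion letters per genome" into model input might look like, here is a generic one-hot encoding sketch in Python with numpy; it is illustrative only, not the speaker's actual pipeline.

```python
import numpy as np

# Illustrative sketch (not the speaker's pipeline): one-hot encoding a DNA
# sequence so the raw letters become numeric input a model can consume.

BASES = "ACGT"
BASE_INDEX = {b: i for i, b in enumerate(BASES)}

def one_hot(seq):
    # Returns an array of shape (len(seq), 4); unknown bases such as 'N'
    # are encoded as all-zero rows.
    out = np.zeros((len(seq), len(BASES)), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        j = BASE_INDEX.get(base)
        if j is not None:
            out[i, j] = 1.0
    return out

print(one_hot("ACGTN"))
```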

(Posner Lecture)
John Platt

[ Hall A ]

My goal is for everyone on Earth to be able to use the same amount of energy per year as the average U.S. citizen does today. Reaching this goal by 2100 will require 0.2 yottajoules (0.2 x 10^24 joules), an astounding amount of energy.
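A back-of-the-envelope check of that figure, under assumed round numbers (per-capita consumption of roughly 300 GJ/year, a population of about 10 billion, and an 80-year horizon, none of which are values stated in the talk):

```python
# Back-of-the-envelope check of the 0.2 x 10^24 J figure. The inputs are
# rough, assumed round numbers, not values given in the talk.

us_per_capita = 3.0e11   # ~300 GJ per person per year (approx. U.S. level)
population    = 1.0e10   # ~10 billion people later this century
years         = 80       # roughly from now until 2100

total_joules = us_per_capita * population * years
print(f"{total_joules:.1e} J")  # ~2.4e23 J, on the order of 0.2 yottajoules
```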

How can human civilization obtain this much energy without flooding the atmosphere with carbon dioxide? To answer this question, I'll first dive into the economics of electricity, in order to understand the limits of current zero-carbon technologies. These limits cause us to investigate zero-carbon technologies that are still being developed, such as fusion energy. For fusion, I'll show why it's been a tough problem for almost 70 years, and why there may be a solution in the near future. I'll also explain how we've been using machine learning and optimization to accelerate fusion research.
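The abstract does not say which methods are used, so as a generic illustration of the pattern, treating an expensive experiment or simulator as a black box and searching its parameter space, here is a toy random-search sketch; the parameter names and objective are hypothetical.

```python
import random

# Generic sketch of black-box optimization against an expensive experiment
# or simulator. Parameter names and the toy objective are hypothetical and
# stand in for whatever figure of merit a real fusion experiment reports.

def plasma_score(params):
    # Stand-in for an expensive evaluation returning a figure of merit
    # (higher is better); here a simple peak at (0.6, 0.3).
    x, y = params["field"], params["beam_energy"]
    return -((x - 0.6) ** 2 + (y - 0.3) ** 2)

def random_search(n_trials=200, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {"field": rng.random(), "beam_energy": rng.random()}
        score = plasma_score(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

print(random_search())
```

In practice one would replace blind random search with a sample-efficient method (e.g., a surrogate-model or Bayesian approach), since each real evaluation is costly; the sketch only shows the overall loop.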

(Breiman Lecture)
Yee Whye Teh

[ Hall A ]

Probabilistic and Bayesian reasoning is one of the principal theoretical pillars of our understanding of machine learning. Over the last two decades, it has inspired a whole range of successful machine learning methods and influenced the thinking of many researchers in the community. On the other hand, in the last few years the rise of deep learning has completely transformed the field and led to a string of phenomenal, era-defining successes. In this talk I will explore the interface between these two perspectives on machine learning and, through a number of projects I have been involved in, explore questions like: How can probabilistic thinking help us understand deep learning methods or lead us to interesting new methods? Conversely, how can deep learning technologies help us develop advanced probabilistic methods?
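As one concrete point on that interface, the following numpy sketch shows the reparameterization trick plus a closed-form Gaussian KL term, the ingredients that let gradient-based deep learning train variational latent-variable models; this is a generic illustration, not a specific project from the talk.

```python
import numpy as np

# Generic illustration of deep learning meeting probabilistic modeling:
# the reparameterization trick keeps samples differentiable in the
# distribution parameters, so stochastic gradients can train the model.

rng = np.random.default_rng(0)

def sample_gaussian(mu, log_var):
    # z = mu + sigma * eps is differentiable in (mu, log_var), unlike
    # sampling z directly from N(mu, sigma^2).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL(N(mu, sigma^2) || N(0, I)) used in variational
    # objectives, with log_var = log(sigma^2).
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

mu, log_var = np.array([0.5, -0.2]), np.array([-1.0, 0.3])
z = sample_gaussian(mu, log_var)
print("sample:", z, "KL:", kl_to_standard_normal(mu, log_var))
```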