

Invited Talk

Powering the next 100 years

Dec 4, 5:30 PM - 6:20 PM Hall A
My goal is to enable everyone on Earth to use as much energy per year as the average U.S. citizen does today. Reaching this goal by 2100 will require about 0.2 yottajoules (0.2 x 10^24 joules) of energy, an astounding amount. How can human civilization obtain this much energy without flooding the atmosphere with carbon dioxide? To answer this question, I'll first dive into the economics of electricity in order to understand the limits of current zero-carbon technologies. These limits lead us to investigate zero-carbon technologies that are still being developed, such as fusion energy. For fusion, I'll show why it has been a tough problem for almost 70 years, and why there may be a solution in the near future. I'll also explain how we've been using machine learning and optimization to accelerate fusion research.
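As a rough sanity check on the 0.2 yottajoule figure, here is a back-of-envelope calculation. The per-capita consumption, population, and time horizon below are illustrative assumptions, not numbers from the talk:

```python
# Back-of-envelope check of the ~0.2 YJ target (all inputs are rough assumptions).
PER_CAPITA_J_PER_YEAR = 3.0e11   # ~300 GJ/yr, approximate U.S. per-capita primary energy
POPULATION = 1.0e10              # ~10 billion people by late century
YEARS = 80                       # roughly the decades remaining until 2100

total_joules = PER_CAPITA_J_PER_YEAR * POPULATION * YEARS
print(f"{total_joules:.1e} J ~= {total_joules / 1e24:.2f} YJ")
```

With these assumptions the total comes out to about 0.24 YJ, in line with the figure quoted in the abstract.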
Speaker
John Platt

John Platt is best known for his work in machine learning: the SMO algorithm for training support vector machines and methods for calibrating the outputs of models. He was an early adopter of convolutional neural networks in the 1990s. However, John has worked in many different fields: data systems, computational geometry, object recognition, media UIs, analog computation, handwriting recognition, and applied math. He has discovered two asteroids and won a Technical Academy Award in 2006 for his work in computer graphics. John currently leads the Applied Science branch of Google Research, which works at the intersection of computer science and physical or biological science.
Invited Talk

Why AI Will Make it Possible to Reprogram the Human Genome

Dec 5, 9:00 AM - 9:50 AM Hall A
We have figured out how to write to the genome using DNA editing, but we don't know what the outcomes of genetic modifications will be. This is called the "genotype-phenotype gap". To close the gap, we need to reverse-engineer the genetic code, which is very hard because biology is too complicated and noisy for human interpretation. Machine learning and AI are needed. The data? Six billion letters per genome, hundreds of thousands of types of biomolecules, hundreds of cell types, over seven billion people on the planet. A new generation of "Bio-AI" researchers is poised to crack the problem, but we face extraordinary challenges. I'll discuss these challenges, focusing on which branches of AI and machine learning will have the most impact and why.
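To get a feel for the scale of those numbers, here is a quick back-of-envelope estimate of the raw sequence data alone. The 2-bits-per-letter encoding and the round population figure are simplifying assumptions; real sequencing data (with quality scores, coverage, etc.) is far larger:

```python
# Rough scale of raw human-genome data implied by the numbers in the abstract.
LETTERS_PER_GENOME = 6e9     # ~6 billion bases in a diploid human genome
PEOPLE = 7e9                 # ~7 billion people
BITS_PER_LETTER = 2          # A, C, G, T -> 2 bits each (idealized lower bound)

bytes_per_genome = LETTERS_PER_GENOME * BITS_PER_LETTER / 8
total_exabytes = bytes_per_genome * PEOPLE / 1e18
print(f"{bytes_per_genome / 1e9:.1f} GB per genome, {total_exabytes:.1f} EB for everyone")
```

Even at this idealized lower bound, sequence for every person on the planet runs to roughly ten exabytes, before adding any of the biomolecule or cell-type measurements the abstract mentions.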
Speaker
Brendan J Frey

Brendan Frey is Co-Founder and CEO of Deep Genomics, a Co-Founder of the Vector Institute for Artificial Intelligence, and a Professor of Engineering and Medicine at the University of Toronto. He is internationally recognized as a leader in machine learning and genome biology, and his group has published over a dozen papers on these topics in Science, Nature, and Cell. His work on using deep learning to identify protein-DNA interactions was recently highlighted on the front cover of Nature Biotechnology (2015), while his work on deep learning dates back to an early paper on what are now called variational autoencoders (Science 1995). He is a Fellow of the Royal Society of Canada, a Fellow of the Institute of Electrical and Electronics Engineers, and a Fellow of the American Association for the Advancement of Science. He has consulted for several industrial research and development laboratories in Canada, the United States, and England, and has served on the Technical Advisory Board of Microsoft Research.
Invited Talk

The Trouble with Bias

Dec 5, 1:50 PM - 2:40 PM Hall A
Computer scientists are increasingly concerned about the many ways that machine learning can reproduce and reinforce forms of bias. When ML systems are incorporated into core social institutions like healthcare, criminal justice, and education, issues of bias and discrimination can be extremely serious. But what can be done about it? Part of the trouble with bias in high-stakes machine learning is that it can be the result of one or many factors: the training data, the model, the system goals, and whether the system works less well for some populations, among several others. Given the difficulty of understanding how a machine learning system produced a particular result, bias is often discovered only after a system has been producing unfair results in the wild. But there is another problem as well: the definition of bias changes significantly depending on your discipline, and there are promising approaches from other fields that computer science has not yet drawn on. This talk will review the recent literature on bias in machine learning, consider how we can incorporate approaches from the social sciences, and offer new strategies for addressing bias.
Speaker
Kate Crawford

Kate Crawford is a leading academic on the social and political implications of artificial intelligence. Over a 20-year career, her work has focused on understanding large-scale data systems and AI in the wider contexts of history, politics, labor, and the environment. Kate is based in New York, where she co-founded the AI Now Institute; she is also a Senior Principal Researcher at Microsoft Research and the inaugural Visiting Chair in AI and Justice at the École Normale Supérieure for 2021. Her Anatomy of an AI System (with Vladan Joler), which maps the full lifecycle of a single Amazon Echo from mines in the Congo to e-waste pits in Ghana, won the Beazley Design of the Year Award in 2019 and is in the permanent collection of the Museum of Modern Art in New York. Kate's forthcoming book is Atlas of AI: On Power, Politics and the Planetary Costs of AI (Yale, 2021).
Invited Talk

The Unreasonable Effectiveness of Structure

Dec 6, 9:00 AM - 9:50 AM Hall A
Our ability to collect, manipulate, analyze, and act on vast amounts of data is having a profound impact on all aspects of society. Much of this data is heterogeneous in nature and interlinked in a myriad of complex ways. From information integration to scientific discovery to computational social science, we need machine learning methods that are able to exploit both the inherent uncertainty and the innate structure in a domain. Statistical relational learning (SRL) is a subfield that builds on principles from probability theory and statistics to address uncertainty, while incorporating tools from knowledge representation and logic to represent structure. In this talk, I will give a brief introduction to SRL, present templates for common structured prediction problems, and describe modeling approaches that mix logic, probabilistic inference, and latent variables. I'll overview our recent work on probabilistic soft logic (PSL), an SRL framework for large-scale collective probabilistic reasoning in relational domains. I'll close by highlighting emerging opportunities (and challenges!) in realizing the effectiveness of data and structure for knowledge discovery.
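The soft-logic relaxation that PSL builds on is easy to sketch, even though the hinge-loss machinery and weight learning are well beyond a few lines. The toy rule, predicates, and truth values below are hypothetical, not from the talk; the operators are the standard Lukasiewicz relaxations of the boolean connectives to [0, 1]-valued truth:

```python
# Minimal sketch of the Lukasiewicz soft logic underlying PSL-style reasoning:
# truth values live in [0, 1] instead of {0, 1}, so rule violations are continuous.
def soft_and(a, b):
    """Lukasiewicz conjunction: max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

def soft_or(a, b):
    """Lukasiewicz disjunction: min(1, a + b)."""
    return min(1.0, a + b)

def distance_to_satisfaction(body, head):
    """How far a rule "body -> head" is from being satisfied (0 = satisfied)."""
    return max(0.0, body - head)

# Toy collective-inference rule: Friend(A,B) AND Votes(A,P) -> Votes(B,P)
friend_ab, votes_ap, votes_bp = 0.9, 0.8, 0.4   # hypothetical soft truth values
body = soft_and(friend_ab, votes_ap)            # ~0.7
print(distance_to_satisfaction(body, votes_bp)) # continuous penalty, here ~0.3
```

In a full PSL model, each weighted rule contributes such a distance-to-satisfaction term to a hinge-loss Markov random field, and inference minimizes the weighted sum over all groundings.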
Speaker
Lise Getoor

Lise Getoor is a professor in the Computer Science Department at the University of California, Santa Cruz. Her research areas include machine learning, data integration, and reasoning under uncertainty, with an emphasis on graph and network data. She has over 250 publications and extensive experience with machine learning and probabilistic modeling methods for graph and network data. She is a Fellow of the Association for the Advancement of Artificial Intelligence, an elected board member of the International Machine Learning Society, serves on the board of the Computing Research Association (CRA), and was a co-chair of ICML 2011. She is a recipient of an NSF CAREER Award and eleven best paper and best student paper awards. She received her PhD from Stanford University in 2001, her MS from UC Berkeley, and her BS from UC Santa Barbara, and was a professor in the Computer Science Department at the University of Maryland, College Park from 2001 to 2013.
Invited Talk

Deep Learning for Robotics

Dec 6, 1:50 PM - 2:40 PM Hall A
Speaker
Pieter Abbeel

Pieter Abbeel is Professor and Director of the Robot Learning Lab at UC Berkeley (since 2008), Co-Director of the Berkeley AI Research (BAIR) Lab, Co-Founder of covariant.ai (since 2017) and of Gradescope (since 2014), an Advisor to OpenAI, a Founding Faculty Partner of the AI@TheHouse venture fund, and an advisor to many AI/robotics start-ups. He works in machine learning and robotics; in particular, his research focuses on making robots learn from people (apprenticeship learning), making robots learn through their own trial and error (reinforcement learning), and speeding up skill acquisition through learning-to-learn (meta-learning). His robots have learned advanced helicopter aerobatics, knot-tying, basic assembly, organizing laundry, locomotion, and vision-based robotic manipulation. He has won numerous awards, including best paper awards at ICML, NIPS, and ICRA, early career awards from the NSF, DARPA, ONR, AFOSR, Sloan, TR35, and IEEE, and the Presidential Early Career Award for Scientists and Engineers (PECASE). Pieter's work is frequently featured in the popular press, including the New York Times, BBC, Bloomberg, the Wall Street Journal, Wired, Forbes, Tech Review, and NPR.
Invited Talk

Learning State Representations

Dec 7, 9:00 AM - 9:50 AM Hall A
On the face of it, most real-world tasks are hopelessly complex from the point of view of reinforcement learning mechanisms. In particular, due to the "curse of dimensionality", even the simple task of crossing the street should, in principle, take thousands of trials to learn to master. But we are better than that. How does our brain do it? In this talk, I will argue that the hardest part of learning is not assigning values or learning policies, but rather deciding on the boundaries of similarity between experiences, which define the "states" that we learn about. I will show behavioral evidence that humans and animals are constantly engaged in this representation learning process, and suggest that in the not-too-distant future we may be able to read out these representations from the brain, and thereby find out how the brain has mastered this complex problem. I will formalize the problem of learning a state representation in terms of Bayesian inference with infinite-capacity models, and suggest that an understanding of the computational problem of representation learning can lead to insights into the machine learning problem of transfer learning, as well as psychological and neuroscientific questions about the interplay between memory and learning.
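One standard infinite-capacity construction of the kind the abstract alludes to is the Chinese restaurant process (CRP): a prior over partitions that lets the number of latent "states" grow with the data rather than being fixed in advance. The sketch below is a generic illustration of that prior, not code or a model from the talk:

```python
import random

# Minimal sketch of an infinite-capacity prior over state assignments:
# the Chinese restaurant process (CRP). Each experience joins an existing
# "state" in proportion to its popularity, or opens a new state with
# probability proportional to the concentration parameter alpha.
def crp_sample(n_observations, alpha, seed=0):
    """Assign each observation to an existing state (cluster) or a new one."""
    rng = random.Random(seed)
    counts = []          # how many observations each state has so far
    assignments = []
    for i in range(n_observations):
        # Existing states are weighted by their counts; index len(counts)
        # (weight alpha) stands for "open a brand-new state".
        weights = counts + [alpha]
        state = rng.choices(range(len(weights)), weights=weights)[0]
        if state == len(counts):
            counts.append(1)     # open a new state
        else:
            counts[state] += 1
        assignments.append(state)
    return assignments

print(crp_sample(10, alpha=1.0))
```

Small alpha yields a few large, coarse states; large alpha yields many fine-grained ones, which is one way to cast the "boundaries of similarity" question as Bayesian inference.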
Speaker
Yael Niv

Yael Niv received her MA in psychobiology from Tel Aviv University and her PhD from the Hebrew University of Jerusalem, having conducted a major part of her thesis research at the Gatsby Computational Neuroscience Unit at UCL. After a short postdoc at Princeton, she joined the faculty of the Psychology Department and the Princeton Neuroscience Institute. Her lab's research focuses on the neural and computational processes underlying reinforcement learning and decision-making in humans and animals, with a particular focus on representation learning. She recently co-founded the Rutgers-Princeton Center for Computational Cognitive Neuropsychiatry, and is currently taking her lab's research in the direction of computational psychiatry.
Invited Talk

On Bayesian Deep Learning and Deep Bayesian Learning

Dec 7, 9:50 AM - 10:40 AM Hall A
Probabilistic and Bayesian reasoning is one of the principal theoretical pillars of our understanding of machine learning. Over the last two decades, it has inspired a whole range of successful machine learning methods and influenced the thinking of many researchers in the community. On the other hand, in the last few years the rise of deep learning has completely transformed the field and led to a string of phenomenal, era-defining successes. In this talk I will explore the interface between these two perspectives on machine learning and, through a number of projects I have been involved in, explore questions like: How can probabilistic thinking help us understand deep learning methods or lead us to interesting new methods? Conversely, how can deep learning technologies help us develop advanced probabilistic methods?
Speaker
Yee Whye Teh

I am a Professor of Statistical Machine Learning at the Department of Statistics, University of Oxford and a Research Scientist at DeepMind. I am also an Alan Turing Institute Fellow and a European Research Council Consolidator Fellow. I obtained my Ph.D. at the University of Toronto (working with Geoffrey Hinton), and did postdoctoral work at the University of California at Berkeley (with Michael Jordan) and the National University of Singapore (as Lee Kuan Yew Postdoctoral Fellow). I was a Lecturer and then a Reader at the Gatsby Computational Neuroscience Unit, UCL, and a tutorial fellow at University College Oxford, prior to my current appointment. I am interested in the statistical and computational foundations of intelligence, and work on scalable machine learning, probabilistic models, Bayesian nonparametrics, and deep learning. I was programme co-chair of ICML 2017 and AISTATS 2010.