Invited Talks
Predictive Learning
Deep learning has been at the root of significant progress in many application areas, such as computer perception and natural language processing. But almost all of these systems currently use supervised learning with human-curated labels. The challenge of the next several years is to let machines learn from raw, unlabeled data, such as images, videos and text. Intelligent systems today do not possess "common sense", which humans and animals acquire by observing the world, acting in it, and understanding its physical constraints. I will argue that allowing machines to learn predictive models of the world is key to significant progress in artificial intelligence, and a necessary component of model-based planning and reinforcement learning. The main technical difficulty is that the world is only partially predictable. A general formulation of unsupervised learning that deals with partial predictability will be presented. The formulation connects many well-known approaches to unsupervised learning, as well as new and exciting ones such as adversarial training.
Speaker
Yann LeCun
Intelligent Biosphere
The biosphere is a stupendously complex and poorly understood system, which we depend on for our survival, and which we are attacking on every front. Worrying. But what has that got to do with machine learning and AI? I will explain how the complexity and stability of the entire biosphere depend on, and select for, the intelligence of the individual organisms that comprise it; why simulations of ecological tasks in naturalistic environments could be an important test bed for Artificial General Intelligence, AGI; how new technology and machine learning are already giving us a deeper understanding of life on Earth; and why AGI is needed to maintain the biosphere in a state that is compatible with the continued existence of human civilization.
Speaker
Drew Purves
A teenage interest in the emergent dynamics of self-interested, evolving, interacting agents, sparked by the Artificial Life movement, was Drew's route into studying real ecology at Cambridge, York, and Princeton. Throughout, his focus was on developing realistic simulation models of ecological processes, something that he was able to scale up hugely during his 8 years as head of the Computational Ecology and Environmental Science group (CEES) at Microsoft Research, which developed many such models, at spatiotemporal scales from millimetres to global, seconds to centuries. CEES built the first fully data-constrained model of the global carbon cycle, and the Madingley Model, which simulates the key ecological interactions among nearly all macroorganisms on Earth. From a technical perspective, CEES specialized in Bayesian approaches to constraining esoteric nonlinear ecological models to heterogeneous data, developing new methods and software tools to facilitate such an approach, from algorithms such as Filzbach, to geotemporal software such as FetchClimate. In November 2015, after 20 years devoted to ecological research, Purves changed tack to join DeepMind's mission to create General Artificial Intelligence.
Engineering Principles From Stable and Developing Brains
Robust, efficient, and low-cost networks are advantageous in both biological and engineered systems. First, I will describe a joint computational-experimental approach to explore how neural networks in the brain form during development. I will discuss how the brain uses a very uncommon and surprising strategy to build networks and how this idea can be used to enhance the design and function of energy-efficient distributed networks. Second, I will describe how two fundamental plasticity rules (LTP and LTD) help neural networks approach desirable synaptic weight distributions in a gradient-descent-like manner. I will draw connections between different experimentally derived forms of these rules and distributed algorithms commonly used to regulate traffic flow on the Internet. Our work is motivated by the study of "algorithms in nature".
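The analogy between plasticity rules and Internet traffic regulation can be illustrated with a toy additive-increase/multiplicative-decrease (AIMD) update, the scheme TCP uses for congestion control. This is our illustrative sketch, not the speaker's code: the `alpha`, `beta`, and `capacity` parameters are made-up stand-ins for LTP-like strengthening, LTD-like weakening, and a resource constraint.

```python
# Toy AIMD update illustrating LTP/LTD-style weight regulation:
# active synapses strengthen additively (LTP-like); when total weight
# exceeds a resource limit, all weights weaken multiplicatively
# (LTD-like). Repeated updates drive unequal weights toward equality,
# just as AIMD equalizes competing traffic flows on the Internet.
def aimd_step(weights, active, alpha=0.1, beta=0.5, capacity=10.0):
    for i in active:
        weights[i] += alpha                    # additive increase
    if sum(weights) > capacity:
        weights = [w * beta for w in weights]  # multiplicative decrease
    return weights

weights = [1.0, 2.0, 4.0]
for _ in range(200):
    weights = aimd_step(weights, active=range(len(weights)))
# The spread between the largest and smallest weight shrinks
# geometrically with each multiplicative-decrease event.
```

The key property is that additive increase preserves differences between weights while multiplicative decrease shrinks them, so the dynamics converge toward a fair, capacity-respecting allocation.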
Speaker
Saket Navlakha
Saket Navlakha is an assistant professor at the Salk Institute for Biological Studies. He received an A.A. from Simon's Rock College in 2002, a B.S. from Cornell University in 2005, and a Ph.D. in computer science from the University of Maryland College Park in 2010. He was a post-doctoral researcher in the Machine Learning Department at Carnegie Mellon University from 2011 to 2014. His research interests include the design of algorithms for understanding large biological networks and the study of algorithms in nature.
Machine Learning and Likelihood-Free Inference in Particle Physics
Particle physics aims to answer profound questions about the fundamental building blocks of the Universe through enormous data sets collected at experiments like the Large Hadron Collider at CERN. Inference in this context involves two extremes. On one hand the theories of fundamental particle interactions are described by quantum field theory, which is elegant, highly constrained, and highly predictive. On the other hand, the observations come from interactions with complex sensor arrays with uncertain response, which lead to intractable likelihoods. Machine learning techniques with high-capacity models offer a promising set of tools for coping with the complexity of the data; however, we ultimately want to perform inference in the language of quantum field theory. I will discuss likelihood-free inference, generative models, adversarial training, and other recent progress in machine learning from this point of view.
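When the likelihood is intractable but the forward simulation can be run, one classical likelihood-free approach is rejection-based approximate Bayesian computation (ABC). The sketch below is our illustration, not the talk's method: the Gaussian `simulator`, the mean `summary` statistic, and the tolerance are made-up stand-ins for a complex detector simulation and its summary observables.

```python
# Minimal rejection ABC: draw parameters from the prior, simulate data,
# and keep parameters whose simulated summary statistic lands close to
# the observed one. The accepted draws approximate the posterior
# without ever evaluating an explicit likelihood.
import random

def simulator(theta, n=100, rng=random):
    # Stand-in for an intractable forward simulation with parameter theta.
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def summary(data):
    return sum(data) / len(data)  # summary statistic (sample mean)

def rejection_abc(observed, prior_sample, n_draws=5000, tol=0.1):
    obs_stat = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(summary(simulator(theta)) - obs_stat) < tol:
            accepted.append(theta)
    return accepted

random.seed(0)
observed = simulator(1.5)  # pretend data generated at theta = 1.5
posterior = rejection_abc(observed, lambda: random.uniform(-5, 5))
```

Rejection ABC is only the simplest member of this family; the talk's themes of generative models and adversarial training point to learned surrogates that scale far beyond hand-picked summary statistics.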
Speaker
Kyle Cranmer
Kyle Cranmer is a Professor of Physics, Computer Science, and Statistics and the Director of the Data Science Institute at the University of Wisconsin--Madison.
Dynamic Legged Robots
A new generation of high-performance robots is leaving the laboratory and entering the world. They operate in offices, homes and the field, where ordinary vehicles cannot go. They use sensors to see the world around them in order to navigate, interact and understand. Their agility, dexterity, autonomy and intelligence are evolving in ways that promise to free us from the tasks that no human should have to perform. The presentation will give a status report on the work Boston Dynamics is doing to help develop advanced mobile manipulation robots.
Speaker
Marc Raibert
Marc Raibert founded Boston Dynamics in 1992 as a spin-off from MIT. Boston Dynamics develops some of the world's most advanced dynamic robots, such as BigDog, Atlas, Cheetah and Spot. These robots are inspired by the remarkable ability of animals to move with agility, mobility, dexterity and speed. A key ingredient of these robots is their dynamic behavior, which contributes to their effectiveness in real-world tasks and their life-like qualities. Before starting Boston Dynamics, Raibert was Professor of Electrical Engineering and Computer Science at MIT from 1986 to 1995. Before that he was Associate Professor of Computer Science and a member of the Robotics Institute at Carnegie Mellon from 1980 to 1986. While at CMU and MIT Raibert founded the Leg Laboratory, a lab that helped establish the scientific basis for highly dynamic robots. Raibert has been a member of the National Academy of Engineering since 2008.
Learning About the Brain: Neuroimaging and Beyond
Quantifying mental states and identifying "statistical biomarkers" of mental disorders from neuroimaging data is an exciting and rapidly growing research area at the intersection of neuroscience and machine learning. Given the focus on gaining better insights about the brain functioning, rather than just learning accurate "black-box" predictors, interpretability and reproducibility of learned models become particularly important in this field. We will discuss promises and limitations of machine learning in neuroimaging, and lessons learned from applying various approaches, from sparse models to deep neural nets, to a wide range of neuroimaging studies involving pain perception, schizophrenia, cocaine addiction and other mental disorders. Moreover, we will also go "beyond the scanner" and discuss some recent work on inferring mental states from relatively cheap and easily collected data, such as speech and wearable sensors, with applications ranging from clinical settings ("computational psychiatry") to everyday life ("augmented human").
Speaker
Reproducible Research: the Case of the Human Microbiome
Modern data sets usually present multiple levels of heterogeneity: some apparent, such as the necessity of combining trees, graphs, contingency tables and continuous covariates; others concerning latent factors and gradients. The biggest challenge in the analyses of these data comes from the necessity to maintain and percolate uncertainty throughout the analyses. I will present a completely reproducible workflow that combines the typical kernel multidimensional scaling approaches with Bayesian nonparametrics to arrive at visualizations that present honest projection regions.
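The deterministic core of such a workflow is multidimensional scaling. As a hedged illustration (not the speaker's pipeline, which adds Bayesian nonparametric uncertainty on top), classical Torgerson MDS embeds samples from a distance matrix alone:

```python
# Classical (Torgerson) MDS: recover a k-dimensional embedding from a
# pairwise distance matrix by double-centering the squared distances
# and taking the top-k eigenpairs of the resulting Gram matrix.
import numpy as np

def classical_mds(D, k=2):
    """Embed points in k dimensions from an n x n distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]      # top-k eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Points on a line embed with their pairwise distances preserved.
X = np.array([[0.0], [1.0], [3.0]])
D = np.abs(X - X.T)
Y = classical_mds(D, k=1)
```

Replacing the Euclidean distances with an ecologically meaningful kernel gives the "kernel MDS" variants used for microbiome ordination; propagating posterior uncertainty through this projection is what yields honest projection regions rather than single points.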
This talk will include joint work with Kris Sankaran, Julia Fukuyama, Lan Huong Nguyen, Ben Callahan, Boyu Ren, Sergio Bacallado, Stefano Favaro, Lorenzo Trippa and the members of Dr Relman's research group at Stanford.
Speaker
Susan Holmes
Brought up in the French School of Data Analysis (Analyse des Données) in the 1980s, Professor Holmes specializes in exploring and visualizing complex biological data.
She is interested in integrating the information provided by phylogenetic trees, community interaction graphs and metabolic networks with sequencing data and clinical covariates. She uses computational statistics, and Bayesian methods to draw inferences about many complex biological phenomena such as the human microbiome or the interactions between the immune system and cancer.
She teaches using R and BioConductor and tries to make everything she does freely available.