Invited Talks
(Posner Lecture)
Yann LeCun

[ Area 1 + 2 ]

Deep learning has been at the root of significant progress in many application areas, such as computer perception and natural language processing. But almost all of these systems currently use supervised learning with human-curated labels. The challenge of the next several years is to let machines learn from raw, unlabeled data, such as images, videos and text. Intelligent systems today do not possess "common sense", which humans and animals acquire by observing the world, acting in it, and understanding its physical constraints. I will argue that allowing machines to learn predictive models of the world is key to significant progress in artificial intelligence, and a necessary component of model-based planning and reinforcement learning. The main technical difficulty is that the world is only partially predictable. A general formulation of unsupervised learning that deals with partial predictability will be presented. The formulation connects many well-known approaches to unsupervised learning, as well as new and exciting ones such as adversarial training.
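To make the adversarial-training idea concrete, here is a minimal one-dimensional sketch: a linear generator learns to match a target distribution N(3, 1) by playing against a logistic discriminator, with hand-derived gradients. All numbers, parameter names and the toy setup are invented for illustration; this is not the formulation presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
REAL_MU, REAL_SIGMA = 3.0, 1.0   # target "data" distribution (toy assumption)
a, b = 1.0, 0.0                  # generator: x = a * z + b, with z ~ N(0, 1)
w1, w0 = 0.0, 0.0                # discriminator: D(x) = sigmoid(w1 * x + w0)
lr, batch = 0.03, 128

def sigmoid(t):
    # Clip the logit so np.exp never overflows.
    return 1.0 / (1.0 + np.exp(-np.clip(t, -60.0, 60.0)))

for _ in range(4000):
    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    x_real = rng.normal(REAL_MU, REAL_SIGMA, batch)
    x_fake = a * rng.normal(size=batch) + b
    d_real, d_fake = sigmoid(w1 * x_real + w0), sigmoid(w1 * x_fake + w0)
    w1 += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    w0 += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake) (the non-saturating loss).
    z = rng.normal(size=batch)
    d_fake = sigmoid(w1 * (a * z + b) + w0)
    a += lr * np.mean((1 - d_fake) * w1 * z)
    b += lr * np.mean((1 - d_fake) * w1)

samples = a * rng.normal(size=10000) + b
print(samples.mean())   # the generated distribution's mean drifts toward 3
```

The discriminator supplies the learning signal the unlabeled data cannot: the generator never sees a label, only how distinguishable its samples are from real ones.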

Drew Purves

[ Area 1 + 2 ]

The biosphere is a stupendously complex and poorly understood system, which we depend on for our survival, and which we are attacking on every front. Worrying. But what has that got to do with machine learning and AI? I will explain how the complexity and stability of the entire biosphere depend on, and select for, the intelligence of the individual organisms that comprise it; why simulations of ecological tasks in naturalistic environments could be an important test bed for Artificial General Intelligence, AGI; how new technology and machine learning are already giving us a deeper understanding of life on Earth; and why AGI is needed to maintain the biosphere in a state that is compatible with the continued existence of human civilization.

Saket Navlakha

[ Area 1 + 2 ]

Robust, efficient, and low-cost networks are advantageous in both biological and engineered systems. First, I will describe a joint computational-experimental approach to explore how neural networks in the brain form during development. I will discuss how the brain uses a very uncommon and surprising strategy to build networks and how this idea can be used to enhance the design and function of energy-efficient distributed networks. Second, I will describe how two fundamental plasticity rules (LTP and LTD) help neural networks approach desirable synaptic weight distributions in a gradient-descent-like manner. I will derive connections between different experimentally-derived forms of these rules and distributed algorithms commonly used to regulate traffic flow on the Internet. Our work is motivated by the study of “algorithms in nature”.
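The connection between plasticity rules and Internet traffic control can be sketched with additive-increase/multiplicative-decrease (AIMD) dynamics, the scheme TCP uses for congestion control. In this assumed toy, additive increase plays the role of LTP and multiplicative decrease the role of LTD; all constants are invented for illustration.

```python
import numpy as np

CAPACITY = 100.0   # shared resource budget (bandwidth, or a synaptic budget)
ALPHA = 1.0        # additive increase per step (LTP-like)
BETA = 0.5         # multiplicative decrease factor (LTD-like)

w = np.array([5.0, 80.0])      # two "weights" starting far from a fair split
for _ in range(500):
    w = w + ALPHA              # both increase additively
    if w.sum() > CAPACITY:     # resource limit hit (congestion event)
        w = w * BETA           # both back off multiplicatively

print(w, w[0] / w[1])          # the two weights approach a fair 1:1 split
```

Additive increase leaves the gap between the two weights unchanged, while each multiplicative decrease halves it, so the gap shrinks geometrically and the weights converge to an equal share of the resource.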

Kyle Cranmer

[ Area 1 + 2 ]

Particle physics aims to answer profound questions about the fundamental building blocks of the Universe through enormous data sets collected at experiments like the Large Hadron Collider at CERN. Inference in this context involves two extremes. On one hand the theories of fundamental particle interactions are described by quantum field theory, which is elegant, highly constrained, and highly predictive. On the other hand, the observations come from interactions with complex sensor arrays with uncertain response, which lead to intractable likelihoods. Machine learning techniques with high-capacity models offer a promising set of tools for coping with the complexity of the data; however, we ultimately want to perform inference in the language of quantum field theory. I will discuss likelihood-free inference, generative models, adversarial training, and other recent progress in machine learning from this point of view.
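A minimal rejection-ABC sketch illustrates the likelihood-free setting: we infer a parameter of a "detector" whose likelihood we pretend is intractable, using only the ability to simulate from it. The toy simulator, prior, and tolerance below are all assumptions for illustration, nowhere near CERN scale.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(mu, n=200):
    """Forward model: generate n observations given parameter mu."""
    return rng.normal(mu, 1.0, n)

observed = simulator(2.5)          # pretend ground truth is mu = 2.5
obs_stat = observed.mean()         # summary statistic of the observation

# Rejection ABC: draw mu from the prior, keep it if simulated data
# lands within EPSILON of the observation in summary-statistic space.
EPSILON = 0.05
accepted = []
for _ in range(20000):
    mu = rng.uniform(0.0, 5.0)     # flat prior over the parameter
    if abs(simulator(mu).mean() - obs_stat) < EPSILON:
        accepted.append(mu)

posterior = np.array(accepted)
print(len(posterior), posterior.mean())   # concentrates near 2.5
```

The price of never evaluating a likelihood is simulation cost: most draws are rejected, which is one motivation for the learned surrogates and adversarial approaches mentioned above.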

Marc Raibert

[ Area 1 + 2 ]

A new generation of high-performance robots is leaving the laboratory and entering the world. They operate in offices, homes and the field, where ordinary vehicles cannot go. They use sensors to see the world around them in order to navigate, interact and understand. Their agility, dexterity, autonomy and intelligence are evolving in ways that promise to free us from the tasks that no human should have to perform. The presentation will give a status report on the work Boston Dynamics is doing to help develop advanced mobile manipulation robots.

Irina Rish

[ Area 1 + 2 ]

Quantifying mental states and identifying "statistical biomarkers" of mental disorders from neuroimaging data is an exciting and rapidly growing research area at the intersection of neuroscience and machine learning. Given the focus on gaining better insight into how the brain functions, rather than just learning accurate "black-box" predictors, interpretability and reproducibility of learned models become particularly important in this field. We will discuss promises and limitations of machine learning in neuroimaging, and lessons learned from applying various approaches, from sparse models to deep neural nets, to a wide range of neuroimaging studies involving pain perception, schizophrenia, cocaine addiction and other mental disorders. Moreover, we will also go "beyond the scanner" and discuss some recent work on inferring mental states from relatively cheap and easily collected data, such as speech and wearable sensors, with applications ranging from clinical settings ("computational psychiatry") to everyday life ("augmented human").
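Why sparse models aid interpretability can be shown with a small lasso-regression sketch, solved by iterative soft-thresholding (ISTA). The synthetic "voxels" and all constants are invented for illustration, not an actual neuroimaging pipeline: the point is that the fitted model names a handful of features instead of spreading weight over all of them.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.normal(size=(n, p))                 # n scans, p candidate "voxels"
true_w = np.zeros(p)
true_w[[3, 17, 40]] = [2.0, -1.5, 1.0]      # only 3 features truly matter
y = X @ true_w + 0.1 * rng.normal(size=n)

lam = 0.3                                   # L1 penalty strength
step = n / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz constant of grad
w = np.zeros(p)
for _ in range(500):
    grad = X.T @ (X @ w - y) / n            # gradient of the squared loss
    w = w - step * grad
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft threshold

support = np.flatnonzero(np.abs(w) > 0.1)
print(support)                              # the selected (interpretable) features
```

A dense predictor with the same accuracy would assign small weights everywhere; the sparse solution instead points to specific features, which is what makes a "statistical biomarker" readable.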

(Breiman Lecture)
Susan Holmes

[ Area 1 + 2 ]

Modern data sets usually present multiple levels of heterogeneity: some apparent, such as the necessity of combining trees, graphs, contingency tables and continuous covariates; others concerning latent factors and gradients. The biggest challenge in the analysis of these data comes from the necessity to maintain and percolate uncertainty throughout the analyses. I will present a completely reproducible workflow that combines the typical kernel multidimensional scaling approaches with Bayesian nonparametrics to arrive at visualizations that present honest projection regions.
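The kernel step underlying multidimensional scaling can be sketched as classical (Torgerson) MDS: double-center the squared distances to recover a Gram matrix, then embed via its top eigenvectors. The data here are synthetic and the setup is an assumed minimal illustration, not the Bayesian workflow of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))     # hidden "true" coordinates of 30 samples
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # distance matrix

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
eigval, eigvec = np.linalg.eigh(B)         # eigenvalues in ascending order
idx = np.argsort(eigval)[::-1][:2]         # keep the top-2 components
coords = eigvec[:, idx] * np.sqrt(np.maximum(eigval[idx], 0.0))

print(coords.shape)                        # a 2-D configuration: (30, 2)
```

An uncertainty-aware version would repeat this embedding over posterior or bootstrap resamples of D and draw a region for each point rather than a single dot, which is the spirit of the honest projection regions mentioned above.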

This talk will include joint work with Kris Sankaran, Julia Fukuyama, Lan Huong Nguyen, Ben Callahan, Boyu Ren, Sergio Bacallado, Stefano Favaro, Lorenzo Trippa and the members of Dr Relman's research group at Stanford.