Invited Talks
Blaise Aguera y Arcas

[ West Exhibition Hall C + B3 ]

In the past decade, we’ve figured out how to build artificial neural nets that can achieve superhuman performance at almost any task for which we can define a loss function and gather or create a sufficiently large dataset. While this is unlocking a wealth of valuable applications, it has not created anything resembling a “who”, raising interesting new (and, sometimes, old) perspectives on what we really mean when we refer to “general intelligence” in big-brained animals, including ourselves. Public scrutiny has also intensified regarding a host of seemingly unrelated concerns: How can we make fair and ethical models? How can we have privacy in a world where our data are the fuel for training all of these models? Does AI at scale increase or curtail human agency? Will AI help or harm the planet ecologically, given the exponentially increasing computational loads we’ve started to see? Do we face a real risk of runaway AI without human value alignment? This talk will be technically grounded, but will also address these big questions and some non-obvious interconnections between them. We will begin with privacy and agency in today’s ML landscape, noting how new technologies for efficient on-device inference and federated computation offer …
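
As a rough, hedged illustration of the federated computation mentioned above, the sketch below runs a generic federated-averaging loop on simulated data: each client updates a shared model locally and only the model parameters, never the raw data, are averaged. The clients, model, and hyperparameters are hypothetical and not drawn from the talk or any specific system.

```python
# Minimal federated-averaging sketch (NumPy only), illustrating the idea behind
# "federated computation": raw data stays on each client; only model updates move.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Simulated private datasets held on three clients (never pooled on a server).
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + 0.1 * rng.normal(size=100)
    clients.append((X, y))

def local_update(w, X, y, lr=0.1, steps=10):
    """A few steps of local gradient descent on one client's private data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(20):
    local_models = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_models, axis=0)   # server averages model weights only

print(np.round(w_global, 3))                   # approaches the true weights
```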

(Posner Lecture)
Yoshua Bengio

[ West Exhibition Hall C + B3 ]

Past progress in deep learning has concentrated mostly on learning from static datasets, largely for perception and other System 1 tasks which humans perform intuitively and unconsciously. However, in recent years, a shift in research direction and new tools such as soft attention and progress in deep reinforcement learning are opening the door to novel deep architectures and training frameworks for addressing System 2 tasks (those performed consciously), such as reasoning, planning, capturing causality and obtaining systematic generalization in natural language processing and other applications. Such an expansion of deep learning from System 1 tasks to System 2 tasks is important to achieve the old deep learning goal of discovering high-level abstract representations, because we argue that System 2 requirements will put pressure on representation learning to discover the kind of high-level concepts which humans manipulate with language. We argue that, towards this objective, soft attention mechanisms constitute a key ingredient to focus computation on a few concepts at a time (a "conscious thought"), as per the consciousness prior and its associated assumption that many high-level dependencies can be approximately captured by a sparse factor graph. We also argue how the agent perspective …
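
To make the role of soft attention concrete, here is a minimal sketch, in plain NumPy, of a query attending over a small set of hypothetical concept vectors and concentrating its weight on a few of them. The dimensions and names are illustrative and not taken from the consciousness-prior work itself.

```python
# A minimal soft-attention sketch: a query scores a set of "concept" vectors,
# a softmax turns the scores into weights, and the output is a weighted mix.
import numpy as np

def soft_attention(query, concepts, temperature=1.0):
    """Return attention weights over `concepts` and the attended summary.

    query:    (d,)   vector representing the current focus of computation
    concepts: (n, d) matrix of candidate high-level concept vectors
    """
    scores = concepts @ query / (np.sqrt(query.shape[0]) * temperature)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over concepts
    attended = weights @ concepts            # convex combination of concepts
    return weights, attended

rng = np.random.default_rng(0)
concepts = rng.normal(size=(8, 16))          # 8 hypothetical concepts, dim 16
query = concepts[2] + 0.1 * rng.normal(size=16)
weights, attended = soft_attention(query, concepts, temperature=0.3)
print(np.round(weights, 3))                  # mass concentrates on a few concepts
```

Lowering the temperature sharpens the softmax, which is one simple way to keep computation focused on only a handful of concepts at a time.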

Jeff Heer

[ West Exhibition Hall C + B3 ]

Much contemporary rhetoric regards the prospects and pitfalls of using artificial intelligence techniques to automate an increasing range of tasks, especially those once considered the purview of people alone. These accounts are often wildly optimistic, understating outstanding challenges while turning a blind eye to the human labor that undergirds and sustains ostensibly “automated” services. This long-standing focus on purely automated methods unnecessarily cedes a promising design space: one in which computational assistance augments and enriches, rather than replaces, people’s intellectual work. This tension between agency and automation poses vital challenges for design, engineering, and society at large. In this talk we will consider the design of interactive systems that enable adaptive collaboration among people and computational agents. We seek to balance the often complementary strengths and weaknesses of each, while promoting human control and skillful action. We will review case studies in three arenas—data wrangling, exploratory visualization, and natural language translation—that integrate proactive computational support into interactive systems. To improve outcomes and support learning by both people and machines, I will describe the use of shared representations of tasks augmented with predictive models of human capabilities and actions.

(Breiman Lecture)
Bin Yu

[ West Exhibition Hall C + B3 ]

Data science is a field of evidence-seeking that combines data with domain information to generate new knowledge. It addresses key considerations in AI regarding when and where data-driven solutions are reliable and appropriate. Such considerations require involvement from humans who collectively understand the domain and tools used to collect, process, and model data. Throughout the data science life cycle, these humans make judgment calls to extract information from data. Veridical data science seeks to ensure that this information is reliable, reproducible, and clearly communicated so that empirical evidence may be evaluated in the context of human decisions. Three core principles, predictability, computability, and stability (PCS), provide the foundation for veridical data science. In this talk we will present a unified PCS framework for data analysis, consisting of both a workflow and documentation, illustrated through iterative random forests and case studies from genomics and precision medicine.
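
As a toy illustration of the stability principle in PCS (not the speaker's own PCS or iterative random forest software), the sketch below refits a random forest on bootstrap perturbations of synthetic data and reports how stable the feature importances are across refits. The dataset and model settings are hypothetical.

```python
# Stability check sketch: perturb the data (bootstrap), refit, and see whether
# the conclusions (here, feature importances) hold up across perturbations.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           random_state=0)
rng = np.random.default_rng(0)

importances = []
for _ in range(20):                               # 20 bootstrap perturbations
    idx = rng.integers(0, len(y), size=len(y))
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(X[idx], y[idx])
    importances.append(rf.feature_importances_)

importances = np.array(importances)
mean, sd = importances.mean(axis=0), importances.std(axis=0)
for j in np.argsort(mean)[::-1][:5]:
    print(f"feature {j}: importance {mean[j]:.3f} +/- {sd[j]:.3f}")
```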

Celeste Kidd

[ West Exhibition Hall C + B3 ]

This talk will discuss Kidd’s research about how people come to know what they know. The world is a sea of information too vast for any one person to acquire entirely. How then do people navigate the information overload, and how do their decisions shape their knowledge and beliefs? In this talk, Kidd will discuss research from her lab about the core cognitive systems people use to guide their learning about the world—including attention, curiosity, and metacognition (thinking about thinking). The talk will discuss the evidence that people play an active role in their own learning, starting in infancy and continuing through adulthood. Kidd will explain why we are curious about some things but not others, and how our past experiences and existing knowledge shape our future interests. She will also discuss why people sometimes hold beliefs that are inconsistent with evidence available in the world, and how we might leverage our knowledge of human curiosity and learning to design systems that better support access to truth and reality.

Dana Pe'er

[ West Exhibition Hall C + B3 ]

Biology is becoming a data science. Recent single-cell profiling technologies are creating a data deluge, wherein thousands of variables are measured for each of hundreds of thousands to millions of cells in a single dataset. The proliferation of single-cell genomic and imaging data is creating opportunities to apply machine learning approaches in order to construct a human cell atlas with enormous potential to uncover new biology—by describing the incredible diversity of our constituent cell populations, how they function, how this diversity emerges from a single cell and how processes go awry in disease. We will present success stories and computational challenges raised by these new data modalities, in both health and disease settings. Examples will include methods from manifold learning, probabilistic graphical models and deep learning.
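
As a small, hedged example of the manifold-learning methods mentioned above (applied here to synthetic cells-by-genes data, not a real atlas or the speaker's pipelines), the sketch below builds a nearest-neighbor graph over simulated cells and computes a two-dimensional spectral embedding, a common first step for visualizing single-cell structure.

```python
# Manifold-learning sketch on synthetic single-cell-style data: simulate a noisy
# one-dimensional differentiation trajectory and embed the cells in 2D.
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)
n_cells, n_genes = 300, 50

# Simulate expression driven by a latent pseudotime plus measurement noise.
pseudotime = rng.uniform(0, 1, size=n_cells)
loadings = rng.normal(size=(1, n_genes))
expression = pseudotime[:, None] @ loadings + 0.3 * rng.normal(size=(n_cells, n_genes))

embedder = SpectralEmbedding(n_components=2, affinity="nearest_neighbors",
                             n_neighbors=15, random_state=0)
coords = embedder.fit_transform(expression)       # (n_cells, 2) low-dim embedding
print(coords[:3])
```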

Kafui Dzirasa

[ West Exhibition Hall C + B3 ]

Brain-wide fluctuations in local field potential oscillations reflect emergent network-level signals that mediate behavior. Cracking the code whereby these oscillations coordinate in time and space (spatiotemporal dynamics) to represent complex behaviors would provide fundamental insights into how the brain signals emotional pathology. Using machine learning, we discover a spatiotemporal dynamic network that predicts the emergence of major depressive disorder (MDD)-related behavioral dysfunction in mice subjected to chronic social defeat stress. Activity patterns in this network originate in prefrontal cortex and ventral striatum, relay through amygdala and ventral tegmental area, and converge in ventral hippocampus. Activity in this network is increased by acute threat, and it is also enhanced in three independent models of MDD vulnerability. Finally, we demonstrate that this vulnerability network is biologically distinct from the networks that encode dysfunction after stress. Thus, these findings reveal a convergent mechanism through which MDD vulnerability is mediated in the brain.