


Invited Talks
Joelle Pineau (Posner Lecture)

[ Room 220 CD ]

We have seen significant achievements with deep reinforcement learning in recent years. Yet reproducing results for state-of-the-art deep RL methods is seldom straightforward. High variance of some methods can make learning particularly difficult when environments or rewards are strongly stochastic. Furthermore, results can be brittle to even minor perturbations in the domain or experimental procedure. In this talk, I will review challenges that arise in experimental techniques and reporting procedures in deep RL. I will also describe several recent results and guidelines designed to make future results more reproducible, reusable and robust.
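To make the variance issue concrete, here is a minimal, self-contained sketch of the kind of multi-seed reporting the deep RL reproducibility literature recommends: run each method over several random seeds and report the mean score with a bootstrap confidence interval rather than a single run. This is illustrative only; the algorithm names and scores below are hypothetical and the snippet is not taken from the talk.

    # Minimal sketch (hypothetical data): why per-seed variance matters when
    # comparing deep RL results, and how to report a confidence interval.
    import numpy as np

    rng = np.random.default_rng(0)

    def evaluate(true_mean, true_std, n_seeds=10):
        """Stand-in for training and evaluating an RL agent once per random seed.
        Returns one final score per seed (here, synthetic Gaussian scores)."""
        return rng.normal(true_mean, true_std, size=n_seeds)

    def bootstrap_ci(scores, n_boot=10_000, alpha=0.05):
        """Percentile-bootstrap confidence interval for the mean score."""
        means = [rng.choice(scores, size=len(scores), replace=True).mean()
                 for _ in range(n_boot)]
        lower, upper = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return scores.mean(), lower, upper

    # Two hypothetical algorithms with the same true mean but high per-seed variance:
    for name, std in [("algo_A", 30.0), ("algo_B", 30.0)]:
        mean, lower, upper = bootstrap_ci(evaluate(100.0, std))
        print(f"{name}: mean={mean:.1f}, 95% CI=({lower:.1f}, {upper:.1f})")

With only one seed, either method could land anywhere inside (or outside) these intervals, which is one reason single-run comparisons of high-variance methods are so brittle.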

Michael Levin

[ Room 220 CD ]

Brains are not unique in their computational abilities. Bacteria, plants, and unicellular organisms exhibit learning and plasticity; nervous systems speed-optimized a form of information processing that is ubiquitous across the tree of life and was already occurring at multiple scales before neurons evolved. Non-neural computation is especially critical for enabling individual cells to coordinate their activity toward the creation and repair of complex large-scale anatomies. We have found that bioelectric signaling enables all types of cells to form networks that store pattern memories that guide large-scale growth and form. In this talk, I will introduce the basics of developmental bioelectricity, and show how novel conceptual and methodological advances have enabled rewriting pattern memories that guide morphogenesis without genomic editing. In effect, these strategies allow reprogramming the bioelectric software that implements multicellular patterning goal states. I will show examples of applications in regenerative medicine and cognitive neuroplasticity, and illustrate future impacts on synthetic bioengineering, robotics, and machine learning.

Edward W Felten

[ Room 220 CD ]

AI and Machine Learning are already having a big impact on the world. Policymakers have noticed, and they are starting to formulate laws and regulations, and to convene conversations, about how society will govern the development of these technologies. This talk will give an overview of how policymakers deal with new technologies, how that process might develop in the case of AI/ML, and why constructive engagement with the policy process will lead to better outcomes for the field, for governments, and for society.

Kunle Olukotun

[ Room 220 CD ]

The use of machine learning to generate models from data is replacing traditional software development for many applications. This fundamental shift in how we develop software, known as Software 2.0, has provided dramatic improvements in the quality and ease of deployment for these applications. The continued success and expansion of the Software 2.0 approach must be powered by the availability of powerful, efficient and flexible computer systems that are tailored for machine learning applications. This talk will describe a design approach that optimizes computer systems to match the requirements of machine learning applications. The full-stack design approach integrates machine learning algorithms that are optimized for the characteristics of applications and the strengths of modern hardware, domain-specific languages and advanced compilation technology designed for programmability and performance, and hardware architectures that achieve both high flexibility and high energy efficiency.

David Spiegelhalter (Breiman Lecture)

[ Room 220 CD ]

The demand for transparency, explainability and empirical validation of automated advice systems is not new. Back in the 1980s there were (occasionally acrimonious) discussions between proponents of rule-based systems and those based on statistical models, partly over which were more transparent. A four-stage process for evaluating medical advice systems was established, based on that used in drug development. More recently, EU legislation has focused attention on the ability of algorithms to, if required, show their workings. Inspired by Onora O'Neill's emphasis on demonstrating trustworthiness, and her idea of 'intelligent transparency', we should ideally be able to check (a) the empirical basis for the algorithm, (b) its past performance, (c) the reasoning behind its current claim, including tipping points and what-ifs, and (d) the uncertainty around its current claim, including whether the latest case comes within its remit. Furthermore, these explanations should be accessible at different levels of expertise.
These ideas will be illustrated by the Predict 2.1 system for women choosing adjuvant therapy following surgery for breast cancer, which is based on a competing-risks survival regression model, and has been developed in collaboration with professional psychologists and in close cooperation with clinicians and patients. Predict 2.1 has four levels of …
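For readers unfamiliar with the competing-risks framing mentioned above, the following minimal sketch shows how cumulative incidence splits the probability of an event between two competing causes under constant cause-specific hazards. The hazards are hypothetical and this is not the Predict 2.1 model, only an illustration of the general idea.

    # Minimal sketch (hypothetical hazards, not the Predict 2.1 model):
    # cumulative incidence under two competing risks with constant
    # cause-specific hazards h1 and h2. With overall survival
    # S(t) = exp(-(h1 + h2) * t), the cumulative incidence of cause k is
    # CIF_k(t) = h_k / (h1 + h2) * (1 - S(t)).
    import numpy as np

    h1, h2 = 0.03, 0.01          # hypothetical yearly hazards for cause 1 and cause 2
    years = np.arange(0, 11)     # 10-year horizon

    surv = np.exp(-(h1 + h2) * years)      # probability of no event by year t
    cif1 = h1 / (h1 + h2) * (1 - surv)     # probability of a cause-1 event by year t
    cif2 = h2 / (h1 + h2) * (1 - surv)     # probability of a cause-2 event by year t

    for t, s, c1, c2 in zip(years, surv, cif1, cif2):
        print(f"year {int(t):2d}: event-free {s:.3f}, cause-1 {c1:.3f}, cause-2 {c2:.3f}")

A competing-risks regression model, like the one described above, would additionally let these cause-specific hazards depend on patient and treatment characteristics rather than being constants.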

Ayanna Howard

[ Room 220 CD ]

As intelligent systems become more fully interactive with humans during the performance of our day-to-day activities, the role of trust must be examined more carefully. Trust conveys the concept that, when interacting with intelligent systems, humans tend to exhibit behaviors similar to those they exhibit when interacting with other humans, and thus may misunderstand the risks associated with deferring their decisions to a machine. Bias further compounds this risk of trust, or overtrust, in that these systems learn by mimicking our own thinking processes and so inherit our own implicit biases. In this talk, we will discuss this phenomenon through the lens of intelligent systems that interact with people in scenarios that are realizable in the near term.

Laura Gomez

[ Room 220 CD ]

My talk will be about how lack of diversity → biased algorithms → faulty products → unethical tech.