Invited Talks
How to Know
This talk will discuss Kidd’s research on how people come to know what they know. The world is a sea of information too vast for any one person to acquire entirely. How then do people navigate the information overload, and how do their decisions shape their knowledge and beliefs? Kidd will discuss research from her lab on the core cognitive systems people use to guide their learning about the world, including attention, curiosity, and metacognition (thinking about thinking). She will present evidence that people play an active role in their own learning, starting in infancy and continuing through adulthood, explain why we are curious about some things but not others, and show how our past experiences and existing knowledge shape our future interests. She will also discuss why people sometimes hold beliefs that are inconsistent with evidence available in the world, and how we might leverage our knowledge of human curiosity and learning to design systems that better support access to truth and reality.
Speaker
Celeste Kidd
Celeste Kidd is an Assistant Professor of Psychology at the University of California, Berkeley, where her lab investigates learning and belief formation. The Kidd Lab is one of the few in the world that combine technologically sophisticated behavioral experiments with computational models in order to broadly understand knowledge acquisition. Her lab employs a range of methods, including eye-tracking and touchscreen testing with human infants, to show how learners sample information from their environment and build knowledge gradually over time. Her work has been published in PNAS, Neuron, Psychological Science, Developmental Science, and elsewhere. Her lab has received funding from NSF, DARPA, Google, the Jacobs Foundation, the Human Frontiers Science Program, and the Templeton Foundation. She is a recipient of the Association for Psychological Science Rising Star designation, the Glushko Dissertation Prize in Cognitive Science, and the Cognitive Science Society Computational Modeling Prize in Perception/Action. Kidd was also named one of TIME Magazine's 2017 Persons of the Year as one of the "Silence Breakers" for her advocacy for better protections for students against sexual misconduct.
Veridical Data Science
Data science is a field of evidence-seeking that combines data with domain information to generate new knowledge. It addresses key considerations in AI regarding when and where data-driven solutions are reliable and appropriate. Such considerations require involvement from humans who collectively understand the domain and the tools used to collect, process, and model data. Throughout the data science life cycle, these humans make judgment calls to extract information from data. Veridical data science seeks to ensure that this information is reliable, reproducible, and clearly communicated so that empirical evidence may be evaluated in the context of human decisions. Three core principles provide the foundation for veridical data science: predictability, computability, and stability (PCS). In this talk we will present a unified PCS framework for data analysis, consisting of both a workflow and documentation, illustrated through iterative random forests and case studies from genomics and precision medicine.
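To make the predictability and stability principles concrete, here is a minimal, hypothetical sketch in NumPy (synthetic data and a plain least-squares model, not the PCS framework's actual tooling): predictability is checked on held-out data, and stability is checked by refitting under bootstrap perturbations of the training set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends on the first two of five features.
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

# Predictability: evaluate the fitted model on held-out data.
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]
coef, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
r2 = 1 - np.sum((y_te - X_te @ coef) ** 2) / np.sum((y_te - y_te.mean()) ** 2)

# Stability: refit under bootstrap perturbations of the training data
# and measure how much each coefficient moves across refits.
boot = []
for _ in range(100):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    c, *_ = np.linalg.lstsq(X_tr[idx], y_tr[idx], rcond=None)
    boot.append(c)
spread = np.std(boot, axis=0)

print(f"held-out R^2: {r2:.2f}")
print("coefficient spread across bootstraps:", np.round(spread, 2))
```

A conclusion would pass this toy screen only if it both predicts well out of sample and remains stable under the data perturbation.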
Speaker
Bin Yu
Bin Yu is Chancellor’s Professor in the Departments of Statistics and of Electrical Engineering & Computer Sciences at the University of California, Berkeley, and a former chair of Statistics at UC Berkeley. Her research focuses on the practice, algorithms, and theory of statistical machine learning and causal inference. Her group is engaged in interdisciplinary research with scientists from genomics, neuroscience, and precision medicine.
In order to augment empirical evidence for decision-making, they are investigating methods/algorithms (and associated statistical inference problems) such as dictionary learning, non-negative matrix factorization (NMF), EM and deep learning (CNNs and LSTMs), and heterogeneous effect estimation in randomized experiments (X-learner). Their recent algorithms include staNMF for unsupervised learning, iterative Random Forests (iRF) and signed iRF (s-iRF) for discovering predictive and stable high-order interactions in supervised learning, contextual decomposition (CD) and aggregated contextual decomposition (ACD) for phrase or patch importance extraction from an LSTM or a CNN.
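As a toy illustration of the NMF building block mentioned above, the following sketch implements the classic Lee–Seung multiplicative updates on synthetic low-rank data (this is plain NMF only; staNMF's stability-based model selection is not reproduced here, and all data are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic nonnegative data with exact rank-2 structure.
W_true = rng.uniform(size=(60, 2))
H_true = rng.uniform(size=(2, 40))
V = W_true @ H_true

# Multiplicative-update NMF (Lee & Seung) for V ≈ W H, with W, H >= 0.
k = 2
W = rng.uniform(size=(60, k))
H = rng.uniform(size=(k, 40))
eps = 1e-9  # guards against division by zero
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.4f}")
```

The updates keep both factors nonnegative by construction, which is what makes the learned parts interpretable in applications like gene-expression analysis.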
She is a member of the U.S. National Academy of Sciences and a Fellow of the American Academy of Arts and Sciences. She was a Guggenheim Fellow in 2006 and the Tukey Memorial Lecturer of the Bernoulli Society in 2012. She was President of the IMS (Institute of Mathematical Statistics) in 2013–2014 and the Rietz Lecturer of the IMS in 2016. She received the E. L. Scott Award from COPSS (Committee of Presidents of Statistical Societies) in 2018. Moreover, Yu was a founding co-director of the Microsoft Research Asia (MSR) Lab at Peking University and is a member of the scientific advisory board at the Alan Turing Institute, the UK's national institute for data science and AI.
Machine Learning Meets Single-Cell Biology: Insights and Challenges
Biology is becoming a data science. Recent single-cell profiling technologies are creating a data deluge, wherein thousands of variables are measured for each of hundreds of thousands to millions of cells in a single dataset. The proliferation of single-cell genomic and imaging data is creating opportunities to apply machine learning approaches in order to construct a human cell atlas with enormous potential to uncover new biology—by describing the incredible diversity of our constituent cell populations, how they function, how this diversity emerges from a single cell and how processes go awry in disease. We will present success stories and computational challenges raised by these new data modalities, in both health and disease settings. Examples will include methods from manifold learning, probabilistic graphical models and deep learning.
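To give a flavor of the "cells × genes" setting the talk describes, here is a minimal, hypothetical sketch in NumPy: PCA via SVD applied to simulated single-cell expression data, the standard first reduction step before manifold methods such as t-SNE or UMAP (the data, marker counts, and population structure are all invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "cells x genes" matrix: two simulated cell populations whose
# expression differs along a handful of marker genes.
n_cells, n_genes = 300, 50
labels = rng.integers(0, 2, n_cells)
X = rng.normal(size=(n_cells, n_genes))
X[labels == 1, :5] += 3.0  # population 1 upregulates 5 marker genes

# PCA via SVD: project cells onto the top two principal components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt[:2].T

# The simulated populations separate along PC1.
gap = abs(pcs[labels == 0, 0].mean() - pcs[labels == 1, 0].mean())
print(f"separation between populations on PC1: {gap:.1f}")
```

Real single-cell pipelines add normalization, gene selection, and nonlinear embeddings on top of this linear step, but the core idea of compressing hundreds of thousands of cells into a low-dimensional map is the same.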
Speaker
Dana Pe'er
Dana Pe'er is Chair of the Computational and Systems Biology Program at the Sloan Kettering Institute and Director of the Alan and Sandra Gerry Center for Metastasis and Tumor Ecosystems. The Pe’er lab develops machine learning approaches for the analysis and interpretation of single-cell data and uses these to study cancer, development, and immunology. Dana is a member of the Human Cell Atlas Organizing Committee and co-chair of its Analysis Working Group, and a recipient of the Burroughs Wellcome Fund Career Award, the NIH Director’s New Innovator Award, the NSF CAREER Award, a Stand Up To Cancer Innovative Research Grant, the Packard Fellowship in Science and Engineering, the Overton Prize, the NIH Director’s Pioneer Award, the Lenfest Distinguished Faculty Award, and the Ernst W. Bertner Memorial Award.
Social Intelligence
In the past decade, we’ve figured out how to build artificial neural nets that can achieve superhuman performance at almost any task for which we can define a loss function and gather or create a sufficiently large dataset. While this is unlocking a wealth of valuable applications, it has not created anything resembling a “who”, raising interesting new (and, sometimes, old) perspectives on what we really mean when we refer to “general intelligence” in big-brained animals, including ourselves. Public scrutiny has also intensified regarding a host of seemingly unrelated concerns: how can we make fair and ethical models? How can we have privacy in a world where our data are the fuel for training all of these models? Does AI at scale increase or curtail human agency? Will AI help or harm the planet ecologically, given the exponentially increasing computational loads we’ve started to see? Do we face a real risk of runaway AI without human value alignment? This talk will be technically grounded, but will also address these big questions and some non-obvious interconnections between them. We will begin with privacy and agency in today’s ML landscape, noting how new technologies for efficient on-device inference and federated computation offer ways to scale beneficial applications without incurring many of the downsides of current mainstream methods. We will then delve deeper into the limitations of the optimization framework for ML, and explore alternative approaches involving meta-learning, evolution strategies, populations, sociality, and cultural accumulation. We hypothesize that this relatively underexplored approach to general intelligence may be both fruitful in the near term and more optimistic in its long-term outlook.
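The federated computation mentioned above can be sketched in a few lines. Below is a minimal, hypothetical federated-averaging (FedAvg-style) loop in NumPy: each simulated client fits a local linear model on its own private data, and the server only ever sees and averages model weights, never raw data (all data, learning rates, and round counts are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth weights that each client's private data reflects.
true_w = np.array([2.0, -1.0, 0.5])

def client_update(w, n=100):
    # A client draws private data and takes local gradient steps;
    # only the updated weights leave the device.
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    for _ in range(20):
        grad = X.T @ (X @ w - y) / n
        w = w - 0.1 * grad
    return w

# Server loop: broadcast the global model, average the client models.
w = np.zeros(3)
for _ in range(5):
    client_models = [client_update(w.copy()) for _ in range(10)]
    w = np.mean(client_models, axis=0)

print("recovered weights:", np.round(w, 2))
```

The global model converges toward the shared signal even though no single party ever pools the underlying data, which is the property that makes this approach attractive for privacy and agency.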
Speaker
Blaise Aguera y Arcas
Blaise leads an organization at Google AI working on both basic research and new products. Among the team’s public contributions are MobileNets, Federated Learning, Coral, and many Android and Pixel AI features. He also founded the Artists and Machine Intelligence program and collaborates extensively with academic researchers in a variety of fields. Until 2014 Blaise was a Distinguished Engineer at Microsoft, where he worked in a variety of roles, from inventor to strategist, and led teams with strengths in machine learning, interaction design, prototyping, augmented reality, wearable computing, and graphics. Blaise has given TED talks on Seadragon and Photosynth (2007, 2012), Bing Maps (2010), and machine creativity (2016). In 2008, he was awarded MIT’s TR35 prize.
From System 1 Deep Learning to System 2 Deep Learning
Past progress in deep learning has concentrated mostly on learning from static datasets, largely for perception and other System 1 tasks, which humans perform intuitively and unconsciously. In recent years, however, a shift in research direction and new tools such as soft attention, together with progress in deep reinforcement learning, are opening the door to novel deep architectures and training frameworks for System 2 tasks (which are performed consciously), such as reasoning, planning, capturing causality, and achieving systematic generalization in natural language processing and other applications. Expanding deep learning from System 1 to System 2 tasks is important for achieving the old deep learning goal of discovering high-level abstract representations, because System 2 requirements will put pressure on representation learning to discover the kind of high-level concepts that humans manipulate with language. We argue that soft attention mechanisms are a key ingredient toward this objective: they focus computation on a few concepts at a time (a "conscious thought"), as per the consciousness prior and its associated assumption that many high-level dependencies can be approximately captured by a sparse factor graph. We also argue that the agent perspective in deep learning can help put more constraints on the learned representations, so that they capture affordances, causal variables, and model transitions in the environment. Finally, we propose that meta-learning, the modularization aspect of the consciousness prior, and the agent perspective on representation learning should facilitate re-use of learned components in novel ways (even if statistically improbable, as in counterfactuals), enabling more powerful forms of compositional generalization: out-of-distribution generalization based on the hypothesis of localized (in time, space, and concept space) changes in the environment due to agents' interventions.
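The soft attention the abstract refers to can be illustrated with a few lines of NumPy. This is a generic scaled dot-product attention sketch, not the consciousness-prior architecture itself; the dimensions and vectors are invented for illustration.

```python
import numpy as np

def soft_attention(query, keys, values):
    # Scaled dot-product attention: a differentiable, "soft" selection
    # that concentrates weight on the items most aligned with the query.
    scores = keys @ query / np.sqrt(query.shape[0])
    weights = np.exp(scores - scores.max())  # stable softmax
    weights /= weights.sum()
    return weights @ values, weights

rng = np.random.default_rng(0)
d = 8
keys = rng.normal(size=(5, d))
values = rng.normal(size=(5, d))

# A query strongly aligned with key 2 should draw most weight there.
query = 4.0 * keys[2]
out, weights = soft_attention(query, keys, values)
print("attention weights:", np.round(weights, 2))
```

Because the weights form a sharp but differentiable distribution over items, computation can focus on a few elements at a time while the whole mechanism remains trainable by gradient descent, which is the property the "conscious thought" bottleneck exploits.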
Speaker
Yoshua Bengio
Yoshua Bengio is Full Professor in the Department of Computer Science and Operations Research at Université de Montréal, as well as the Founder and Scientific Director of Mila and the Scientific Director of IVADO. He also holds a Canada CIFAR AI Chair. Considered one of the world’s leaders in artificial intelligence and deep learning, he is the recipient of the 2018 A.M. Turing Award, often referred to as the "Nobel Prize of computing". He is a Fellow of both the Royal Society of London and the Royal Society of Canada, an Officer of the Order of Canada, a Knight of the Legion of Honour of France, and a member of the UN’s Scientific Advisory Board for Independent Advice on Breakthroughs in Science and Technology.
Mapping Emotions: Discovering Structure in Mesoscale Electrical Brain Recordings
Brain-wide fluctuations in local field potential oscillations reflect emergent network-level signals that mediate behavior. Cracking the code whereby these oscillations coordinate in time and space (spatiotemporal dynamics) to represent complex behaviors would provide fundamental insights into how the brain signals emotional pathology. Using machine learning, we discover a spatiotemporal dynamic network that predicts the emergence of major depressive disorder (MDD)-related behavioral dysfunction in mice subjected to chronic social defeat stress. Activity patterns in this network originate in prefrontal cortex and ventral striatum, relay through amygdala and ventral tegmental area, and converge in ventral hippocampus. Activity in this network is increased by acute threat, and it is also enhanced in three independent models of MDD vulnerability. Finally, we demonstrate that this vulnerability network is biologically distinct from the networks that encode dysfunction after stress. Thus, these findings reveal a convergent mechanism through which MDD vulnerability is mediated in the brain.
Speaker
Kafui Dzirasa
Kafui Dzirasa completed a PhD in Neurobiology at Duke University. His research interests focus on understanding how changes in the brain produce neurological and mental illness, and his graduate work led to several distinctions, including the Somjen Award for Most Outstanding Dissertation Thesis, the Ruth K. Broad Biomedical Research Fellowship, the UNCF/Merck Graduate Science Research Fellowship, and the Wakeman Fellowship. Kafui obtained an MD from the Duke University School of Medicine in 2009, and he completed residency training in General Psychiatry in 2016.
Kafui received the Charles Johnson Leadership Award in 2007, and he was recognized as one of Ebony magazine’s 30 Young Leaders of the Future in February 2008. He has also been awarded the International Mental Health Research Organization Rising Star Award and the Sydney Baer Prize for Schizophrenia Research, and his laboratory was featured on CBS 60 Minutes in 2011. In 2016, he was awarded the inaugural Duke Medical Alumni Emerging Leader Award and the Presidential Early Career Award for Scientists and Engineers, the nation’s highest award for scientists and engineers in the early stages of their independent research careers. In 2017, he was recognized as one of the National Minority Quality Forum's 40 under 40 in Health and as UMBC's Engineering Alumni of the Year. He was inducted into the American Society for Clinical Investigation in 2019.
Kafui has served as an Associate Scientific Advisor for the journal Science Translational Medicine, and he was a member of the congressionally mandated Next Generation Research Initiative. He currently serves on the Editorial Advisory Board for TEDMED and the NIH Director’s guiding committee for the BRAIN Initiative. Kafui is an Associate Professor at Duke University with appointments in the Departments of Psychiatry and Behavioral Sciences, Neurobiology, Biomedical Engineering, and Neurosurgery. His ultimate goal is to combine his research, medical training, and community experience to improve outcomes for diverse communities suffering from neurological and psychiatric illness.
Agency + Automation: Designing Artificial Intelligence into Interactive Systems
Much contemporary rhetoric concerns the prospects and pitfalls of using artificial intelligence techniques to automate an increasing range of tasks, especially those once considered the purview of people alone. These accounts are often wildly optimistic, understating outstanding challenges while turning a blind eye to the human labor that undergirds and sustains ostensibly “automated” services. This long-standing focus on purely automated methods unnecessarily cedes a promising design space: one in which computational assistance augments and enriches, rather than replaces, people’s intellectual work. This tension between agency and automation poses vital challenges for design, engineering, and society at large. In this talk we will consider the design of interactive systems that enable adaptive collaboration among people and computational agents. We seek to balance the often complementary strengths and weaknesses of each, while promoting human control and skillful action. We will review case studies in three arenas—data wrangling, exploratory visualization, and natural language translation—that integrate proactive computational support into interactive systems. To improve outcomes and support learning by both people and machines, we will describe the use of shared representations of tasks, augmented with predictive models of human capabilities and actions.
Speaker
Jeff Heer
Jeffrey Heer is the Jerre D. Noe Endowed Professor of Computer Science & Engineering at the University of Washington, where he directs the Interactive Data Lab and conducts research on data visualization, human-computer interaction, and social computing. The visualization tools developed by Jeff and his collaborators (Vega, D3.js, Protovis, Prefuse) are used by researchers, companies, and thousands of data enthusiasts around the world. Jeff's research papers have received awards at the premier venues in Human-Computer Interaction and Visualization (ACM CHI, ACM UIST, IEEE InfoVis, IEEE VAST, EuroVis). Other honors include MIT Technology Review's TR35 (2009), a Sloan Fellowship (2012), the ACM Grace Murray Hopper Award (2016), and the IEEE Visualization Technical Achievement Award (2017). Jeff holds B.S., M.S., and Ph.D. degrees in Computer Science from UC Berkeley, which he then "betrayed" to join the Stanford faculty (2009–2013). He is also a co-founder of Trifacta, a provider of interactive tools for scalable data transformation.