Invited Talks
Accountability and Algorithmic Bias: Why Diversity and Inclusion Matters
My talk will trace how a lack of diversity leads to biased algorithms, which lead to faulty products and, ultimately, to unethical tech.
Speaker
Laura Gomez
Laura I. Gómez is the Founder and CEO of Atipica, a venture-backed startup, and a founding member of Project Include.
As a young immigrant to Silicon Valley, Laura grew up in Redwood City, the daughter of a single mother who was a nanny to several local tech leaders. At the age of 17, Laura had her first internship with Hewlett-Packard, which started her career in tech.
Laura has worked at Google, YouTube, Jawbone, and Twitter, where she was a founding member of the International team, which led Twitter’s product expansion into 50 languages and dozens of countries.
Her passion for diversity in tech extends to her startup, Atipica, as well as to her involvement with several nonprofit organizations. She leads data-driven initiatives that help top-level leaders understand the business benefits of machine learning in recruiting and diversity.
She has been recognized by the Department of State and former Secretary of State Hillary Clinton for her involvement in the TechWomen Program; she was the only female leader at Twitter to participate in 2012. She also serves on the board of the Institute for Technology and Public Policy alongside Lt. Governor Gavin Newsom and former Secretary of State George P. Shultz.
Machine Learning Meets Public Policy: What to Expect and How to Cope
AI and Machine Learning are already having a big impact on the world. Policymakers have noticed, and they are starting to formulate laws and regulations, and to convene conversations, about how society will govern the development of these technologies. This talk will give an overview of how policymakers deal with new technologies, how the process might develop in the case of AI/ML, and why constructive engagement with the policy process will lead to better outcomes for the field, for governments, and for society.
Speaker
Edward W Felten
Edward W. Felten is the Robert E. Kahn Professor of Computer Science and Public Affairs at Princeton University, and the founding Director of Princeton's Center for Information Technology Policy. He is a member of the United States Privacy and Civil Liberties Oversight Board. In 2015-2017 he served in the White House as Deputy U.S. Chief Technology Officer. In 2011-12 he served as the first Chief Technologist at the U.S. Federal Trade Commission. His research interests include computer security and privacy, and technology law and policy. He has published more than 150 papers in the research literature, and three books. He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences, and is a Fellow of the ACM.
What Bodies Think About: Bioelectric Computation Outside the Nervous System, Primitive Cognition, and Synthetic Morphology
Brains are not unique in their computational abilities. Bacteria, plants, and unicellular organisms exhibit learning and plasticity; nervous systems merely speed-optimized a kind of information processing that is ubiquitous across the tree of life and was already occurring at multiple scales before neurons evolved. Non-neural computation is especially critical for enabling individual cells to coordinate their activity toward the creation and repair of complex large-scale anatomies. We have found that bioelectric signaling enables all types of cells to form networks that store pattern memories that guide large-scale growth and form. In this talk, I will introduce the basics of developmental bioelectricity, and show how novel conceptual and methodological advances have enabled the rewriting of pattern memories that guide morphogenesis, without genomic editing. In effect, these strategies allow reprogramming of the bioelectric software that implements multicellular patterning goal states. I will show examples of applications in regenerative medicine and cognitive neuroplasticity, and illustrate future impacts on synthetic bioengineering, robotics, and machine learning.
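As a loose computational analogy for a network that stores a pattern memory as an attractor and restores it after damage (this is not Levin's bioelectric model; the pattern, network size, and damage level below are invented for illustration), a minimal Hopfield-style sketch in Python:

```python
import numpy as np

# Illustrative analogy only: a Hopfield-style associative memory stores one
# target "pattern memory" as an attractor and recovers it from a damaged copy.
rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=64)        # the stored pattern memory

W = np.outer(pattern, pattern).astype(float)  # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)                      # no self-connections

state = pattern.copy()
damaged = rng.choice(64, size=16, replace=False)
state[damaged] *= -1                          # corrupt 25% of the pattern

for _ in range(10):                           # recall: repeated threshold updates
    state = np.sign(W @ state)
    state[state == 0] = 1

print("pattern restored:", bool(np.array_equal(state, pattern)))
```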
Speaker
Michael Levin
Michael Levin is a professor at Tufts University, and director of the Allen Discovery Center at Tufts (allencenter.tufts.edu), working on computation in the medium of living systems. His original training was in computer science; his interest in AI and philosophy of mind led to a life-long focus on embryogenesis and regeneration as quintessential systems in which to understand how biophysical processes underlie complex adaptive decision-making. He received a Ph.D. in genetics from Harvard Medical School in 1996. Now, his group (www.drmichaellevin.org) works at the interface between developmental biology, basal cognition, and computational neuroscience. Projects include the dynamics of memories during complete brain regeneration (how can a malleable living medium store cognitive content?), behavioral studies of artificial living machines and radically-altered anatomies (how can brains learn to operate bodies with novel sensory/motor structures?), induction of complex organ regeneration in non-regenerative species, editing of body pattern (e.g., inducing complete eyes to form out of gut tissue, repairing birth defects, and creating permanently-propagating 2-headed worms without genomic editing), and tumor reprogramming. All of these projects are being pushed toward applications in regenerative medicine, as well as inspiring novel machine learning architectures and robotics approaches. The computational side of the group works on extending connectionist paradigms beyond Neural Networks, and on creating software platforms for automating the inference of insights into pattern control (a bioinformatics of shape).
Reproducible, Reusable, and Robust Reinforcement Learning
We have seen significant achievements with deep reinforcement learning in recent years. Yet reproducing results for state-of-the-art deep RL methods is seldom straightforward. High variance of some methods can make learning particularly difficult when environments or rewards are strongly stochastic. Furthermore, results can be brittle to even minor perturbations in the domain or experimental procedure. In this talk, I will review challenges that arise in experimental techniques and reporting procedures in deep RL. I will also describe several recent results and guidelines designed to make future results more reproducible, reusable and robust.
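As a toy, hedged illustration of the reporting point above (the learner, reward distributions, and seed count below are invented, and a simple bandit stands in for a deep RL benchmark), the same agent run under different random seeds can produce noticeably different returns, so results are better reported as a mean and spread over many seeds than as a single run:

```python
import numpy as np

def run(seed, true_means=(0.2, 0.25, 0.5), steps=2000, eps=0.1):
    """Epsilon-greedy agent on a 3-armed Gaussian bandit; returns average reward."""
    rng = np.random.default_rng(seed)
    q = np.zeros(len(true_means))      # value estimates per arm
    n = np.zeros(len(true_means))      # pull counts per arm
    total = 0.0
    for _ in range(steps):
        a = rng.integers(len(q)) if rng.random() < eps else int(np.argmax(q))
        r = rng.normal(true_means[a], 1.0)   # noisy reward
        n[a] += 1
        q[a] += (r - q[a]) / n[a]            # incremental mean update
        total += r
    return total / steps

# Run the identical experiment under several seeds and report the spread.
returns = np.array([run(seed) for seed in range(10)])
print(f"mean return {returns.mean():.3f} +/- {returns.std(ddof=1):.3f} over {len(returns)} seeds")
```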
Speaker
Joelle Pineau
Joelle Pineau is an Associate Professor and William Dawson Scholar at McGill University where she co-directs the Reasoning and Learning Lab. She also leads the Facebook AI Research lab in Montreal, Canada. She holds a BASc in Engineering from the University of Waterloo, and an MSc and PhD in Robotics from Carnegie Mellon University. Dr. Pineau's research focuses on developing new models and algorithms for planning and learning in complex partially-observable domains. She also works on applying these algorithms to complex problems in robotics, health care, games and conversational agents. She serves on the editorial board of the Journal of Artificial Intelligence Research and the Journal of Machine Learning Research and is currently President of the International Machine Learning Society. She is a recipient of NSERC's E.W.R. Steacie Memorial Fellowship (2018), a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a Senior Fellow of the Canadian Institute for Advanced Research (CIFAR) and in 2016 was named a member of the College of New Scholars, Artists and Scientists by the Royal Society of Canada.
Investigations into the Human-AI Trust Phenomenon
As intelligent systems become more fully interactive with humans during the performance of our day-to-day activities, the role of trust must be examined more carefully. Trust conveys the concept that when interacting with intelligent systems, humans tend to exhibit similar behaviors as when interacting with other humans and thus may misunderstand the risks associated with deferring their decisions to a machine. Bias further impacts this potential risk for trust, or overtrust, in that these systems are learning by mimicking our own thinking processes, inheriting our own implicit biases. In this talk, we will discuss this phenomenon through the lens of intelligent systems that interact with people in scenarios that are realizable in the near-term.
Speaker
Ayanna Howard
Ayanna Howard, Ph.D. is the Linda J. and Mark C. Smith Professor and Chair of the School of Interactive Computing in the College of Computing at the Georgia Institute of Technology. She also holds a faculty appointment in the School of Electrical and Computer Engineering. Dr. Howard’s career focus is on intelligent technologies that must adapt to and function within a human-centered world. Her work, which encompasses advancements in artificial intelligence (AI), assistive technologies, and robotics, has resulted in over 250 peer-reviewed publications across a number of projects, from healthcare robots in the home to AI-powered STEM apps for children with diverse learning needs. Dr. Howard received her B.S. in Engineering from Brown University, and her M.S. and Ph.D. in Electrical Engineering from the University of Southern California. To date, her accomplishments have been highlighted through a number of awards and articles, including features in USA Today, Upscale, and TIME Magazine, as well as recognition as one of the 23 most powerful women engineers in the world by Business Insider. In 2013, she founded Zyrobotics, which is currently licensing technology derived from her research and has released its first suite of STEM educational products to engage children of all abilities. Prior to Georgia Tech, Dr. Howard was a senior robotics researcher at NASA's Jet Propulsion Laboratory. She has also served as the Associate Director of Research for the Institute for Robotics and Intelligent Machines, Chair of the Robotics Ph.D. program, and the Associate Chair for Faculty Development in the School of Electrical and Computer Engineering at Georgia Tech.
Making Algorithms Trustworthy: What Can Statistical Science Contribute to Transparency, Explanation and Validation?
The demand for transparency, explainability and empirical validation of automated advice systems is not new. Back in the 1980s there were (occasionally acrimonious) discussions between proponents of rule-based systems and those based on statistical models, partly based on which were more transparent. A four-stage process of evaluation of medical advice systems was established, based on that used in drug development. More recently, EU legislation has focused attention on the ability of algorithms to, if required, show their workings. Inspired by Onora O'Neill's emphasis on demonstrating trustworthiness, and her idea of 'intelligent transparency', we should ideally be able to check (a) the empirical basis for the algorithm, (b) its past performance, (c) the reasoning behind its current claim, including tipping points and what-ifs, and (d) the uncertainty around its current claim, including whether the latest case comes within its remit. Furthermore, these explanations should be open to different levels of expertise.
These ideas will be illustrated by the Predict 2.1 system for women choosing adjuvant therapy following surgery for breast cancer, which is based on a competing-risks survival regression model, and has been developed in collaboration with professional psychologists in close cooperation with clinicians and patients. Predict 2.1 has four levels of explanation of the claimed potential benefits and harms of alternative treatments, and is currently used in around 25,000 clinical decisions a month worldwide.
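As a rough sketch of the kind of competing-risks quantity such a tool rests on (the data, causes, and helper function below are invented, and Predict's actual regression model is not reproduced here), the Aalen-Johansen estimator gives the cumulative incidence of one cause of death in the presence of a competing cause:

```python
import numpy as np

def cumulative_incidence(time, cause, horizon, cause_of_interest=1):
    """Aalen-Johansen cumulative incidence at `horizon`.
    time: follow-up time per patient; cause: 0 = censored, 1/2 = event cause."""
    time, cause = np.asarray(time, float), np.asarray(cause, int)
    surv = 1.0   # overall event-free survival just before the current time
    cif = 0.0    # cumulative incidence for the cause of interest
    for t in np.unique(time[(cause > 0) & (time <= horizon)]):
        at_risk = np.sum(time >= t)
        d_any = np.sum((time == t) & (cause > 0))
        d_k = np.sum((time == t) & (cause == cause_of_interest))
        cif += surv * d_k / at_risk       # chance of surviving to t, then failing from cause k
        surv *= 1.0 - d_any / at_risk     # update overall survival past t
    return cif

# Invented toy data: follow-up years and event cause per patient.
t = [2.1, 3.4, 4.0, 5.5, 6.2, 7.0, 8.3, 9.1, 9.9, 10.0]
c = [1,   0,   2,   1,   0,   1,   2,   0,   1,   0]
print(f"10-year cumulative incidence of cause 1: {cumulative_incidence(t, c, 10.0):.2f}")
```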
Speaker
David Spiegelhalter
David Spiegelhalter is a statistician in the Centre for Mathematical Sciences at Cambridge University, and currently President of the Royal Statistical Society. His background is in Bayesian statistics, and after working in computer-aided diagnosis in the early 1980s, he jointly developed the Lauritzen-Spiegelhalter algorithm for exact evidence propagation in Bayesian networks. He then led the team behind the BUGS software for MCMC analysis of Bayesian models.
He is now Chair of the Winton Centre for Risk and Evidence Communication, which aims to improve the way that statistical evidence is used by health professionals, patients, lawyers and judges, media and policy-makers. This work includes the development and evaluation of front-ends for algorithms used in patient care, focusing on explanation and transparency, particularly regarding uncertainty.
He has over 200 refereed publications and is co-author of 6 textbooks, as well as The Norm Chronicles (with Michael Blastland) and Sex by Numbers. He works extensively with the media, and presented the BBC4 documentaries “Tails You Win: The Science of Chance” and the award-winning “Climate Change by Numbers”. He was elected Fellow of the Royal Society in 2005, and knighted in 2014 for services to medical statistics. Perhaps his greatest achievement came in 2011 when he was 7th in an episode of Winter Wipeout on BBC1.
Designing Computer Systems for Software 2.0
The use of machine learning to generate models from data is replacing traditional software development for many applications. This fundamental shift in how we develop software, known as Software 2.0, has provided dramatic improvements in the quality and ease of deployment for these applications. The continued success and expansion of the Software 2.0 approach must be powered by the availability of powerful, efficient and flexible computer systems that are tailored for machine learning applications. This talk will describe a design approach that optimizes computer systems to match the requirements of machine learning applications. The full-stack design approach integrates machine learning algorithms that are optimized for the characteristics of applications and the strengths of modern hardware, domain-specific languages and advanced compilation technology designed for programmability and performance, and hardware architectures that achieve both high flexibility and high energy efficiency.
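A minimal sketch of the Software 1.0 versus Software 2.0 contrast described above (the feature names, thresholds, and data are invented): the hand-written rule's behaviour is fixed in code, while its learned counterpart's behaviour comes from training data and learned weights:

```python
import numpy as np

def is_spam_v1(num_links, num_exclaims):
    # Software 1.0: the decision boundary is written by hand.
    return num_links > 3 or num_exclaims > 5

# Software 2.0: the same kind of decision is learned by a small logistic
# regression trained with gradient descent on (invented) labelled examples.
rng = np.random.default_rng(0)
X = rng.integers(0, 10, size=(200, 2)).astype(float)   # features
y = ((X[:, 0] > 3) | (X[:, 1] > 5)).astype(float)      # "ground truth" labels

w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))                   # predicted probabilities
    grad_w, grad_b = X.T @ (p - y) / len(y), np.mean(p - y)  # logistic-loss gradients
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

def is_spam_v2(num_links, num_exclaims):
    # Behaviour determined by the learned weights, not hand-written rules.
    return 1.0 / (1.0 + np.exp(-(np.dot(w, [num_links, num_exclaims]) + b))) > 0.5

print(is_spam_v1(8, 8), is_spam_v2(8, 8))   # both should flag this clearly spammy example
```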
Speaker
Kunle Olukotun
Kunle Olukotun is the Cadence Design Professor of Electrical Engineering and Computer Science at Stanford University. Olukotun is well known as a pioneer in multicore processor design and the leader of the Stanford Hydra chip multiprocessor (CMP) research project. Olukotun founded Afara Websystems to develop high-throughput, low-power multicore processors for server systems. The Afara multicore processor, called Niagara, was acquired by Sun Microsystems. Niagara-derived processors now power all Oracle SPARC-based servers. Olukotun currently directs the Stanford Pervasive Parallelism Lab (PPL), which seeks to proliferate the use of heterogeneous parallelism in all application areas using Domain Specific Languages (DSLs). Olukotun is a member of the Data Analytics for What’s Next (DAWN) Lab, which is developing infrastructure for usable machine learning. Olukotun is an ACM Fellow and IEEE Fellow for contributions to multiprocessors on a chip and multi-threaded processor design, and is the recipient of the 2018 IEEE Harry H. Goode Memorial Award. Olukotun received his Ph.D. in Computer Engineering from the University of Michigan.