Implementing Responsible AI
Workshop: Joint Workshop on AI for Social Good
Abstract
This panel will discuss practical solutions for encouraging and implementing responsible AI. There will be time for audience Q&A.
Audience members are invited to submit questions at https://app.sli.do/event/kfdhmkbd/live/questions
Facilitator: Brian Patrick Green is Director of Technology Ethics at the Markkula Center for Applied Ethics at Santa Clara University. His interests include AI and ethics, the ethics of space exploration and use, the ethics of technological manipulation of humans, the ethics of catastrophic risk, and the intersection of human society and technology, including religion and technology. Green teaches AI ethics in the Graduate School of Engineering and is co-author of Ethics in Technology Practice, a set of corporate technology ethics resources.
Speaker bios: Wendell Wallach is an internationally recognized expert on the ethical and governance concerns posed by emerging technologies, particularly artificial intelligence and neuroscience. He is a consultant, an ethicist, and a scholar at Yale University’s Interdisciplinary Center for Bioethics, where he chairs the working research group on technology and ethics. He is co-author (with Colin Allen) of Moral Machines: Teaching Robots Right from Wrong, which maps the new field variously called machine ethics, machine morality, computational morality, and friendly AI. His latest book is A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control. Wallach is the principal investigator of a Hastings Center project on control and responsible innovation in the development of autonomous machines.
Patrick Lin is the director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University, San Luis Obispo, where he is also a philosophy professor. He has published several books and papers in the field of technology ethics, especially on robotics, including Robot Ethics (MIT Press, 2012) and Robot Ethics 2.0 (Oxford University Press, 2017), as well as on human enhancement, cyberwarfare, space exploration, nanotechnology, and other areas.
Nenad Tomasev: My research interests lie at the intersection of theory and impactful real-world AI applications, with a particular focus on AI in healthcare, which I have been pursuing at DeepMind since early 2016. In our most recent work, published in Nature in July 2019, we demonstrated how deep learning can be used for accurate early prediction of patient deterioration from electronic health records, with alerting that opens possibilities for timely interventions and preventative care. Prior to moving to London, I was involved with other applied projects at Google, such as Email Intelligence and the Chrome Data team. I obtained my PhD in 2013 from the Artificial Intelligence Laboratory at JSI in Slovenia, where I worked on better understanding the consequences of the curse of dimensionality in instance-based learning.
Jingying Yang is a Program Lead on the Research team at the Partnership on AI, where she leads a portfolio of collaborative multistakeholder projects on the topics of safety, fairness, transparency, and accountability, including the ABOUT ML project to set new industry norms on ML documentation. Previously, she worked in Product Operations at Lyft, for the state of Massachusetts on health care policy, and in management consulting at Bain & Company.
Libby Kinsey is lead technologist for AI at Digital Catapult, the UK's advanced digital technology innovation centre, where she works with a multi-disciplinary team to support organisations in building their AI capabilities responsibly. She spent her early career in technology venture capital before returning to university to study machine learning in 2014.