Joint Workshop on AI for Social Good
Fei Fang · Joseph Bullock · Marc-Antoine Dilhac · Brian Green · natalie saltiel · Dhaval Adjodah · Jack Clark · Sean McGregor · Margaux Luck · Jonathan Penn · Tristan Sylvain · Geneviève Boucher · Sydney Swaine-Simon · Girmaw Abebe Tadesse · Myriam Côté · Anna Bethke · Yoshua Bengio

Sat Dec 14th 08:00 AM -- 06:00 PM @ East Meeting Rooms 11 + 12

The accelerating pace of intelligent systems research and real-world deployment presents three clear challenges for producing "good" intelligent systems: (1) the research community lacks incentives and venues for results centered on social impact, (2) deployed systems often produce unintended negative consequences, and (3) there is little consensus on public policy that maximizes "good" social impacts while minimizing the likelihood of harm. As a result, researchers often find themselves without a clear path to positive real-world impact.

The Workshop on AI for Social Good addresses these challenges by bringing together machine learning researchers, social impact leaders, ethicists, and public policy leaders to present their ideas and applications for maximizing the social good. This workshop is a collaboration of three formerly separate lines of research (i.e., this is a "joint" workshop): applications-driven AI research, applied ethics, and AI policy. These research areas are unified in a three-track framework promoting the exchange of ideas between the practitioners of each track.

We hope that this gathering of research talent will inspire the creation of new approaches and tools, provide for the development of intelligent systems benefiting all stakeholders, and converge on public policy mechanisms for encouraging these goals.

08:00 AM Opening remarks Yoshua Bengio
08:05 AM Track 1: Producing Good Outcomes (Invited Talk and Panel) Tom G Dietterich, Carla Gomes, Miguel Luengo-Oroz, Bistra Dilkina, Julien Cornebise
10:30 AM Break
11:00 AM Track 1: Producing Good Outcomes (Poster Session)
Rediet Abebe, Jon Kleinberg, Brendan Lucier, Lana Cuthbertson, Kory Mathewson, Candice Schumann, Ed Chi, Claire Babirye, Soohyun Lim, Sunayana Rane, Kehinde A. Owoeye, Giovanni Da San Martino, Masanari Kimura, Tomasz (Tomek) M Rutkowski, Wolfgang Fruehwirt, Saeyoung Rho, Marie Charpignon, Andrew Konya, Ibrahim Ben Daya, Mark Thomas, Abdul Abdulrahim, Dan Ssendiwala, Gloria Namanya, Benjamin Akera, Achut Manandhar, Heloise Greeff, Arjun Verma, Rickard Nyman, Lachlan Kermode, Jaya Narain, Kristy Johnson, Takashi Yanagihara, Issei Sugiyama, Shanya Sharma, Manan Dey, Vikram Sarbajna, Anitha Govindaraj, Julien Cornebise, Chris Dulhanty, Jason Deglint, Jordan Bilich, Daanish Masood, Mike Varga, Carla Gomes, Tom G Dietterich, Miguel Luengo-Oroz, Bistra Dilkina, Maria Mironova, Seunghak Yu, Maya Srikanth, David Clifton, Kate Larson, Dave Levin, Nicholas J Adams-Cohen, Sarah Dean
12:00 PM Lunch - on your own
02:00 PM Track 2: From Malicious Use to Responsible AI (Poster Session) Roel Dobbe, Farah Shamout, David Clifton, Jess Whittlestone, Libby Kinsey, Anat Elhalal, Nikesh Bajaj, Julie Wall, Nenad Tomasev, Brian Green
03:00 PM Break
03:30 PM Track 2: From Malicious Use to Responsible AI (Invited Talk and Panel) Jingying Yang, Patrick Lin, Nenad Tomasev, Irina Raicu, Subbu Vincent
04:00 PM Track 3: Public Policy (Poster Session) Yi Sun, Kalyan Veeramachaneni, Ivan Ramirez Diaz, Alfredo Cuesta-Infante, Hadi Elzayn, Jevgenij Gamper, Agnes Schim van der Loeff, Ben Green

Author Information

Fei Fang (Carnegie Mellon University)
Joseph Bullock (UN Global Pulse and Durham University)

Joseph is an Artificial Intelligence Research Fellow at UN Global Pulse, an innovation initiative within the Executive Office of the UN Secretary-General that harnesses emerging technologies for humanitarian good. His work at UN Global Pulse includes research in both computer vision and natural language processing (NLP), with specific projects such as: the development of an AI-based satellite image analysis tool for refugee camp mapping, damage detection, and flood mapping; NLP applications to multilingual social media analysis and topic exploration; and an assessment of the political and social risks of automated text generation. Joseph is also an Industrial Research Associate at the RiskEcon Lab, part of the Courant Institute of Mathematical Sciences at New York University, and a Doctoral Researcher in Data Intensive Science at Durham University. He has worked with companies in the medical, utility, and insurance industries and has spoken widely on the use of AI in the humanitarian and health sectors, as well as on applied ethics in data analytics.

Marc-Antoine Dilhac (Université de Montréal)
Brian Green (Santa Clara University)
natalie saltiel (MIT)
Dhaval Adjodah (MIT)

Dhaval is a fourth-year PhD candidate at the Media Lab doing research in AI and finance. He previously worked in finance after completing his master's in the MIT Technology Policy Program and his undergraduate degree in Physics, also at MIT. He is interested in understanding how to optimally organize networks of human and AI agents, and how large groups of people can sense new information and take action collectively. His work is relevant to improving deep reinforcement learning algorithms, financial trading, rewiring collaboration networks, crowd-sourcing, voting, and innovation. He grew up in Mauritius, and greatly enjoys cooking and running.

PhD thesis abstract: I first investigate the cognitive limits humans suffer from - and the inductive biases we employ to overcome these limits - and how to build systems that improve our learning and decision-making in the face of these biases and limits. Second, I show how some of these inductive biases can be applied to make modern machine learning more interpretable, robust, and sample-efficient. Humans have been shown to employ various inductive biases in how they learn: we show preference for certain approaches to improve sample complexity and minimize communication cost. Previous work has documented in detail when such inductive biases fail. However, when learning is distributed or when data is sparse, such inductive biases can lead to improved learning. In a first study, I observe that humans in a large online experiment make strong assumptions about the distributional properties of the data, and that they prefer to learn from other people's compressed estimates rather than from actual data. I then create a novel metric - a measure of social learning - to identify individuals who improve the accuracy of the group. In another domain, where hundreds of thousands of traders must decide which other traders to learn strategies from, I observe strong Dunbar cognitive limits.

Specifically, I observe a tradeoff between the frequency of data updates and the number of people to follow, and test recommender bots that improve trader performance without increasing cognitive load. Although inductive biases can lead to sub-optimal behavior, there is an increasing movement to imbue modern machine learning with more inductive biases to make it more sample-efficient, robust, and interpretable. I show, for example, that the inductive bias of humans to use certain network topologies for decentralized optimization can be applied to modern deep learning: I observe strong improvements in large-scale deep reinforcement learning by forcing gradient updates to be communicated over synthetic optimized random graphs. In another project, I create a novel 'relational unit' and demonstrate that it strongly outperforms other state-of-the-art reinforcement learning approaches (MLP, multi-head attention, partitioned DQN) due to the relational inductive bias imbued in our model. I also observe that the relations learned are clearly interpretable. Such work not only provides tangible contributions to making human learning more accurate in realistic settings and machine learning more sample-efficient on harder problems, but also suggests how to make human-AI collaboration more effective.

Jack Clark (OpenAI)
Sean McGregor (Syntiant and XPRIZE)

Sean defended his PhD at Oregon State University in 2017. His research focuses on solving real-world problems with machine learning and visual analytics, including problems in wildfire suppression, heliophysics, and analog neural network computation. Outside his research, Sean serves as technical lead for the IBM Watson AI XPRIZE and as a representative to the Partnership on AI for the Safety Critical AI and Fair, Transparent, and Accountable (FTA) working groups. Sean's "day job" is developing neural networks to run on analog architectures at Syntiant.

Margaux Luck (Mila)
Jonnie Penn (University of Cambridge)

Author, technologist, and historian. Interested in the societal implications of AI over time. PhD candidate in the History and Philosophy of Science Department at the University of Cambridge, studying the history of AI in the twentieth century. Currently a visiting scholar at MIT. Previously a Google Technology Policy Fellow and an Assembly Fellow at the MIT Media Lab/Berkman Klein Center. Holds degrees from the University of Cambridge and McGill University.

Tristan Sylvain (Mila)
Geneviève Boucher (IRIC)
Sydney Swaine-Simon (District 3)
Girmaw Abebe Tadesse (University of Oxford)
Myriam Côté (Mila)
Anna Bethke (Intel)
Yoshua Bengio (Mila)

Yoshua Bengio is Full Professor in the computer science and operations research department at U. Montreal, scientific director and founder of Mila and of IVADO, 2018 Turing Award recipient, Canada Research Chair in Statistical Learning Algorithms, and a Canada CIFAR AI Chair. A pioneer of deep learning, in 2018 he received the most citations per day of any computer scientist worldwide. He is an Officer of the Order of Canada and a member of the Royal Society of Canada; he was awarded the Killam Prize, the Marie-Victorin Prize, and Radio-Canada Scientist of the Year in 2017. He is a member of the NeurIPS advisory board, co-founder of the ICLR conference, and program director of the CIFAR program on Learning in Machines and Brains. His goal is to contribute to uncovering the principles that give rise to intelligence through learning, and to favour the development of AI for the benefit of all.
