Joint Workshop on AI for Social Good
Fei Fang · Joseph Bullock · Marc-Antoine Dilhac · Brian Green · natalie saltiel · Dhaval Adjodah · Jack Clark · Sean McGregor · Margaux Luck · Jonathan Penn · Tristan Sylvain · Geneviève Boucher · Sydney Swaine-Simon · Girmaw Abebe Tadesse · Myriam Côté · Anna Bethke · Yoshua Bengio

Sat Dec 14th 08:00 AM -- 06:00 PM @ East Meeting Rooms 11 + 12

The accelerating pace of intelligent systems research and real-world deployment presents three clear challenges for producing "good" intelligent systems: (1) the research community lacks incentives and venues for results centered on social impact, (2) deployed systems often produce unintended negative consequences, and (3) there is little consensus on public policy that maximizes "good" social impacts while minimizing the likelihood of harm. As a result, researchers often find themselves without a clear path to positive real-world impact.

The Workshop on AI for Social Good addresses these challenges by bringing together machine learning researchers, social impact leaders, ethicists, and public policy leaders to present their ideas and applications for maximizing the social good. This workshop is a collaboration of three formerly separate lines of research (hence a "joint" workshop), bringing together researchers in applications-driven AI research, applied ethics, and AI policy. These research areas are unified into a three-track framework promoting the exchange of ideas between the practitioners of each track.

We hope that this gathering of research talent will inspire the creation of new approaches and tools, provide for the development of intelligent systems benefiting all stakeholders, and converge on public policy mechanisms for encouraging these goals.

08:00 AM Opening remarks (Opening remarks) Yoshua Bengio
08:05 AM Computational Sustainability: Computing for a Better World and a Sustainable Future (Invited Talk Track 1) Carla Gomes
08:25 AM Translating AI Research into operational impact to achieve the Sustainable Development Goals (Invited Talk Track 1) Miguel Luengo-Oroz
08:45 AM Sacred Waveforms: An Indigenous Perspective on the Ethics of Collecting and Usage of Spiritual Data for Machine Learning (Invited Talk Track 3) Michael Running Wolf, Caroline Running Wolf
09:15 AM Balancing Competing Objectives for Welfare-Aware Machine Learning with Imperfect Data (Contributed Talk Track 1) Esther Rolf
09:20 AM Dilated LSTM with ranked units for Classification of suicide note (Contributed Talk Track 1) Annika Schoene
09:25 AM Speech in Pixels: Automatic Detection of Offensive Memes for Moderation (Contributed Talk Track 1) Xavi Giro-i-Nieto
09:30 AM Towards better healthcare: What could and should be automated? (Contributed Talk Track 1) Wolfgang Fruehwirt
09:35 AM All Tracks Poster Session (Poster Session)
09:45 AM Break / All Tracks Poster Session (Poster Session)
10:30 AM Towards a Social Good? Theories of Change in AI (Panel Discussion Track 3) natalie saltiel, Rashida Richardson, Sarah T. Hamid
11:30 AM Hard Choices in AI Safety (Contributed Talk Track 2) Roel Dobbe, Thomas Gilbert, Yonatan Mintz
11:35 AM The Effects of Competition and Regulation on Error Inequality in Data-driven Markets (Contributed Talk Track 3) Hadi Elzayn
11:40 AM Learning Fair Classifiers in Online Stochastic Setting (Contributed Talk Track 3) Yi Sun
11:45 AM Fraud detection in telephone conversations for financial services using linguistic features (Contributed Talk Track 2)
11:50 AM A Typology of AI Ethics Tools, Methods and Research to Translate Principles into Practices (Contributed Talk Track 2) Libby Kinsey
11:55 AM AI Ethics for Systemic Issues: A Structural Approach (Contributed Talk Track 3) Agnes Schim van der Loeff
12:00 PM Lunch - on your own
02:00 PM ML system documentation for transparency and responsible AI development - a process and an artifact (Invited Talk Track 2) Jingying Yang
02:05 PM Beyond Principles and Policy Proposals: A framework for the agile governance of AI (Invited Talk Track 2) Wendell Wallach
02:30 PM Untangling AI Ethics: Working Toward a Root Issue (Invited Talk Track 2) Patrick Lin
02:45 PM AI in Healthcare: Working Towards Positive Clinical Impact (Invited Talk Track 2) Nenad Tomasev
03:00 PM Implementing Responsible AI (Panel Discussion Track 2) Brian Green, Wendell Wallach, Patrick Lin, Nenad Tomasev, Jingying Yang, Libby Kinsey
03:30 PM Break / All Tracks Poster Session (Poster Session)
04:15 PM "Good" isn't good enough (Contributed Talk Track 3) Ben Green
04:20 PM Automated Quality Control for a Weather Sensor Network (Invited Talk Track 1) Tom G Dietterich
04:50 PM AI and Sustainable Development (Panel Discussion Track 1) Fei Fang, Carla Gomes, Miguel Luengo-Oroz, Tom G Dietterich, Julien Cornebise
05:50 PM Open announcement and Best Papers/Posters Award

Author Information

Fei Fang (Carnegie Mellon University)
Joseph Bullock (UN Global Pulse and Durham University)

Joseph is an Artificial Intelligence Research Fellow at UN Global Pulse, an innovation initiative of the Executive Office of the UN Secretary-General that harnesses emerging technologies for humanitarian good. His work at UN Global Pulse includes research in both computer vision and natural language processing (NLP), with specific projects such as: the development of an AI-based satellite image analysis tool for refugee camp mapping, damage detection, and flood mapping; NLP applications to multilingual social media analysis and topic exploration; and an assessment of the political and social risks of automated text generation. Joseph is also an Industrial Research Associate at the RiskEcon Lab, part of the Courant Institute of Mathematical Sciences at New York University, and a Doctoral Researcher in Data Intensive Science at Durham University. He has worked with companies in the medical, utility, and insurance industries and has spoken widely on the use of AI in the humanitarian and health sectors, as well as on applied ethics in data analytics.

Marc-Antoine Dilhac (Université de Montréal)
Brian Green (Santa Clara University)
natalie saltiel (MIT)
Dhaval Adjodah (MIT)

Dhaval is a fourth-year PhD candidate at the MIT Media Lab doing research in AI and finance. He previously worked in finance after completing his master's in the MIT Technology Policy Program and his undergraduate degree in physics, also at MIT. He is interested in understanding how to optimally organize networks of human and AI agents, and how large groups of people can sense new information and take action collectively. His work is relevant to improving deep reinforcement learning algorithms, improving financial trading, rewiring collaboration networks, crowd-sourcing, voting, and innovation. He grew up in Mauritius, and greatly enjoys cooking and running.

PhD thesis abstract: I first investigate the cognitive limits humans suffer from - and the inductive biases we employ to overcome these limits - and how to build systems to improve our learning and decision-making in the face of these biases and limits. Second, I show how some of these inductive biases can be applied to make modern machine learning more interpretable, robust, and sample-efficient. Humans have been shown to employ various inductive biases in how they learn: we show preference for certain approaches to improve sample complexity and minimize communication cost. Previous work has documented in detail when such inductive biases fail. However, when learning is distributed or when data is sparse, such inductive biases can lead to improved learning. In a first study, I observe that humans in a large online experiment make strong assumptions about the distributional properties of the data, and that they prefer to learn from other people's compressed estimates rather than from actual data. I then create a novel metric - a measure of social learning - to identify individuals who improve the accuracy of the group. In another domain, where hundreds of thousands of traders have to decide which other traders to learn strategies from to make their trades, I observe strong Dunbar cognitive limits. Specifically, I observe a tradeoff between frequency of data updates and number of people to follow, and test recommender bots that improve trader performance without increasing cognitive load.

Although inductive biases can lead to sub-optimal behavior, there is an increasing movement to imbue more inductive biases into modern machine learning to make it more sample-efficient, robust, and interpretable. I show, for example, that the inductive bias of humans to use certain network topologies to perform decentralized optimization can be applied to modern deep learning: I observe strong improvements in large-scale deep reinforcement learning by forcing gradient updates to be communicated over synthetic optimized random graphs. In another project, I create a novel 'relational unit' and demonstrate that it strongly outperforms other state-of-the-art reinforcement learning approaches (MLP, multi-head attention, partitioned DQN) due to the relational inductive bias imbued in our model. I also observe that the relations learned are clearly interpretable. Such work not only provides tangible contributions to making human learning more accurate in realistic settings and helping machine learning be more sample-efficient on harder problems, but also suggests how to make human-AI collaboration more effective.

Jack Clark (OpenAI)
Sean McGregor (Syntiant and XPRIZE)

Sean defended his PhD at Oregon State University in 2017. His research focuses on solving real-world problems with machine learning and visual analytics, including problems in wildfire suppression, heliophysics, and analog neural network computation. Outside his research, Sean serves as technical lead for the IBM Watson AI XPRIZE and as representative to the Partnership on AI for the Safety Critical AI and Fair, Transparent, and Accountable (FTA) working groups. Sean's "day job" is developing neural networks to run on analog architectures at Syntiant.

Margaux Luck (Mila)
Jonnie Penn (University of Cambridge)

Author, technologist, and historian interested in the societal implications of AI over time. PhD candidate in the Department of History and Philosophy of Science at the University of Cambridge, studying the history of AI in the twentieth century. Currently a visiting scholar at MIT. Previously a Google Technology Policy Fellow and an Assembly Fellow at the MIT Media Lab/Berkman Klein Center. Holds degrees from the University of Cambridge and McGill University.

Tristan Sylvain (Mila)
Geneviève Boucher (IRIC)
Sydney Swaine-Simon (District 3)
Girmaw Abebe Tadesse (University of Oxford)
Myriam Côté (Mila)
Anna Bethke (Intel)
Yoshua Bengio (Mila)

Yoshua Bengio is Full Professor in the Department of Computer Science and Operations Research at the Université de Montréal, scientific director and founder of Mila and of IVADO, recipient of the 2018 Turing Award, Canada Research Chair in Statistical Learning Algorithms, and a Canada CIFAR AI Chair. A pioneer of deep learning, in 2018 he received the most citations per day of any computer scientist worldwide. He is an Officer of the Order of Canada and a member of the Royal Society of Canada, has been awarded the Killam Prize, the Marie-Victorin Prize, and the Radio-Canada Scientist of the Year award in 2017, and is a member of the NeurIPS advisory board, co-founder of the ICLR conference, and program director of the CIFAR program on Learning in Machines and Brains. His goal is to contribute to uncovering the principles that give rise to intelligence through learning, and to foster the development of AI for the benefit of all.
