Workshop
Joint Workshop on AI for Social Good
Fei Fang · Joseph Aylett-Bullock · Marc-Antoine Dilhac · Brian Green · Natalie Saltiel · Dhaval Adjodah · Jack Clark · Sean McGregor · Margaux Luck · Jonathan Penn · Tristan Sylvain · Geneviève Boucher · Sydney Swaine-Simon · Girmaw Abebe Tadesse · Myriam Côté · Anna Bethke · Yoshua Bengio

Sat Dec 14 08:00 AM -- 06:00 PM (PST) @ East Meeting Rooms 11 + 12
Event URL: https://aiforsocialgood.github.io/neurips2019/

The accelerating pace of intelligent systems research and real-world deployment presents three clear challenges for producing "good" intelligent systems: (1) the research community lacks incentives and venues for results centered on social impact, (2) deployed systems often produce unintended negative consequences, and (3) there is little consensus for public policy that maximizes "good" social impacts while minimizing the likelihood of harm. As a result, researchers often find themselves without a clear path to positive real-world impact.

The Workshop on AI for Social Good addresses these challenges by bringing together machine learning researchers, social impact leaders, ethicists, and public policy leaders to present their ideas and applications for maximizing the social good. This workshop is a collaboration of three formerly separate lines of research (hence a "joint" workshop): applications-driven AI research, applied ethics, and AI policy. These research areas are unified into a three-track framework promoting the exchange of ideas between the practitioners of each track.

We hope that this gathering of research talent will inspire new approaches and tools, foster the development of intelligent systems that benefit all stakeholders, and help converge on public policy mechanisms for encouraging these goals.

Author Information

Fei Fang (Carnegie Mellon University)
Joseph Aylett-Bullock (UN Global Pulse and Durham University)

Joseph is a Research Associate at UN Global Pulse, an innovation initiative of the Executive Office of the UN Secretary-General to harness emerging technologies for humanitarian good. His research focuses on mathematical modelling and machine learning for crisis response and humanitarian development. His work has included the development of an AI-based satellite image analysis tool for refugee camp mapping, damage detection, and flood mapping; NLP applications to multilingual social media analysis and topic exploration; and an assessment of the political and social risks of automated text generation. Most recently, Joseph has been leading a team of academics and UN experts in modelling epidemic spread in refugee and IDP settlements. Joseph is also an Industry Research Associate at the RiskEcon Lab, part of the Courant Institute of Mathematical Sciences at New York University. He has worked with companies in the medical, utility, and insurance industries and has spoken widely on the use of AI in the humanitarian and health sectors, as well as on applied ethics in data analytics.

Marc-Antoine Dilhac (Université de Montréal)
Brian Green (Santa Clara University)
Natalie Saltiel (MIT)
Dhaval Adjodah (MIT)

Dhaval is a 4th-year PhD candidate at the MIT Media Lab doing research in AI and finance. He previously worked in finance after completing his master's in the MIT Technology Policy Program and an undergraduate degree in physics, also at MIT. He is interested in understanding how to optimally organize networks of human and AI agents, and how large groups of people can sense new information and take action collectively. His work is relevant to improving deep reinforcement learning algorithms, financial trading, rewiring collaboration networks, crowd-sourcing, voting, and innovation. He grew up in Mauritius, and greatly enjoys cooking and running.

PhD thesis abstract: I first investigate the cognitive limits humans suffer from, the inductive biases we employ to overcome them, and how to build systems to improve our learning and decision-making in the face of these biases and limits. Secondly, I show how some of these inductive biases can be applied to make modern machine learning more interpretable, robust, and sample-efficient.

Humans have been shown to employ various inductive biases in how they learn: we show preference for certain approaches that improve sample complexity and minimize communication cost. Previous work has documented in detail when such inductive biases fail; however, when learning is distributed or when data is sparse, they can lead to improved learning. In a first study, I observe that humans in a large online experiment make strong assumptions about the distributional properties of the data, and that they prefer to learn from other people's compressed estimates rather than from actual data. I then create a novel metric - a measure of social learning - to identify individuals who improve the accuracy of the group. In another domain, where hundreds of thousands of traders must decide which other traders to learn strategies from, I observe strong Dunbar cognitive limits: specifically, a tradeoff between the frequency of data updates and the number of people to follow. I test recommender bots that improve trader performance without increasing cognitive load.

Although inductive biases can lead to sub-optimal behavior, there is an increasing movement to imbue more inductive biases into modern machine learning to make it more sample-efficient, robust, and interpretable. I show, for example, that the inductive bias of humans to use certain network topologies for decentralized optimization can be applied to modern deep learning: I observe strong improvements in large-scale deep reinforcement learning by forcing gradient updates to be communicated over synthetic optimized random graphs. In another project, I create a novel 'relational unit' and demonstrate that it strongly outperforms other state-of-the-art reinforcement learning approaches (MLP, multi-head attention, partitioned DQN) due to the relational inductive bias imbued in the model; the relations learned are also clearly interpretable. Such work not only makes tangible contributions to improving the accuracy of human learning in realistic settings and the sample efficiency of machine learning on harder problems, but also suggests how to make human-AI collaboration more effective.

Jack Clark (OpenAI)
Sean McGregor (Syntiant and XPRIZE)

Sean McGregor is a machine learning PhD, founder of the Responsible AI Collaborative, lead technical consultant for the IBM Watson AI XPRIZE, and consulting researcher with the neural accelerator startup Syntiant. His current focus is the development of the AI Incident Database as an index of harms, or near-harms, experienced in the real world, which builds on his experience in AI safety and interpretability for deep and reinforcement learning as applied to wildfire suppression policy, speech, and heliophysics. Outside his paid work, Sean's open source development has earned media attention in The Atlantic, Der Spiegel, Mashable, Wired, VentureBeat, Vice, and O'Reilly, while his technical publications have appeared in a variety of machine learning, HCI, ethics, and application-centered proceedings.

Margaux Luck (Mila)
Jonathan Penn (University of Cambridge)

Author, technologist, and historian interested in the societal implications of AI over time. PhD candidate in the Department of History and Philosophy of Science at the University of Cambridge, studying the history of AI in the twentieth century. Currently a visiting scholar at MIT. Former Google Technology Policy Fellow and Assembly Fellow at the MIT Media Lab/Berkman Klein Center. Holds degrees from the University of Cambridge and McGill University.

Tristan Sylvain (Mila)
Geneviève Boucher (IRIC)
Sydney Swaine-Simon (District 3)
Girmaw Abebe Tadesse (University of Oxford)
Myriam Côté (Mila)
Anna Bethke (Intel)
Yoshua Bengio (Mila)

Yoshua Bengio is Full Professor in the Department of Computer Science and Operations Research at the Université de Montréal, scientific director and founder of Mila and of IVADO, recipient of the 2018 Turing Award, Canada Research Chair in Statistical Learning Algorithms, and a Canada CIFAR AI Chair. A pioneer of deep learning, in 2018 he had the most citations per day of any computer scientist worldwide. He is an Officer of the Order of Canada and a member of the Royal Society of Canada, and was awarded the Killam Prize, the Marie-Victorin Prize, and the Radio-Canada Scientist of the Year in 2017. He is a member of the NeurIPS advisory board, a co-founder of the ICLR conference, and program director of the CIFAR program on Learning in Machines and Brains. His goal is to contribute to uncovering the principles that give rise to intelligence through learning, and to foster the development of AI for the benefit of all.
