Workshop on Ethical, Social and Governance Issues in AI
Chloe Bakalar · Sarah Bird · Tiberio Caetano · Edward W Felten · Dario Garcia · Isabel Kloumann · Finnian Lattimore · Sendhil Mullainathan · D. Sculley

Fri Dec 07 05:00 AM -- 03:30 PM (PST) @ Room 516 AB
Event URL: https://sites.google.com/view/aiethicsworkshop

Abstract

Ethics is the philosophy of human conduct: It addresses the question "how should we act?" Throughout most of history, the repertoire of actions available to us was limited, and their consequences were constrained in scope and impact by dispersed power structures and slow trade. Today, in our globalised and networked world, a decision can affect billions of people instantaneously and have tremendously complex repercussions. Machine learning algorithms are replacing humans in making many of the decisions that affect our everyday lives. How can we decide how machine learning algorithms and their designers should act? What is the ethics of today and what will it be in the future?

In this one day workshop we will explore the interaction of AI, society, and ethics through three general themes.

Advancing and Connecting Theory: How do different fairness metrics relate to one another? What are the trade-offs between them? How do fairness, accountability, transparency, interpretability and causality relate to ethical decision making? What principles can we use to guide us in selecting fairness metrics within a given context? Can we connect these principles back to ethics in philosophy? Are these principles still relevant today?

Tools and Applications: Real-world examples of how ethical considerations are affecting the design of ML systems and pipelines. Applications of algorithmic fairness, transparency or interpretability to produce better outcomes. Tools that aid identifying and or alleviating issues such as bias, discrimination, filter bubbles, feedback loops etc. and enable actionable exploration of the resulting trade-offs.

Regulation: With the GDPR coming into force in May 2018, now is the perfect time to examine how regulation can help (or hinder) our efforts to deploy AI for the benefit of society. How are companies and organisations responding to the GDPR? What aspects are working and what are the challenges? How can regulatory or legal frameworks be designed to continue to encourage innovation, so society as a whole can benefit from AI, whilst still providing protection against its harms?

This workshop focuses on some of the larger ethical issues related to AI and can be seen as a complement to the FATML proposal, which is focused more on fairness, transparency and accountability. We would be happy to link or cluster the workshops together, but we and the FATML organizers believe there is more than two days' worth of material for the community to discuss in the area of AI and ethics, so it would be great to have both workshops if possible.

Fri 5:20 a.m. - 5:30 a.m. [iCal]
Welcome and organisers' comments (Introduction)
Chloé Bakalar, Finnian Lattimore, Sarah Bird, Sendhil Mullainathan
Fri 5:30 a.m. - 6:00 a.m. [iCal]

Recent discussion in the public sphere about classification by algorithms has involved tension between competing notions of what it means for such a classification to be fair to different groups. We consider several of the key fairness conditions that lie at the heart of these debates. In particular, we study how these properties operate when the goal is to rank-order a set of applicants by some criterion of interest, and then to select the top-ranking applicants. Among other results, we show that imposing a constraint to favor "simple" rules -- for example, to promote interpretability -- can have consequences for the equity of the ranking toward disadvantaged groups.

Jon Kleinberg
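
A minimal, purely illustrative numpy sketch (not from the talk; all data and the choice of rules are invented here) of the phenomenon described above: restricting the ranking rule to a single "simple" feature can change the share of a disadvantaged group among the top-ranked applicants.

```python
# Illustrative sketch only (synthetic data): compare the share of a disadvantaged
# group in the top-k when ranking by a full score versus a "simple" one-feature rule.
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 100
group = rng.binomial(1, 0.3, size=n)              # 1 = disadvantaged group
x1 = rng.normal(0, 1, size=n)                     # feature independent of group
x2 = rng.normal(-0.5 * group, 1, size=n)          # feature correlated with group

def top_k_share(scores, k):
    """Fraction of the disadvantaged group among the k highest-scoring applicants."""
    selected = np.argsort(scores)[-k:]
    return group[selected].mean()

print("full rule   :", top_k_share(0.5 * x1 + 0.5 * x2, k))
print("simple rule :", top_k_share(x2, k))        # interpretable, single-feature rule
```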
Fri 6:00 a.m. - 6:30 a.m. [iCal]

In machine learning, a tradeoff must often be made between accuracy and intelligibility. This tradeoff sometimes limits the accuracy of models that can be safely deployed in mission-critical applications such as healthcare and criminal justice, where being able to understand, validate, edit, and ultimately trust a learned model is important. In this talk I’ll present a case study where intelligibility is critical to uncover surprising patterns in the data that would have made deploying a black-box model dangerous. I’ll also show how distillation with intelligible models can be used to detect bias inside black-box models.

Rich Caruana
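
The following is a hedged sketch of the general distillation idea on synthetic data; it is not the speaker's pipeline (which uses intelligible GA2M-style models), and the data and feature names are invented. An opaque model is trained, a small readable student is fit to the opaque model's predictions, and splits on a sensitive feature in the student flag possible bias in the black box.

```python
# Hedged sketch of model distillation for bias auditing (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 5000
sensitive = rng.binomial(1, 0.5, size=n)               # e.g. a protected attribute
other = rng.normal(0, 1, size=(n, 3))
X = np.column_stack([sensitive, other])
# The outcome leaks the sensitive attribute, so the black box may learn to use it.
y = (other[:, 0] + 0.8 * sensitive + rng.normal(0, 0.5, size=n) > 0.5).astype(int)

black_box = GradientBoostingClassifier().fit(X, y)     # stands in for any opaque model

# Distill: fit a small, readable student on the black box's *predictions*.
student = DecisionTreeClassifier(max_depth=3)
student.fit(X, black_box.predict(X))

# Inspect the student: splits on the sensitive attribute are a red flag.
print(export_text(student, feature_names=["sensitive", "f1", "f2", "f3"]))
```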
Fri 6:30 a.m. - 7:00 a.m. [iCal]

Recently, a number of technical solutions have been proposed for tackling algorithmic unfairness and discrimination. I will talk about some of the connections between these proposals and the long-established economic theories of fairness and distributive justice. In particular, I will overview the axiomatic characterization of measures of (income) inequality, and present them as a unifying framework for quantifying individual- and group-level unfairness; I will propose the use of cardinal social welfare functions as an effective method for bounding individual-level inequality; and last but not least, I will cast existing notions of algorithmic (un)fairness as special cases of economic models of equality of opportunity. Through this lens, I hope to offer a better understanding of the moral assumptions underlying technical definitions of fairness.
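
As one concrete illustration of the axiomatic inequality measures referenced above, the sketch below computes a generalized entropy index over per-individual "benefits". This is a toy under assumed choices (the index, the benefit definition, the synthetic labels), not necessarily the speaker's exact formulation.

```python
# Hedged sketch: the generalized entropy index is one axiomatically characterized
# inequality measure that can score a vector of individual-level "benefits".
import numpy as np

def generalized_entropy(benefits, alpha=2):
    """Generalized entropy index GE(alpha), alpha not in {0, 1}, of non-negative benefits."""
    b = np.asarray(benefits, dtype=float)
    mu = b.mean()
    return np.mean((b / mu) ** alpha - 1) / (alpha * (alpha - 1))

# Toy example: benefit = 1 + prediction - label, a common choice in this literature.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])
benefit = 1 + y_pred - y_true      # 0 = harmed, 1 = treated correctly, 2 = favoured
print("overall inequality:", generalized_entropy(benefit))
```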

Fri 7:00 a.m. - 7:20 a.m. [iCal]
Poster Spotlights 1 (Spotlight talks)
Fri 7:20 a.m. - 8:30 a.m. [iCal]
Posters 1 (Poster Session)
Wei Wei, Flavio Calmon, Travis Dick, Leilani Gilpin, Maroussia Lévesque, Malek Ben Salem, Michael Wang, Jack Fitzsimons, Dimitri Semenovich, Linda Gu, Nathaniel Fruchter
Fri 8:30 a.m. - 8:50 a.m. [iCal]

We introduce the BriarPatch, a pixel-space intervention that obscures sensitive attributes from representations encoded in pre-trained classifiers. The patches encourage internal model representations not to encode sensitive information, which has the effect of pushing downstream predictors towards exhibiting demographic parity with respect to the sensitive information. The net result is that BriarPatches provide an intervention mechanism available at the user level, complementing prior research on fair representations that was previously usable only by model developers and ML experts.
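
A hedged sketch of the general mechanism (not the authors' implementation): a small additive pixel patch is optimized so that a sensitive-attribute probe on top of a frozen encoder becomes uninformative. The encoder and probe below are untrained stand-ins on random data; in practice the probe would first be trained to predict the sensitive attribute.

```python
# Hedged, illustrative sketch of a pixel-space "patch" intervention (PyTorch).
import torch
import torch.nn as nn

torch.manual_seed(0)
encoder = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
                        nn.Flatten(), nn.Linear(8 * 15 * 15, 16))
probe = nn.Linear(16, 2)                               # predicts the sensitive attribute
for p in list(encoder.parameters()) + list(probe.parameters()):
    p.requires_grad_(False)                            # both models stay frozen

patch = torch.zeros(1, 3, 32, 32, requires_grad=True)  # the user-level intervention
opt = torch.optim.Adam([patch], lr=0.05)

images = torch.rand(64, 3, 32, 32)                     # stand-in image batch

for step in range(100):
    logits = probe(encoder((images + patch).clamp(0, 1)))
    log_probs = nn.functional.log_softmax(logits, dim=1)
    # Minimising cross-entropy against the uniform distribution pushes the probe
    # toward an uninformative prediction of the sensitive attribute.
    loss = -log_probs.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final probe-confusion loss:", float(loss))
```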

Fri 8:50 a.m. - 9:10 a.m. [iCal]

The concept of individual fairness advocates similar treatment of similar individuals to ensure equality in treatment (Dwork et al., 2012). In this paper, we extend this notion to account for the time at which a decision is made, in settings where there exists a notion of "conduciveness" of decisions as perceived by individuals. We introduce two definitions: (i) fairness-across-time and (ii) fairness-in-hindsight. In the former, treatments of individuals are required to be individually fair relative to the past as well as future, while in the latter we only require individual fairness relative to the past. We show that these two definitions can have drastically different implications in the setting where the principal needs to learn the utility model: one can achieve a vanishing asymptotic loss in long-run average utility relative to the full-information optimum under the fairness-in-hindsight constraint, whereas this asymptotic loss can be bounded away from zero under the fairness-across-time constraint.
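
An illustrative sketch (not the paper's algorithm; the Lipschitz constant, distances, and treatments are invented) of how fairness-in-hindsight can be enforced online: each intended treatment is projected into the interval allowed by all past decisions under a similarity constraint |treatment_t - treatment_s| <= L * d(x_t, x_s). Fairness-across-time would additionally bind each decision against future ones.

```python
# Hedged sketch: an online policy that enforces fairness-in-hindsight by clipping.
import numpy as np

def hindsight_fair_decisions(features, intended, L=1.0):
    """Clip each intended treatment so it stays individually fair w.r.t. the past."""
    decided = []
    for t, want in enumerate(intended):
        lo, hi = -np.inf, np.inf
        for s, prev in enumerate(decided):             # past decisions only
            d = np.linalg.norm(features[t] - features[s])
            lo, hi = max(lo, prev - L * d), min(hi, prev + L * d)
        decided.append(float(np.clip(want, lo, hi)))
    return decided

features = np.array([[0.0], [0.1], [0.9], [0.15]])
intended = [0.2, 0.3, 0.9, 0.8]                        # what the learner would like to do
print(hindsight_fair_decisions(features, intended))    # the last decision gets clipped
```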

Fri 9:10 a.m. - 9:30 a.m. [iCal]

There is a disconnect between explanatory artificial intelligence (XAI) methods for deep neural networks and the types of explanations that are useful for and demanded by society (policy makers, government officials, etc.). Questions that experts in artificial intelligence (AI) ask of opaque systems provide inside explanations, focused on debugging, reliability, and validation. These are different from those that society will ask of these systems to build trust and confidence in their decisions. Although explanatory AI systems can answer many questions that experts desire, they often don’t explain why they made decisions in a way that is precise (true to the model) and understandable to humans. These outside explanations can be used to build trust, comply with regulatory and policy changes, and act as external validation. In this paper, we explore the types of questions that explanatory deep neural network (DNN) systems can answer and discuss challenges inherent in building explanatory systems that provide outside explanations of systems for societal requirements and benefit.

Fri 9:30 a.m. - 11:00 a.m. [iCal]
Lunch
Fri 11:00 a.m. - 11:30 a.m. [iCal]

The potential for machine learning systems to amplify social inequities and unfairness is receiving increasing popular and academic attention. A surge of recent research has focused on the development of algorithmic tools to detect and mitigate such unfairness. However, if these tools are to have a positive impact on industry practice, it is crucial that their design be informed by an understanding of industry teams’ actual needs. Through semi-structured interviews with 35 machine learning practitioners, spanning 19 teams and 10 companies, and an anonymous survey of 267 practitioners, we conducted the first systematic investigation of industry teams' challenges and needs for support in developing fairer machine learning systems. I will describe this work and summarize areas of alignment and disconnect between the challenges faced by industry practitioners and solutions proposed in the academic literature. Based on these findings, I will highlight directions for future research that will better address practitioners' needs.

Hanna Wallach
Fri 11:30 a.m. - 12:00 p.m. [iCal]

Addressing a rapidly growing public awareness about bias and fairness issues in algorithmic decision-making systems (ADS), the tech industry is now championing a set of tools to assess and mitigate these issues. Such tools, broadly categorized as algorithmic fairness definitions, metrics and mitigation strategies, find their roots in recent research from the community on Fairness, Accountability and Transparency in Machine Learning (FAT/ML), which started convening in 2014 at popular machine learning conferences, and has since been succeeded by a broader conference on Fairness, Accountability and Transparency in Sociotechnical Systems (FAT*). Whereas there is value in this research to assist diagnosis and informed debate about the inherent trade-offs and ethical choices that come with data-driven approaches to policy and decision-making, marketing poorly validated tools as quick-fix strategies to eliminate bias is problematic and threatens to deepen an already growing sense of distrust among companies and institutions procuring data analysis software and enterprise platforms. This trend is coinciding with efforts by the IEEE and others to develop certification and marking processes that "advance transparency, accountability and reduction in algorithmic bias in Autonomous and Intelligent Systems". These efforts combined suggest a checkbox recipe for improving accountability and resolving the many ethical issues that have surfaced in the rapid deployment of ADS. In this talk, we nuance this timely debate by pointing at the inherent technical limitations of fairness metrics as a go-to tool for fixing bias. We discuss earlier attempts at certification to clarify pitfalls. We refer to developments in governments adopting ADS and how a lack of accountability and existing power structures are leading to new forms of harm that question the very efficacy of ADS. We end by discussing productive uses of diagnostic tools and the concept of Algorithmic Impact Assessment as a new framework for identifying the value, limitations and challenges of integrating algorithms in real world contexts.

Fri 12:00 p.m. - 12:20 p.m. [iCal]
Poster Spotlights 2 (Spotlight talks)
Fri 12:20 p.m. - 1:30 p.m. [iCal]
Posters 2 (Poster session)
Fri 1:30 p.m. - 2:00 p.m. [iCal]

Societies often rely on human experts to take a wide variety of decisions affecting their members, from jail-or-release decisions taken by judges and stop-and-frisk decisions taken by police officers to accept-or-reject decisions taken by academics. In this context, each decision is taken by an expert who is typically chosen uniformly at random from a pool of experts. However, these decisions may be imperfect due to limited experience, implicit biases, or faulty probabilistic reasoning. Can we improve the accuracy and fairness of the overall decision making process by optimizing the assignment between experts and decisions?

In this talk, we address the above problem from the perspective of sequential decision making and show that, for different fairness notions from the literature, it reduces to a sequence of (constrained) weighted bipartite matchings, which can be solved efficiently using algorithms with approximation guarantees. Moreover, these algorithms also benefit from posterior sampling to actively trade off exploitation---selecting expert assignments which lead to accurate and fair decisions---and exploration---selecting expert assignments to learn about the experts' preferences and biases. We demonstrate the effectiveness of our algorithms on both synthetic and real-world data and show that they can significantly improve both the accuracy and fairness of the decisions taken by pools of experts.

Manuel Rodriguez
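
A hedged sketch of one unconstrained matching round in this setting, using scipy's linear-sum-assignment solver on assumed expected-error estimates. The talk's algorithms instead maintain posteriors over expert accuracy, use posterior sampling to trade off exploration and exploitation, and can add fairness constraints to the matching.

```python
# Hedged sketch: assign experts to decisions via weighted bipartite matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_experts, n_cases = 4, 4
# expected_error[i, j]: assumed probability that expert i decides case j incorrectly
expected_error = rng.uniform(0.05, 0.4, size=(n_experts, n_cases))

experts, cases = linear_sum_assignment(expected_error)   # minimises total expected error
for i, j in zip(experts, cases):
    print(f"expert {i} -> case {j} (expected error {expected_error[i, j]:.2f})")

# Baseline from the abstract: each case handled by an expert chosen uniformly at random.
random_experts = rng.integers(0, n_experts, n_cases)
print("matched total :", expected_error[experts, cases].sum())
print("random total  :", expected_error[random_experts, np.arange(n_cases)].sum())
```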
Fri 2:00 p.m. - 2:45 p.m. [iCal]
Discussion Panel

Author Information

Chloe Bakalar (Princeton University)
Sarah Bird (Facebook AI Research)

Sarah leads research and emerging technology strategy for Azure AI. Sarah works to accelerate the adoption and impact of AI by bringing together the latest research innovations with the best of open source and product expertise to create new tools and technologies. Sarah is currently leading the development of responsible AI tools in Azure Machine Learning. She is also an active member of the Microsoft AETHER committee, where she works to develop and drive company-wide adoption of responsible AI principles, best practices, and technologies. Sarah was one of the founding researchers in the Microsoft FATE research group and, prior to joining Microsoft, worked on AI fairness at Facebook. Sarah is an active contributor to the open source ecosystem: she co-founded ONNX, an open source standard for machine learning models, and was a leader in the PyTorch 1.0 project. She was an early member of the machine learning systems research community and has been active in growing and shaping the community. She co-founded the SysML research conference and the Learning Systems workshops. She has a Ph.D. in computer science from UC Berkeley, advised by Dave Patterson, Krste Asanovic, and Burton Smith.

Tiberio Caetano (Gradient Institute)
Edward W Felten (Princeton University)

Edward W. Felten is the Robert E. Kahn Professor of Computer Science and Public Affairs at Princeton University, and the founding Director of Princeton's Center for Information Technology Policy. He is a member of the United States Privacy and Civil Liberties Oversight Board. In 2015-2017 he served in the White House as Deputy U.S. Chief Technology Officer. In 2011-12 he served as the first Chief Technologist at the U.S. Federal Trade Commission. His research interests include computer security and privacy, and technology law and policy. He has published more than 150 papers in the research literature, and three books. He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences, and is a Fellow of the ACM.

Dario Garcia (Facebook)
Isabel Kloumann (Facebook)
Finnian Lattimore (The Gradient Institute)
Sendhil Mullainathan (University of Chicago)
D. Sculley (Google Research)
