Workshop
Minding the Gap: Between Fairness and Ethics
Igor Rubinov · Risi Kondor · Jack Poulson · Manfred K. Warmuth · Emanuel Moss · Alexa Hagerty

Fri Dec 13 08:00 AM -- 06:00 PM (PST) @ East Meeting Rooms 8 + 15
Event URL: https://mindingthegap.github.io/

When researchers and practitioners, as well as policy makers and the public, discuss the impacts of deep learning systems, they draw upon multiple conceptual frames that do not sit easily beside each other. Questions of algorithmic fairness arise from a set of concerns that are similar, but not identical, to those that circulate around AI safety, which in turn overlap with, but are distinct from, the questions that motivate work on AI ethics, and so on. Robust bodies of research on privacy, security, transparency, accountability, interpretability, explainability, and opacity are also incorporated into each of these frames and conversations in variable ways. These frames reveal gaps that persist across both highly technical and socially embedded approaches, and yet collaboration across these gaps has proven challenging.

Fairness, Ethics, and Safety in AI each draw upon different disciplinary prerogatives, variously centering applied mathematics, analytic philosophy, behavioral sciences, legal studies, and the social sciences in ways that make conversation between these frames fraught with misunderstandings. These misunderstandings arise from a high degree of linguistic slippage between different frames, and reveal the epistemic fractures that undermine valuable synergy and productive collaboration. This workshop focuses on ways to translate between these ongoing efforts and bring them into necessary conversation in order to understand the profound impacts of algorithmic systems in society.

Fri 8:00 a.m. - 8:15 a.m.
Opening Remarks (Talk)
Jack Poulson, Manfred K. Warmuth
Fri 8:15 a.m. - 8:45 a.m.
Invited Talk (Talk)
Yoshua Bengio
Fri 8:45 a.m. - 9:45 a.m.

The stakes of AI alter how we relate to each other as humans: how we know what we know about reality, how we communicate, how we work and earn money, and how we think of ourselves as human. But in grappling with these changing relations, three fairly concrete approaches have dominated the conversation: ethics, fairness, and safety. These approaches come from very different academic backgrounds, draw attention to very different aspects of AI, and imagine very different problems and solutions as relevant, leading us to ask:
• What are the commonalities and differences between ethics, fairness, and safety as approaches to addressing the challenges of AI?
• How do these approaches imagine different problems and solutions for the challenges posed by AI?
• How can these approaches work together, or are there areas where they are mutually incompatible?

Yoshua Bengio, Roel Dobbe, Madeleine Elish, Joshua Kroll, Jacob Metcalf, Jack Poulson
Fri 9:45 a.m. - 10:00 a.m.
Spectrogram (Activity)
Emanuel Moss
Fri 10:00 a.m. - 10:30 a.m.
Coffee Break (Break)
Fri 10:30 a.m. - 11:30 a.m.

Algorithmic systems are being widely used in key social institutions, and while they promise radical improvements in fields from public health to energy allocation, they also raise troubling issues of bias, discrimination, and “automated inequality.” They present further challenges that are difficult to resolve: the dual-use nature of these technologies, secondary effects that are hard to anticipate, and altered power relations between individuals, companies, and governments.
• How should we delimit the scope of AI impacts? What can properly be considered an AI impact, as opposed to an impact arising from some other cause?
• How do we detect and document the social impacts of AI?
• What tools, processes, and institutions ought to be involved in addressing these questions?

Fitzroy Christian, Alexa Hagerty, Fabian Rogers, Friederike Schuur, Jacob Snow, Madeleine Elish
Fri 11:30 a.m. - 12:30 p.m.

While a great deal of AI research happens in academic settings, much of that work is operationalized in corporate contexts. Some companies serve as vendors, selling AI systems to government entities; some sell to other companies; some sell directly to end users; and others sell to any combination of the above.
• What responsibilities does the AI industry have with respect to AI impacts?
• How do those responsibilities shift under B2B, B2G, and B2C business models?
• What responsibilities does government have to society with respect to AI impacts arising from industry?
• What role do civil society organizations have to play in this conversation?

Been Kim, Liz O'Sullivan, Friederike Schuur, Andrew Smart, Jacob Metcalf
Fri 12:30 p.m. - 2:00 p.m.
Lunch (Lunch Break)
Fri 2:00 p.m. - 2:45 p.m.
A Conversation with Meredith Whittaker (Interview)
Mona Sloane, Meredith Whittaker
Fri 2:45 p.m. - 3:45 p.m.

The risks and benefits of AI are unevenly distributed within societies and across the globe. Governance regimes are drastically different in various regions of the world, as are the political and ethical implications of AI technologies.
• How do we better understand how AI technologies operate around the world and the range of risks they carry for different societies?
• Are there global claims about the implications of AI that apply everywhere around the globe? If so, what are they?
• What can we learn from AI’s impacts on labor, environment, public health, and agriculture in diverse settings?

Eirini Malliaraki, Jack Poulson, Vinod Prabhakaran, Mona Sloane, Alexa Hagerty
Fri 3:45 p.m. - 4:30 p.m.
Coffee Break (Break)
Fri 4:30 p.m. - 5:45 p.m.

While no set of steps can fully address all AI impacts, there are concrete things that ought to be done, ranging across technical, socio-technical, and legal or regulatory possibilities.
• What technical, social, and/or regulatory solutions are necessary to address the riskiest aspects of AI?
• What are the key approaches to minimizing the risks of AI technologies?

Fitzroy Christian, Lily Hu, Risi Kondor, Brandeis Marshall, Fabian Rogers, Friederike Schuur, Emanuel Moss

Author Information

Igor Rubinov (Dovetail Labs)
Risi Kondor (U. Chicago)
Jack Poulson (Tech Inquiry)
Manfred K. Warmuth (Google Brain)
Emanuel Moss (CUNY Graduate Center | Data & Society)
Alexa Hagerty (University of Cambridge; Dovetail Labs)
