Workshop
Sat Dec 12 05:30 AM -- 03:00 PM (PST)
Navigating the Broader Impacts of AI Research
Carolyn Ashurst · Rosie Campbell · Deborah Raji · Solon Barocas · Stuart Russell


Following growing concerns about both harmful research impact and research conduct in computer science, including concerns with research published at NeurIPS, this year’s conference introduced two new mechanisms for ethical oversight: a requirement that authors include a “broader impact statement” in their paper submissions, and additional evaluation criteria asking paper reviewers to identify any potential ethical issues with the submissions.

These efforts reflect a recognition that existing research norms have failed to address the impacts of AI research, and they take place against the backdrop of a larger reckoning with the role of AI in perpetuating injustice. The changes have been met with both praise and criticism. Some within and outside the community see them as a crucial first step towards integrating ethical reflection and review into the research process, fostering necessary changes to protect populations at risk of harm. Others worry that AI researchers are not well placed to recognize and reason about the potential impacts of their work, as effective ethical deliberation may require different expertise and the involvement of other stakeholders.

This debate reveals that even as the AI research community begins to grapple with the legitimacy of certain research questions and to critically reflect on its research practices, there remain many open questions about how to ensure effective ethical oversight. This workshop therefore aims to examine how concerns with harmful impacts should affect the way the research community develops its research agendas, conducts its research, evaluates its research contributions, and handles the publication and dissemination of its findings. The event complements other NeurIPS workshops this year devoted to normative issues in AI and builds on others from years past, but adopts a distinct focus on the ethics of research practice and the ethical obligations of researchers.

Welcome
Morning keynote (Keynote)
Ethical oversight in the peer review process (Discussion panel)
Morning break (Break)
Harms from AI research (Discussion panel)
How should researchers engage with controversial applications of AI? (Discussion panel)
Lunch and watch lightning talks (in parallel) from workshop submissions (Break)
Discussions with authors of submitted papers (Breakouts)
Responsible publication: NLP case study (Discussion panel)
Afternoon break (Break)
Strategies for anticipating and mitigating risks (Discussion panel)
The roles of different parts of the research ecosystem in navigating broader impacts (Discussion panel)
Closing remarks
Auditing Government AI: Assessing ethical vulnerability of machine learning (Lightning talk (5-7 mins))
AI in the “Real World”: Examining the Impact of AI Deployment in Low-Resource Contexts (Lightning talk (5-7 mins))
An Open Review of OpenReview: A Critical Analysis of the Machine Learning Conference Review Process (Lightning talk (5-7 mins))
Non-Portability of Algorithmic Fairness in India (Lightning talk (5-7 mins))
Anticipatory Ethics and the Role of Uncertainty (Lightning talk (5-7 mins))
Like a Researcher Stating Broader Impact For the Very First Time (Lightning talk (5-7 mins))
Training Ethically Responsible AI Researchers: a Case Study (Lightning talk (5-7 mins))
An Ethical Highlighter for People-Centric Dataset Creation (Lightning talk (5-7 mins))
Nose to Glass: Looking In to Get Beyond (Lightning talk (5-7 mins))
Ethical Testing in the Real World: Recommendations for Physical Testing of Adversarial Machine Learning Attacks (Lightning talk (5-7 mins))
The Managerial Effects of Algorithmic Fairness Activism (Lightning talk (5-7 mins))
Ideal theory in AI ethics (Lightning talk (5-7 mins))
Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics (Lightning talk (5-7 mins))
Overcoming Failures of Imagination in AI Infused System Development and Deployment (Lightning talk (5-7 mins))