The Queer in AI workshop at NeurIPS asks its participants to question the status quo of machine learning research and applications in society, in a world ravaged by queerphobia, heteropatriarchy, corporate hegemony, racial disparity and global economic inequality. The Queer in AI membership survey shows that nearly 70% of queer scientists are not publicly out, and many have faced discrimination or even violence simply for existing. We call on our community to face these challenges head-on, to advocate for and build a future where technological progress empowers marginalized people and does not ossify the status quo of the past and present.
In recent months, large language models have made impressive progress on well-established AI benchmarks, to the point where some believe that continued increases in the size of language models can bring about AGI. This vision ignores the real and well-documented harms that this paradigm presents for marginalized communities, and it concentrates power in the hands of corporations and other actors who can afford to collect data and scale systems without considering the lives of those impacted. It also silences those who work diligently to explore the weaknesses, problems and shortcomings of these models.
As such, it is imperative to bring this workshop to NeurIPS and invite all participants of the conference to reflect on, discuss and build a future of equitable AI for people of all identities and backgrounds, beyond simplistic questions of the raw scale of computational resources and data. In this year's workshop, which will be held both virtually and in person, we want to focus in particular on the ongoing tension around ethical AI research at large corporations whose profit motives are intrinsically linked to AI, and on the impact of AI technology on people with intersectional identities, such as queer neurodiverse people and queer people of color.
Mon 7:00 a.m. - 7:15 a.m. | Introduction: Queer in AI (Introduction)
Mon 7:15 a.m. - 7:45 a.m. | The possibilities and limits of intersectionality (Oral Presentation)
Inspired by the Combahee River Collective Statement and the pioneering essays of Prof. Kimberlé Crenshaw, this talk suggests some tentative ways to imagine a practice of intersectionality in India. Drawing on Dr. Ambedkar's remarkable insight into the relation between caste and sexuality, the talk offers a critical reading of the Navtej Johar and NALSA judgments of the Supreme Court of India, which recognised the equal rights of LGB and T persons. Rather than treating intersectionality as a catch-all solution to political problems, the talk suggests that intersectionality sometimes leaves us with an aporia.
Mon 7:45 a.m. - 8:15 a.m. | The Queer Gaze (Oral Presentation)
Following the reading down of Section 377 by the Supreme Court of India in 2018, various production houses and OTT platforms decided that it was time to have some queer content. More often than not, however, this was not really about empowering the community but about getting a tick mark for having done something about inclusion. What followed was a handful of films with queer narratives, mostly directed by cis men who, as they put it, were "taking baby steps" towards understanding the community. What was missing was the queer community being empowered to tell their own stories, and queer artists and technicians being hired to represent themselves.
Mon 8:30 a.m. - 9:30 a.m. | Faculty and Queerness (Discussion Panel)
This panel brings together a diverse group of queer faculty to discuss their experiences in the workplace. From navigating academia to being out in their departments, panelists will share their experiences and the lessons they have learned. The panel will also discuss how to foster inclusivity in the classroom.
Mon 9:30 a.m. - 10:15 a.m. | Sponsor Booth Coffee Break (Sponsorships)
Mon 10:15 a.m. - 11:30 a.m. | Lunch
Mon 11:30 a.m. - 12:00 p.m. | Sex and Gender in Computer Graphics Research Literature (Oral Presentation)
We survey the treatment of sex and gender in the Computer Graphics research literature from an algorithmic fairness perspective. We conclude that current trends in the use of gender in our research community are scientifically incorrect and constitute a form of algorithmic bias with potentially harmful effects. We propose ways of addressing these trends as technical limitations.
Mon 12:00 p.m. - 12:30 p.m. | Name Change Policies: A Brief (Personal) Tour (Oral Presentation)
This talk will cover the current state of name changes on previously published papers, an issue that's particularly important for trans authors, largely through the lens of my own name change. I'll talk about what publishers allow, how we got there, and issues with third-party tools like Google Scholar. I'll also examine how effective all this is at getting people to actually cite you by the right name.
Mon 12:45 p.m. - 1:45 p.m. | Immigration and Queerness (Discussion Panel)
Queer individuals often have a unique relationship to their national identity, shaped by societal norms and political tolerance in their country. These tensions can even motivate emigration, which then brings new challenges. This panel will discuss the difficulties queer individuals face when moving to new countries, as well as the challenges of living in a multicultural environment.
Mon 1:45 p.m. - 2:30 p.m. | Sponsor Networking Session (Sponsorships)
Mon 2:30 p.m. - 4:00 p.m. | Affinity Joint Poster Session (Poster Session)
- | Virtual Affinity Poster Session (Topia Poster Session)
The Virtual Affinity Poster Session will be held on Monday 5 Dec (or Tuesday 6 Dec for far eastern timezones; check the link for your local time).
- | Multi-objective Bayesian Optimization with Heuristic Objectives for Biomedical and Molecular Data Analysis Workflows (Poster)
Many practical applications require the optimization of multiple, computationally expensive, and possibly competing objectives, a setting well suited to multi-objective Bayesian optimization (MOBO) procedures. However, for many types of biomedical data, measures of data analysis workflow success are often heuristic, so it is not known a priori which objectives are useful. MOBO methods that return the full Pareto front may therefore be suboptimal in these cases. Here we propose a novel MOBO method that adaptively updates the scalarization function using properties of the posterior of a multi-output Gaussian process surrogate. This approach selects useful objectives based on a flexible set of desirable criteria, allowing the functional form of each objective to guide the optimization. We demonstrate the qualitative behaviour of our method on toy data and perform proof-of-concept analyses of single-cell RNA sequencing and highly multiplexed imaging datasets.
Alina Selega · Kieran Campbell
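To make the scalarization idea concrete, here is a minimal sketch of generic scalarization-based MOBO in Python. It is not the authors' method: the random Chebyshev weights below stand in for their adaptive, posterior-driven update, and the toy objectives, candidate set, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of scalarization-based multi-objective Bayesian optimization.
# ParEGO-style pattern: fit a GP surrogate to a scalarized objective, then
# pick the next point by expected improvement. The random weight draw is a
# placeholder for the adaptive, posterior-driven update the poster proposes.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def objectives(x):
    """Two toy competing objectives on [0, 1], both to be minimized."""
    return np.column_stack([np.sin(3 * x) + x, np.cos(3 * x) - x])

def expected_improvement(mu, sigma, best):
    z = (best - mu) / np.maximum(sigma, 1e-9)
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

X = rng.uniform(0, 1, size=(5, 1))   # initial design
Y = objectives(X.ravel())

for step in range(20):
    # Augmented Chebyshev scalarization with freshly drawn weights.
    w = rng.dirichlet(np.ones(Y.shape[1]))
    y = np.max(w * Y, axis=1) + 0.05 * np.sum(w * Y, axis=1)

    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, size=(256, 1))
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]

    X = np.vstack([X, x_next])
    Y = np.vstack([Y, objectives(x_next)])
```

The scalarization reduces the multi-objective problem to a sequence of single-objective GP fits, which is what makes a posterior-based choice of weights possible in the first place.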
- | Digging into the (Internet) Archive: Examining the NSFW Model Behind the 2018 Tumblr Purge (Poster)
In December 2018, Tumblr took down massive amounts of LGBTQ content from its platform. Motivated in part by increasing pressure from financial institutions and a newly passed law -- SESTA/FOSTA, which made companies liable for sex trafficking online -- Tumblr implemented a strict "not safe for work" (NSFW) model, whose false positives included images of fully clothed women, handmade and digital art, and other innocuous objects, such as vases. The Archive Team, in conjunction with the Internet Archive, jumped into high gear and began to scrape self-tagged NSFW blogs in the two weeks between Tumblr's announcement of its new policy and its algorithmic operationalization. At the time, Tumblr was considered a safe haven for the LGBTQ community; in 2013, Yahoo! had bought Tumblr for $1.1 billion. In the aftermath of the so-called "Tumblr purge," Tumblr lost its main user base and, as of 2019, was valued at $3 million. This paper digs into a slice of the 90 TB of data saved by the Archive Team. This is a unique opportunity to peek under the hood of Yahoo's opennsfw model, which experts believe was used in the Tumblr purge, and to examine the distribution of false positives on the Archive Team dataset. Specifically, we run the opennsfw model on our dataset and use the t-SNE algorithm to project the similarities across images into 3D space. We also had our 100,000-image dataset labeled by LGBTQ community members on a Likert scale along the following dimensions: 1) does the image depict nudity or sex? 2) does the image contain or refer to LGBTQ relationships, themes, or subjects? This labeled data is used only to evaluate bias in the porn classifier and to help measure the impact of the Tumblr purge on the LGBTQ community.
Renata Barreto · Claudia Von Vacano · Aaron Culich
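The t-SNE projection step described above could look roughly like the following sketch. It assumes per-image feature vectors (e.g., activations from an NSFW classifier such as Yahoo's opennsfw) have already been extracted upstream into a hypothetical `features.npy`; the file name, perplexity, and candidate settings are illustrative, not the authors' exact pipeline.

```python
# Project per-image feature vectors into 3D with t-SNE so that visually
# similar images land near one another; this is a generic sketch, not the
# poster's exact configuration.
import numpy as np
from sklearn.manifold import TSNE

features = np.load("features.npy")   # assumed shape: (n_images, n_features)
embedding = TSNE(n_components=3, perplexity=30,
                 init="pca", random_state=0).fit_transform(features)
np.save("tsne_3d.npy", embedding)    # one 3D coordinate per image
```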
- | Que(e)rying the Use of Artificial Intelligence for Infectious Disease Surveillance: How to Ensure New Tools Do Not Perpetuate a Long History of Health Disparities Affecting LGBTQI+ Populations (Poster)
From HIV/AIDS to COVID-19 to monkeypox, outbreaks have disproportionately affected LGBTQI+ communities. Evidence suggests that the likelihood of pandemics will only continue to increase, a possibility that highlights our need for better tools. By enabling the robust, efficient, and timely analysis of huge amounts of data, artificial intelligence has the potential to help decision-makers better respond to, manage, and even prevent infectious disease outbreaks (Malik et al., 2021; Wong, Zhou, and Zhang, 2019). This could, ultimately, reduce harm, disruption, and the loss of human life. However, AI also has a history of intensifying and perpetuating major inequities, including anti-queer bias. Given these disproportionate effects and epidemiology's oppressive roots, we must be particularly thoughtful as we apply algorithmic systems to infectious disease surveillance. Drawing from queer theory, critical race theory, and critical feminist studies, this project adopts an intersectional and reparative approach to studying the use of AI for such purposes. In doing so, it engages the audience in a dynamic give and take to explore how we can achieve algorithmic justice for LGBTQI+ people in the face of these emerging AI-enabled tools.
Elise Racine
- | Making Intelligence: Ethics, IQ, and ML Benchmarks (Poster)
The ML community recognizes the importance of anticipating and mitigating the potential negative impacts of benchmark research. In this position paper, we argue that more attention needs to be paid to areas of ethical risk that lie at the technical and scientific core of ML benchmarks. We identify overlooked structural similarities between human IQ tests and ML benchmarks: both set standards for describing, evaluating, and comparing performance on tasks relevant to intelligence. Drawing on prior research on IQ benchmarks from the feminist philosophy of science, we argue that values need to be considered when creating ML benchmarks and datasets, and that it is not possible to avoid this choice by creating benchmarks that are value-neutral. Finally, we outline practical recommendations for benchmark research ethics and ethics review.
Leif Hancox-Li · Borhane Blili-Hamelin
- | Evolving Label Usage within Generation Z when Self-Describing Sexual Orientation (Poster)
Evaluating change in ranked term importance in a growing corpus is a powerful tool for understanding changes in vocabulary usage. In this paper, we analyze a corpus of free-response answers in which 33,993 LGBTQ Generation Z respondents aged 13 to 24 in the United States self-describe their sexual orientation. We observe that certain labels, such as bisexual, pansexual, and lesbian, remain equally important across age groups, while the importance of other labels, such as homosexual, demisexual, and omnisexual, evolves across age groups. Although Generation Z is often stereotyped as homogeneous, we observe noticeably different label usage when self-describing sexual orientation within it. We urge interested parties to routinely survey which sexual orientation labels matter most to their target audience and to refresh their materials (such as demographic surveys) to reflect the constantly evolving LGBTQ community and create an inclusive environment.
Wilson Lee · J Hobbs
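One common way to operationalize "ranked term importance" per age group is TF-IDF over per-group documents, as in the hedged sketch below. The response strings and grouping are placeholders, not the authors' dataset, and TF-IDF is one plausible choice of importance measure rather than their confirmed method.

```python
# Rank label importance per age group by treating each group's pooled
# free-text responses as one document and scoring terms with TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder data: one pooled document per age group.
responses_by_age = {
    13: "bisexual pansexual questioning bisexual lesbian",
    18: "lesbian bisexual demisexual pansexual",
    24: "gay homosexual omnisexual lesbian bisexual",
}

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(responses_by_age.values())
terms = vectorizer.get_feature_names_out()

for age, row in zip(responses_by_age, tfidf.toarray()):
    ranked = sorted(zip(terms, row), key=lambda t: -t[1])
    print(age, [term for term, score in ranked[:5]])
```

Comparing these per-group rankings across ages is what reveals labels whose importance is stable (e.g., bisexual) versus labels whose importance shifts between younger and older respondents.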
Author Information
Sarthak Arora (UC Berkeley)
Jaidev Shriram (University of California, San Diego)
Evan Dong (Brown University)
Divija Nagaraju (Carnegie Mellon University)
Kruno Lehman (ETH Zürich / QueerInAI)
Yanan Long (University of Chicago)
Nenad Tomasev (DeepMind)
Ashwin S (IIIT Hyderabad)
Ashwin is a Research Associate at Precog and the Language Technologies Research Center, IIIT Hyderabad. They use both qualitative and computational methods to understand phenomena on social media platforms and computer-mediated communication technologies, with the goal of making these systems more inclusive and less harmful for the margins of society.
Hang Yuan (University of Oxford)
Ruchira Ray (University of Texas at Austin)
Claas Voelcker (University of Toronto, Queer in AI)
More from the Same Authors
- 2020 : Training Ethically Responsible AI Researchers: a Case Study » Hang Yuan · Claudia Vanea · Federica Lucivero · Nina Hallowell
- 2022 : Caused by Race or Caused by Racism? Limitations in Envisioning Fair Counterfactuals » Evan Dong
- 2022 : Active Acquisition for Multimodal Temporal Data: A Challenging Decision-Making Task » Jannik Kossen · Cătălina Cangea · Eszter Vértes · Andrew Jaegle · Viorica Patraucean · Ira Ktena · Nenad Tomasev · Danielle Belgrave
- 2023 Workshop: 6th Robot Learning Workshop: Pretraining, Fine-Tuning, and Generalization with Large Scale Models » Dhruv Shah · Paula Wulkop · Claas Voelcker · Georgia Chalvatzaki · Alex Bewley · Hamidreza Kasaei · Ransalu Senanayake · Julien PEREZ · Jonathan Tompson
- 2023 Affinity Workshop: Queer in AI » Sharvani Jha · Ruchira Ray · Sarthak Arora
- 2022 : Advancing the participatory approach to AI in Mental Health » Wilson Lee · Munmun De Choudhury · Morgan Scheuerman · Julia Hamer-Hunt · Dan Joyce · Nenad Tomasev · Kevin McKee · Shakir Mohamed · Danielle Belgrave · Christopher Burr
- 2022 Workshop: Empowering Communities: A Participatory Approach to AI for Mental Health » Andrey Kormilitzin · Dan Joyce · Nenad Tomasev · Kevin McKee
- 2022 : Opening remarks and welcome » Andrey Kormilitzin · Dan Joyce · Nenad Tomasev · Kevin McKee
- 2021 Social: Queer in AI » Claas Voelcker
- 2021 Affinity Workshop: Queer in AI Workshop 2 » Claas Voelcker · Arjun Subramonian · Vishakha Agrawal · Luca Soldaini · Pan Xu · Pranav A · William Agnew · Juan Pajaro Velasquez · Yanan Long · Sharvani Jha · Ashwin S · Mary Anne Smart · Patrick Feeney · Ruchira Ray
- 2021 Affinity Workshop: Queer in AI Workshop 1 » Claas Voelcker · Arjun Subramonian · Vishakha Agrawal · Luca Soldaini · Pan Xu · Pranav A · William Agnew · Juan Pajaro Velasquez · Yanan Long · Sharvani Jha · Ashwin S · Mary Anne Smart · Patrick Feeney · Ruchira Ray
- 2019 : Implementing Responsible AI » Brian Green · Wendell Wallach · Patrick Lin · Nenad Tomasev · Jingying Yang · Libby Kinsey
- 2019 : AI in Healthcare: Working Towards Positive Clinical Impact » Nenad Tomasev
- 2019 : Alan Karthikesalingam & Nenad Tomasev Talk » Nenad Tomasev