Affinity Workshop
Queer in AI Workshop 1
Claas Voelcker · Arjun Subramonian · Vishakha Agrawal · Luca Soldaini · Pan Xu · Pranav A · William Agnew · Umut Pajaro Velasquez · Yanan Long · Sharvani Jha · Ashwin S · Mary Anne Smart · Patrick Feeney · Ruchira Ray

Tue Dec 07 05:00 AM -- 07:00 AM (PST)
Event URL: https://sites.google.com/view/queer-in-ai/neurips-2021?authuser=0

Queer in AI’s demographic survey reveals that most queer scientists in our community do not feel completely welcome at conferences or in their work environments, with the main reasons being a lack of queer community and role models. Over the past years, Queer in AI has worked to address these gaps, yet we have observed that the voices of marginalized queer communities - especially transgender and non-binary folks and queer BIPOC folks - have been neglected. The purpose of this workshop is to highlight issues that these communities face by featuring talks and panel discussions on the inclusion of neurodiverse people in our communities, the intersection of queer and animal rights, and worker rights issues around the world.

The main topics of the workshop will revolve around:
- the intersection of AI, queer identity and neurodiversity
- queer identity and labor rights and organization
- AI and animal rights
- queer identity and caste-based discrimination

Additionally, at Queer in AI’s socials at NeurIPS 2021, we will focus on creating a safe and inclusive space for casual networking and socializing for LGBTQIA+ individuals involved with AI. There will also be additional social events; stay tuned for more details. Together, these components will create a community space where attendees can learn and grow by connecting with each other, bonding over shared experiences, and learning from each individual’s unique insights into AI, queerness, and beyond!

Tue 5:00 a.m. - 6:00 a.m.
Caste discrimination (Panel)
Tue 6:00 a.m. - 7:00 a.m.
Animal-centered AI (Panel)
-
(Poster) [ Visit Poster at Spot H3 in Virtual World ]

This work studies publications in the field of cognitive science, using natural language processing (NLP) and graph-theoretical techniques to connect an analysis of the papers' content (abstracts) to their context (citations, journals). We apply hierarchical topic modeling to the abstracts and community detection algorithms to the citation network, and measure content-context discrepancy to find academic fields that study similar topics but do not cite each other or publish in the same venues. These results show a promising, systematic framework for identifying opportunities for scientific collaboration in highly interdisciplinary fields such as cognitive science and machine learning.

Harlin Lee
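The content-context pipeline the abstract describes can be sketched in a few lines. This is a minimal toy illustration, not the paper's method: the abstracts and citation edges below are invented placeholders, flat NMF stands in for the hierarchical topic model, and greedy modularity stands in for whichever community-detection algorithm the authors used.

```python
import itertools

import networkx as nx
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus: paper id -> abstract (hypothetical stand-ins).
abstracts = {
    0: "neural network language models for text generation",
    1: "language models and word embeddings for text classification",
    2: "working memory and attention in human cognition",
    3: "attention and memory experiments in cognitive psychology",
    4: "neural network language models of processing in the human brain",
}
ids = sorted(abstracts)

# Content side: topic vectors via TF-IDF + NMF (flat, not hierarchical).
X = TfidfVectorizer().fit_transform(abstracts[i] for i in ids)
W = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500).fit_transform(X)

# Context side: toy citation graph and its communities.
G = nx.Graph([(0, 1), (2, 3), (2, 4), (3, 4)])
communities = list(greedy_modularity_communities(G))
comm_of = {n: ci for ci, comm in enumerate(communities) for n in comm}

def cosine(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na and nb else 0.0

# Content-context discrepancy: topically similar pairs that sit in different
# citation communities and do not cite each other are candidate
# collaboration opportunities.
candidates = [
    (cosine(W[i], W[j]), i, j)
    for i, j in itertools.combinations(ids, 2)
    if comm_of[i] != comm_of[j] and not G.has_edge(i, j)
]
best_sim, a, b = max(candidates)
print(f"most similar non-citing cross-community pair: {a} and {b}")
```

Here paper 4 shares vocabulary with the language-modeling cluster but cites only into the cognition cluster, so it surfaces as the highest-discrepancy pair, which is exactly the kind of cross-field blind spot the abstract is after.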
-
(Poster) [ Visit Poster at Spot H2 in Virtual World ]

AI, machine learning, and data science methods are already pervasive in our society and technology, affecting all of our lives in many subtle ways. Trustworthy AI has become an important topic because trust in AI systems and their creators has been lost, or was never present in the first place. Researchers, corporations, and governments have long and painful histories of excluding marginalized groups from technology development, deployment, and oversight. As a direct result of this exclusion, these technologies have long histories of being less useful or even harmful to minoritized groups. This infuriating history illustrates why industry cannot be trusted to self-regulate and why trust in commercial AI systems and development has been lost. We argue that any AI development, deployment, and monitoring framework that aspires to trust must incorporate both feminist, non-exploitative participatory design principles and strong, outside, and continual monitoring and testing. We additionally explain the importance of considering aspects of trustworthiness beyond just transparency, fairness, and accountability - specifically, of treating justice and shifting power to the people and the disempowered as core values of any trustworthy AI system. Creating trustworthy AI starts by funding, supporting, and empowering groups like Queer in AI so the field of AI has the diversity and inclusion to credibly and effectively develop trustworthy AI. Through our years of work and advocacy, we have developed expert knowledge around questions of whether and how gender, sexuality, and other aspects of identity should be used in AI systems and how harms along these lines should be mitigated. Based on this, we discuss a gendered approach to AI, and further propose a queer epistemology and analyze the benefits it can bring to AI.

William Agnew · Arjun Subramonian · Umut Pajaro Velasquez
-
(Poster) [ Visit Poster at Spot H1 in Virtual World ]

This submission is a poem reflecting on classification and belongingness.

Safinah Arshad Ali
-
(Poster) [ Visit Poster at Spot H0 in Virtual World ]

A substantial majority of the world’s languages have no language technologies or NLP toolkits at all. With an increasingly inexorable reliance on technology and the web, depriving people of access to technology in their native language is facilitating language death, and with it, loss of culture, traditions, and linguistic information, and a diminishing richness of the human experience. This harsh reality marks the 21st century as a pivotal time for researchers and engineers in NLP. According to linguists, nearly half of the world's 7000 languages will be extinct before the end of this very century. But what if the advances in natural language processing and computational linguistics could help us change course? There has been a wide range of efforts by research groups on low-resource and resource-poor languages for the purposes of machine translation, and on endangered languages for the purposes of documentation and preservation. But despite numerous efforts in the field, there is no clear sense of direction or unified front for tackling this problem. This paper hopes to unravel the diverse computational efforts being undertaken for low-resource, resource-poor, and endangered language research, the different data resource creation and extraction techniques, and the modern deep learning and statistical models being used specifically for this domain.

Milind Agarwal
-
(Poster) [ Visit Poster at Spot G3 in Virtual World ]

I will present work previously accepted at IQCE 2021, which built on a presentation given at Trans Math Day 2020. In this work, I used mixed methods to model my first year on feminizing hormone replacement therapy. I draw heavily on Quantitative Ethnography and Analytic Autoethnography, and use a desire-based framework. Key to this work was relating (a) "time," as experienced nonlinearly during a life transition, to (b) other qualitative variables. The result is a confident telling of my first year of HRT, organized around a critical event that had a structural effect on the data: my doctor moving me to a three-month dose so I could visit the pharmacist less often, in turn being able to live my damn life, escape unhealthy cycles of behavior, and finally start making progress towards my transition goals.

Mariah A. Knowles
-
(Poster) [ Visit Poster at Spot G2 in Virtual World ]

We present Mementorium, an interactive, branching narrative told in immersive virtual reality (VR). The player uncovers the narrator’s memories of gender- and sexuality-based marginalizations in STEM learning environments, moving from childhood to early adulthood. Mementorium’s design builds upon our previous designs and research on queer reorientations to computing and queer approaches to embodied learning in VR. When LGBTQ+ people’s exclusion is even acknowledged, approaches to addressing the problem often treat LGBTQ+ people as the problem: “We become a problem when we describe a problem” (Ahmed, 2017, p. 39). Framing LGBTQ+ people as the cause of their exclusion leads to solutions that aim to entice and retain LGBTQ+ people in STEM. However, this fails to address the issues that keep LGBTQ+ people from STEM fields. Mementorium aims to increase understanding of the interpersonal and systemic factors contributing to LGBTQ+ exclusion from STEM learning and professions, and to encourage more expansive thinking and action in solidarity with LGBTQ+ people. Mementorium tells the story of a queer, nonbinary person who is interested in learning about technology but faces barriers to participation due to normative and oppressive ideas about gender and sexuality. Each of the memories that the player uncovers has three branching points in the narrative. First, the player uncovers the memory, revealing the harm caused by marginalization. Next, the player chooses a reaction to the situation that reorients them to the narrator’s experiences. Finally, the player chooses a future-oriented response to direct the narrator’s actions, offering choices for individual or group-oriented action or action on a larger scale of social change. We are researching Mementorium to see how players make sense of LGBTQ+ marginalizations as individual and systemic issues and how to reorient players toward counter-hegemonic actions that support marginalized people.

Dylan Paré

Author Information

Claas Voelcker (University of Toronto, Queer in AI)
Arjun Subramonian (University of California, Los Angeles)
Vishakha Agrawal (Dayananda Sagar College of Engineering)
Luca Soldaini (Amazon)
Pan Xu (Caltech)
Pranav A (Dayta AI)
William Agnew (University of Washington)
Umut Pajaro Velasquez (Queer in AI)
Yanan Long (University of Chicago)
Sharvani Jha (UCLA)
Ashwin S (IIIT Hyderabad)

Ashwin is a Research Associate at Precog and the Language Technologies Research Center, IIIT Hyderabad. They use both qualitative and computational methods to understand phenomena on social media platforms and computer-mediated communication technologies, with the goal of making these systems more inclusive and less harmful for the margins of society.

Mary Anne Smart (UC San Diego)
Patrick Feeney (Tufts University)
Ruchira Ray (SRM Institute of Science and Technology)

More from the Same Authors

  • 2021 : Rebuilding Trust: Queer in AI Approach to Artificial Intelligence Risk Management »
    William Agnew · Arjun Subramonian · Umut Pajaro Velasquez
  • 2021 Affinity Workshop: Black in AI Workshop »
    Victor Silva · Ham Abdul-Rashid · Mírian Silva · Irene Nandutu · Michael Woldeyohannis · Foutse Yuehgoh · Salomey Osei · Patrick Feeney
  • 2021 Social: Queer in AI »
    Claas Voelcker
  • 2021 Affinity Workshop: Indigenous in AI Workshop »
    Mason Grimshaw · Michael Running Wolf · Patrick Feeney
  • 2021 Affinity Workshop: Queer in AI Workshop 2 »
    Claas Voelcker · Arjun Subramonian · Vishakha Agrawal · Luca Soldaini · Pan Xu · Pranav A · William Agnew · Umut Pajaro Velasquez · Yanan Long · Sharvani Jha · Ashwin S · Mary Anne Smart · Patrick Feeney · Ruchira Ray
  • 2021 Affinity Workshop: LatinX in AI (LXAI) Research @ NeurIPS 2021 »
Maria Luisa Santiago · Andres Munoz · Laura Montoya · Karla Caballero · Isabel Metzger · Jose Gallego-Posada · Juan M Banda · Gabriela Vega · Amanda Duarte · Patrick Feeney · Lourdes Ramírez Cerna · Walter M Mayor · Omar U. Florez · Rosina Weber · Rocio Zorrilla
  • 2020 Workshop: Object Representations for Learning and Reasoning »
    William Agnew · Rim Assouel · Michael Chang · Antonia Creswell · Eliza Kosoy · Aravind Rajeswaran · Sjoerd van Steenkiste
  • 2020 : Introduction »
    William Agnew
  • 2020 Workshop: Resistance AI Workshop »
    Suzanne Kite · Mattie Tesfaldet · J Khadijah Abdurahman · William Agnew · Elliot Creager · Agata Foryciarz · Raphael Gontijo Lopes · Pratyusha Kalluri · Marie-Therese Png · Manuel Sabin · Maria Skoularidou · Ramon Vilarino · Rose Wang · Sayash Kapoor · Micah Carroll
  • 2020 Workshop: Privacy Preserving Machine Learning - PriML and PPML Joint Edition »
    Borja Balle · James Bell · Aurélien Bellet · Kamalika Chaudhuri · Adria Gascon · Antti Honkela · Antti Koskela · Casey Meehan · Olga Ohrimenko · Mi Jung Park · Mariana Raykova · Mary Anne Smart · Yu-Xiang Wang · Adrian Weller
  • 2020 : Laura Montoya Q&A »
    Laura Montoya · William Agnew
  • 2019 Workshop: Privacy in Machine Learning (PriML) »
Borja Balle · Kamalika Chaudhuri · Antti Honkela · Antti Koskela · Casey Meehan · Mi Jung Park · Mary Anne Smart · Adrian Weller
  • 2018 : Poster Session 1 + Coffee »
    Tom Van de Wiele · Rui Zhao · JFernando Hernandez-Garcia · Fabio Pardo · Xian Yeow Lee · Xiaolin Andy Li · Marcin Andrychowicz · Jie Tang · Suraj Nair · Juhyeon Lee · Cédric Colas · Ali Eslami · Yen-Chen Wu · Stephen McAleer · Ryan Julian · Yang Xue · Matthia Sabatelli · Pranav Shyam · Alexandros Kalousis · Giovanni Montana · Emanuele Pesce · Felix Leibfried · Zhanpeng He · Chunxiao Liu · Yanjun Li · Yoshihide Sawada · Alexander Pashevich · Tejas Kulkarni · Keiran Paster · Luca Rigazio · Quan Vuong · Hyunggon Park · Minhae Kwon · Rivindu Weerasekera · Shamane Siriwardhanaa · Rui Wang · Ozsel Kilinc · Keith Ross · Yizhou Wang · Simon Schmitt · Thomas Anthony · Evan Cater · Forest Agostinelli · Tegg Sung · Shirou Maruyama · Alex Shmakov · Devin Schwab · Mohammad Firouzi · Glen Berseth · Denis Osipychev · Jesse Farebrother · Jianlan Luo · William Agnew · Peter Vrancx · Jonathan Heek · Catalin Ionescu Ionescu · Haiyan Yin · Megumi Miyashita · Nathan Jay · Noga H. Rotman · Sam Leroux · Shaileshh Bojja Venkatakrishnan · Henri Schmidt · Jack Terwilliger · Ishan Durugkar · Jonathan Sauder · David Kas · Arash Tavakoli · Alain-Sam Cohen · Philip Bontrager · Adam Lerer · Thomas Paine · Ahmed Khalifa · Rubén Rodriguez · Avi Singh · Yiming Zhang