

Spotlight in Workshop: Socially Responsible Language Modelling Research (SoLaR)

Subtle Misogyny Detection and Mitigation: An Expert-Annotated Dataset

Anna Richter · Brooklyn Sheppard · Allison Cohen · Elizabeth Smith · Tamara Kneese · Carolyne Pelletier · Ioana Baldini · Yue Dong


Abstract:

Using novel approaches to dataset development, the Biasly dataset captures the nuance and subtlety of misogyny in ways that are unique within the literature. Built in collaboration with multidisciplinary experts and the annotators themselves, the dataset contains annotations of movie subtitles, capturing colloquial expressions of misogyny in North American film. The dataset supports a range of NLP tasks, including classification, severity-score regression, and text generation for rewrites. In this paper, we discuss the methodology used, analyze the annotations obtained, and provide baselines using common NLP algorithms for misogyny detection and mitigation. We hope this work will promote AI for social good in NLP for bias detection, explanation, and removal.
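As a concrete illustration of the detection task, the sketch below fits a simple classification baseline of the kind the abstract alludes to. The file name biasly_annotations.csv and the columns text and is_misogynistic are hypothetical placeholders for illustration, not the dataset's documented schema.

# A minimal detection-baseline sketch. File name and column names are
# assumptions; adapt them to the actual annotation export.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical export of the annotations: one subtitle line per row,
# with a binary misogyny label.
df = pd.read_csv("biasly_annotations.csv")

X_train, X_test, y_train, y_test = train_test_split(
    df["text"],
    df["is_misogynistic"],
    test_size=0.2,
    random_state=0,
    stratify=df["is_misogynistic"],
)

# TF-IDF features feeding logistic regression: one of the "common NLP
# algorithms" a classification baseline might use.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

The same pattern extends to the severity-score task by swapping the binary label for a continuous severity column and the classifier for a regressor (e.g., Ridge), keeping the feature pipeline unchanged.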
