Poster
in
Workshop: Trustworthy and Socially Responsible Machine Learning

Just Following AI Orders: When Unbiased People Are Influenced By Biased AI

Hammaad Adam · Aparna Balagopalan · Emily Alsentzer · Fotini Christia · Marzyeh Ghassemi


Abstract:

Prior research has shown that artificial intelligence (AI) systems often encode biases against minority subgroups; however, little work has focused on mitigating the harm discriminatory algorithms can cause in high-stakes settings such as medicine. In this study, we experimentally evaluated the impact of biased AI recommendations on emergency decisions, where participants respond to mental health crises by calling for either medical or police assistance. We found that although respondents' decisions were unbiased in the absence of advice, both clinicians and non-experts were influenced by prescriptive recommendations from a biased algorithm, choosing police help more often in emergencies involving African-American or Muslim men. Crucially, we also found that using descriptive flags rather than prescriptive recommendations allowed respondents to retain their original, unbiased decision-making. Our work demonstrates the practical danger of using biased models in health contexts and suggests that appropriately framing decision support can mitigate the effects of AI bias. These findings must be carefully considered in the many real-world clinical scenarios where inaccurate or biased models may inform important decisions.