

Poster in Workshop: HCAI@NeurIPS 2022, Human Centered AI

Human-in-the-loop Bias Mitigation in Data Science

Romila Pradhan · Tianyi Li

Keywords: [ human-in-the-loop ] [ fairness ] [ bias mitigation ]


Abstract:

With the widespread adoption of machine learning (ML) in decision making, growing concerns about the transparency and fairness of ML models have driven significant advances in the field of eXplainable Artificial Intelligence (XAI). However, generating explanations with existing XAI techniques and merely reporting model bias are insufficient to locate and mitigate the sources of that bias. In line with the data-centric AI movement, we posit that to mitigate bias we must address the myriad errors and biases inherent in the data. We propose a human-machine framework that strengthens human engagement with data, remedying data errors and data biases toward building fair and trustworthy AI systems.
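To make the distinction concrete, the following is a minimal sketch (not the authors' framework; the dataset, column roles, and review heuristic are hypothetical) of why reporting a bias metric alone is insufficient: the metric quantifies the disparity, but a human reviewer still needs the specific records to inspect for data errors.

```python
# Hypothetical toy data: (group, label), where label 1 is the favorable outcome.
records = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]

def favorable_rate(rows, group):
    """Fraction of favorable outcomes for one demographic group."""
    labels = [y for g, y in rows if g == group]
    return sum(labels) / len(labels)

# Reporting step: a demographic-parity gap names the disparity...
gap = favorable_rate(records, "A") - favorable_rate(records, "B")
print(f"demographic-parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50

# ...but locating the sources of bias means surfacing candidate records
# for a human to inspect for labeling errors or missing context.
to_review = [(g, y) for g, y in records if g == "B" and y == 0]
print(f"records flagged for human review: {len(to_review)}")
```

The reviewer's corrections (relabeling, adding features, dropping erroneous rows) then feed back into retraining, which is the human-in-the-loop step the abstract argues for.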
