The fair-ranking problem, which asks to rank a given set of items to maximize utility subject to group fairness constraints, has received attention in the fairness, information retrieval, and machine learning literature. Recent works, however, observe that errors in socially-salient (including protected) attributes of items can significantly undermine the fairness guarantees of existing fair-ranking algorithms, and they raise the problem of mitigating the effect of such errors. We study the fair-ranking problem under a model where socially-salient attributes of items are randomly and independently perturbed. We present a fair-ranking framework that incorporates group fairness requirements along with probabilistic information about perturbations in socially-salient attributes. We provide provable guarantees on the fairness and utility attainable by our framework and show that it is information-theoretically impossible to significantly beat these guarantees. Our framework works for multiple non-disjoint attributes and a general class of fairness constraints that includes proportional and equal representation. Empirically, we observe that our algorithm outputs rankings with higher fairness than baselines, and achieves a similar or better fairness-utility trade-off.
Author Information
Anay Mehrotra (Yale University)
Nisheeth Vishnoi (Yale University)
More from the Same Authors
- 2021 Spotlight: Coresets for Time Series Clustering »
  Lingxiao Huang · K Sudhir · Nisheeth Vishnoi
- 2022 Spotlight: Lightning Talks 2A-2 »
  Harikrishnan N B · Jianhao Ding · Juha Harviainen · Yizhen Wang · Lue Tao · Oren Mangoubi · Tong Bu · Nisheeth Vishnoi · Mohannad Alhanahnah · Mikko Koivisto · Aditi Kathpalia · Lei Feng · Nithin Nagaraj · Hongxin Wei · Xiaozhu Meng · Petteri Kaski · Zhaofei Yu · Tiejun Huang · Ke Wang · Jinfeng Yi · Jian Liu · Sheng-Jun Huang · Mihai Christodorescu · Songcan Chen · Somesh Jha
- 2022 Spotlight: Re-Analyze Gauss: Bounds for Private Matrix Approximation via Dyson Brownian Motion »
  Oren Mangoubi · Nisheeth Vishnoi
- 2022 Spotlight: Sampling from Log-Concave Distributions with Infinity-Distance Guarantees »
  Oren Mangoubi · Nisheeth Vishnoi
- 2022 Spotlight: Lightning Talks 2A-1 »
  Caio Kalil Lauand · Ryan Strauss · Yasong Feng · lingyu gu · Alireza Fathollah Pour · Oren Mangoubi · Jianhao Ma · Binghui Li · Hassan Ashtiani · Yongqi Du · Salar Fattahi · Sean Meyn · Jikai Jin · Nisheeth Vishnoi · zengfeng Huang · Junier B Oliva · yuan zhang · Han Zhong · Tianyu Wang · John Hopcroft · Di Xie · Shiliang Pu · Liwei Wang · Robert Qiu · Zhenyu Liao
- 2022 Poster: Sampling from Log-Concave Distributions with Infinity-Distance Guarantees »
  Oren Mangoubi · Nisheeth Vishnoi
- 2022 Poster: Re-Analyze Gauss: Bounds for Private Matrix Approximation via Dyson Brownian Motion »
  Oren Mangoubi · Nisheeth Vishnoi
- 2021 Poster: Fair Classification with Adversarial Perturbations »
  L. Elisa Celis · Anay Mehrotra · Nisheeth Vishnoi
- 2021 Poster: Coresets for Time Series Clustering »
  Lingxiao Huang · K Sudhir · Nisheeth Vishnoi
- 2020 Poster: Coresets for Regressions with Panel Data »
  Lingxiao Huang · K Sudhir · Nisheeth Vishnoi
- 2019 Poster: Online sampling from log-concave distributions »
  Holden Lee · Oren Mangoubi · Nisheeth Vishnoi
- 2019 Poster: Coresets for Clustering with Fairness Constraints »
  Lingxiao Huang · Shaofeng Jiang · Nisheeth Vishnoi