Poster
Fair Classification with Adversarial Perturbations
L. Elisa Celis · Anay Mehrotra · Nisheeth Vishnoi
We study fair classification in the presence of an omniscient adversary that, given an $\eta$, is allowed to choose an arbitrary $\eta$-fraction of the training samples and arbitrarily perturb their protected attributes. The motivation comes from settings in which protected attributes can be incorrect due to strategic misreporting, malicious actors, or errors in imputation; prior approaches, which make stochastic or independence assumptions on such errors, may fail to satisfy their guarantees in this adversarial setting. Our main contribution is an optimization framework for learning fair classifiers in this adversarial setting that comes with provable guarantees on accuracy and fairness. Our framework works with multiple, non-binary protected attributes, is designed for the large class of linear-fractional fairness metrics, and can also handle perturbations beyond protected attributes. We prove near-tightness of our framework's guarantees for natural hypothesis classes: no algorithm can have significantly better accuracy, and any algorithm with better fairness must have lower accuracy. Empirically, we evaluate the statistical rate of the classifiers produced by our framework on real-world and synthetic datasets against a family of adversaries.
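For context, the statistical rate evaluated in the experiments is commonly defined as the worst-case ratio of positive-prediction rates across protected groups (this is the standard formulation of the metric; the paper's exact definition and notation may differ):

$$\mathrm{SR}(f) \;=\; \min_{a,\,b} \; \frac{\Pr[f(X)=1 \mid Z=a]}{\Pr[f(X)=1 \mid Z=b]},$$

where $Z$ denotes the protected attribute and $f$ the classifier; a classifier is considered $\tau$-fair when $\mathrm{SR}(f) \ge \tau$. Note that statistical rate is linear-fractional in the group-conditional prediction rates, which is the class of fairness metrics the framework is designed for. Under the adversary described above, up to an $\eta$-fraction of training samples may have $Z$ replaced arbitrarily, so empirical estimates of these conditional rates, and hence of $\mathrm{SR}(f)$, can be systematically biased by the perturbation.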
Author Information
L. Elisa Celis (Yale University)
Anay Mehrotra (Yale University)
Nisheeth Vishnoi (Yale University)
More from the Same Authors
- 2021 Spotlight: Coresets for Time Series Clustering
  Lingxiao Huang · K Sudhir · Nisheeth Vishnoi
- 2022 Spotlight: Lightning Talks 2A-2
  Harikrishnan N B · Jianhao Ding · Juha Harviainen · Yizhen Wang · Lue Tao · Oren Mangoubi · Tong Bu · Nisheeth Vishnoi · Mohannad Alhanahnah · Mikko Koivisto · Aditi Kathpalia · Lei Feng · Nithin Nagaraj · Hongxin Wei · Xiaozhu Meng · Petteri Kaski · Zhaofei Yu · Tiejun Huang · Ke Wang · Jinfeng Yi · Jian Liu · Sheng-Jun Huang · Mihai Christodorescu · Songcan Chen · Somesh Jha
- 2022 Spotlight: Re-Analyze Gauss: Bounds for Private Matrix Approximation via Dyson Brownian Motion
  Oren Mangoubi · Nisheeth Vishnoi
- 2022 Spotlight: Sampling from Log-Concave Distributions with Infinity-Distance Guarantees
  Oren Mangoubi · Nisheeth Vishnoi
- 2022 Spotlight: Lightning Talks 2A-1
  Caio Kalil Lauand · Ryan Strauss · Yasong Feng · lingyu gu · Alireza Fathollah Pour · Oren Mangoubi · Jianhao Ma · Binghui Li · Hassan Ashtiani · Yongqi Du · Salar Fattahi · Sean Meyn · Jikai Jin · Nisheeth Vishnoi · zengfeng Huang · Junier B Oliva · yuan zhang · Han Zhong · Tianyu Wang · John Hopcroft · Di Xie · Shiliang Pu · Liwei Wang · Robert Qiu · Zhenyu Liao
- 2022 Poster: Sampling from Log-Concave Distributions with Infinity-Distance Guarantees
  Oren Mangoubi · Nisheeth Vishnoi
- 2022 Poster: Fair Ranking with Noisy Protected Attributes
  Anay Mehrotra · Nisheeth Vishnoi
- 2022 Poster: Re-Analyze Gauss: Bounds for Private Matrix Approximation via Dyson Brownian Motion
  Oren Mangoubi · Nisheeth Vishnoi
- 2021 Poster: Coresets for Time Series Clustering
  Lingxiao Huang · K Sudhir · Nisheeth Vishnoi
- 2020 Poster: Coresets for Regressions with Panel Data
  Lingxiao Huang · K Sudhir · Nisheeth Vishnoi
- 2019 Poster: Assessing Social and Intersectional Biases in Contextualized Word Representations
  Yi Chern Tan · L. Elisa Celis
- 2019 Spotlight: Assessing Social and Intersectional Biases in Contextualized Word Representations
  Yi Chern Tan · L. Elisa Celis
- 2019 Poster: Online sampling from log-concave distributions
  Holden Lee · Oren Mangoubi · Nisheeth Vishnoi
- 2019 Poster: Coresets for Clustering with Fairness Constraints
  Lingxiao Huang · Shaofeng Jiang · Nisheeth Vishnoi