Machine learning models have demonstrated promising performance in many areas. However, concerns that they can be biased against specific groups hinder their adoption in high-stakes applications. Thus, it is essential to ensure fairness in machine learning models. Most previous efforts require access to sensitive attributes for mitigating bias. Nevertheless, it is often infeasible to obtain large-scale data with sensitive attributes due to people's increasing awareness of privacy and legal compliance requirements. Therefore, an important research question is how to make fair predictions under privacy constraints. In this paper, we study a novel problem of fair classification in a semi-private setting, where most of the sensitive attributes are private and only a small number of clean ones are available. To this end, we propose a novel framework, FairSP, that first learns to correct the noisy sensitive attributes under a privacy guarantee by exploiting the limited clean ones. It then jointly models the corrected and clean data in an adversarial way for debiasing and prediction. Theoretical analysis shows that the proposed model can ensure fairness when most sensitive attributes are private. Extensive experimental results on real-world datasets demonstrate the effectiveness of the proposed model in making fair predictions under privacy while maintaining high accuracy.
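To make the adversarial stage concrete, below is a minimal sketch of adversarial debiasing in PyTorch: a classifier fits the task labels while an adversary tries to recover the sensitive attribute from the classifier's representation, and the classifier is trained to fool it. All module names, dimensions, and the trade-off weight lam are illustrative assumptions, not the paper's actual implementation; the attribute-correction stage of FairSP is omitted here.

# Minimal adversarial-debiasing sketch (assumed setup, not the FairSP code).
import torch
import torch.nn as nn

class Classifier(nn.Module):
    def __init__(self, d_in, d_hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.head = nn.Linear(d_hidden, 1)  # task-label logit
    def forward(self, x):
        z = self.encoder(x)
        return self.head(z), z

class Adversary(nn.Module):
    def __init__(self, d_hidden=32):
        super().__init__()
        self.net = nn.Linear(d_hidden, 1)   # sensitive-attribute logit
    def forward(self, z):
        return self.net(z)

clf, adv = Classifier(d_in=10), Adversary()
opt_clf = torch.optim.Adam(clf.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # fairness/accuracy trade-off weight (assumed)

x = torch.randn(64, 10)                    # toy feature batch
y = torch.randint(0, 2, (64, 1)).float()   # task labels
a = torch.randint(0, 2, (64, 1)).float()   # (corrected) sensitive attributes

for _ in range(100):
    # 1) Adversary step: learn to predict a from the frozen representation.
    _, z = clf(x)
    opt_adv.zero_grad()
    bce(adv(z.detach()), a).backward()
    opt_adv.step()
    # 2) Classifier step: fit the task while maximizing the adversary's loss,
    #    pushing the representation to carry no sensitive information.
    logit, z = clf(x)
    opt_clf.zero_grad()
    (bce(logit, y) - lam * bce(adv(z), a)).backward()
    opt_clf.step()

The alternating updates implement the min-max game common to adversarial fairness methods: at equilibrium, the adversary cannot predict the sensitive attribute better than chance from the learned representation.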
Author Information
Canyu Chen (Illinois Institute of Technology)
Yueqing Liang (Illinois Institute of Technology)
Xiongxiao Xu (Illinois Institute of Technology)
Shangyu Xie (Illinois Institute of Technology)
Yuan Hong (University of Connecticut)
Kai Shu (Illinois Institute of Technology)
More from the Same Authors
- 2022 : PromptDA: Label-guided Data Augmentation for Prompt-based Few Shot Learners »
  Canyu Chen · Kai Shu
- 2022 : When Fairness Meets Privacy: Fair Classification with Semi-Private Sensitive Attributes »
  Canyu Chen · Yueqing Liang · Xiongxiao Xu · Shangyu Xie · Yuan Hong · Kai Shu
- 2023 : Can LLM-Generated Misinformation Be Detected? »
  Canyu Chen · Kai Shu
- 2022 Poster: BOND: Benchmarking Unsupervised Outlier Node Detection on Static Attributed Graphs »
  Kay Liu · Yingtong Dou · Yue Zhao · Xueying Ding · Xiyang Hu · Ruitong Zhang · Kaize Ding · Canyu Chen · Hao Peng · Kai Shu · Lichao Sun · Jundong Li · George H Chen · Zhihao Jia · Philip S Yu