Poster in Workshop: Bridging the Gap: from Machine Learning Research to Clinical Practice

Longitudinal Fairness with Censorship

Wenbin Zhang · Jeremy Weiss


Abstract:

Recent work on fairness in artificial intelligence tackles discrimination by constraining optimization programs to achieve parity in some fairness statistic. Most of this work assumes certainty in the class label, which is impractical in many clinical settings such as risk stratification, medication-assisted treatment, and precision medicine. Instead, we consider fairness in longitudinal censored decision-making environments, where the time to an event of interest may be unknown for a subset of the study group, resulting in censorship of the class label and rendering existing fairness studies inapplicable. To this end, we extend and devise applicable fairness statistics as well as a new debiasing algorithm, providing necessary complements for these important, socially sensitive tasks. Experiments on real-world censored and discriminated datasets illustrate and confirm the utility of our approach.
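To make the censorship problem concrete: standard parity statistics (e.g. equal accuracy across groups) are undefined when some subjects' event labels are censored. One common way to score predictions under right-censoring is Harrell's concordance index, which only counts pairs whose ordering the censoring pattern lets us verify. The sketch below compares this index across two protected groups as a censorship-aware analogue of accuracy parity. This is an illustration under stated assumptions, not the paper's actual statistic or algorithm; the function names and the choice of C-index gap are ours.

```python
# Hypothetical sketch: a censorship-aware parity statistic, NOT the
# paper's formulation. We compare Harrell's concordance index (which
# tolerates right-censoring) across two protected groups.

def concordance_index(times, events, scores):
    """Harrell's C-index under right-censoring.

    times:  observed time (event time, or censoring time if censored)
    events: 1 if the event was observed, 0 if the subject was censored
    scores: predicted risk (higher score = earlier event expected)
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair (i, j) is comparable only when subject i's event is
            # observed and occurs before subject j's (possibly censored)
            # time -- censoring makes other pairs unorderable.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if scores[i] > scores[j]:
                    concordant += 1.0
                elif scores[i] == scores[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else float("nan")

def concordance_parity_gap(times, events, scores, group):
    """Absolute C-index gap between group 0 and group 1: a hypothetical
    censorship-aware analogue of an accuracy-parity fairness statistic."""
    def subset(g):
        idx = [k for k, v in enumerate(group) if v == g]
        return ([times[k] for k in idx],
                [events[k] for k in idx],
                [scores[k] for k in idx])
    return abs(concordance_index(*subset(0)) - concordance_index(*subset(1)))
```

A debiasing approach in this spirit would then constrain or penalize `concordance_parity_gap` during training, by analogy with parity-constrained optimization in the fully labeled setting.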