Poster in Workshop: Algorithmic Fairness through the lens of Causality and Robustness

Balancing Robustness and Fairness via Partial Invariance

Moulik Choraria · Ibtihal Ferwana · Ankur Mani · Lav Varshney


Abstract:

The Invariant Risk Minimization (IRM) framework aims to learn invariant features for out-of-distribution generalization under the assumption that the underlying causal mechanisms remain constant. In other words, environments should sufficiently "overlap" for meaningful invariant features to be found. However, there are cases where the "overlap" assumption may not hold and, further, the assignment of the training samples to different environments is not known a priori. We believe that such cases arise naturally in networked settings and hierarchical data-generating models, wherein IRM performance degrades. To mitigate this failure case, we argue for a partial invariance framework that minimizes risk fairly across environments. This introduces flexibility into the IRM framework by partitioning the environments based on hierarchical differences, while enforcing invariance locally within the partitions. We motivate this framework in classification settings where distribution shifts vary across environments. Our results show the capability of partial invariant risk minimization to alleviate the trade-off between fairness and risk under different distribution shift settings.
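To make the idea concrete, below is a minimal sketch of what a partial-invariance objective could look like, assuming the IRMv1-style gradient penalty as the invariance mechanism and PyTorch as the framework. The partition structure, the per-partition risk averaging, and all names (partial_irm_loss, irm_penalty, penalty_weight) are illustrative assumptions, not the authors' exact formulation; the key point is only that the invariance penalty is applied within each partition rather than across all environments.

import torch
import torch.nn.functional as F


def irm_penalty(logits, labels):
    # IRMv1-style penalty: squared gradient of the per-environment risk
    # with respect to a fixed scalar classifier w = 1.0.
    scale = torch.ones(1, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()


def partial_irm_loss(model, envs, partitions, penalty_weight=1.0):
    # envs: list of (x, y) tensors, one per environment.
    # partitions: list of lists of environment indices; invariance is
    # enforced only locally, within each partition.
    total = 0.0
    for part in partitions:
        risks, penalties = [], []
        for e in part:
            x, y = envs[e]
            logits = model(x).squeeze(-1)
            risks.append(F.binary_cross_entropy_with_logits(logits, y))
            penalties.append(irm_penalty(logits, y))
        # Averaging per partition is a placeholder for the paper's fair
        # risk minimization; it keeps partitions of different sizes from
        # dominating the objective.
        total = total + torch.stack(risks).mean() \
                      + penalty_weight * torch.stack(penalties).mean()
    return total / len(partitions)


# Toy usage: four synthetic environments split into two partitions.
model = torch.nn.Linear(5, 1)
envs = [(torch.randn(32, 5), torch.randint(0, 2, (32,)).float())
        for _ in range(4)]
loss = partial_irm_loss(model, envs, partitions=[[0, 1], [2, 3]])
loss.backward()

Under full IRM, all four environments above would share one invariance constraint; the partial variant relaxes this to two local constraints, which is what gives the framework its flexibility when environments do not sufficiently overlap.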