Fair Supervised Learning Through Constraints on Smooth Nonconvex Unfairness-Measure Surrogates
Abstract
A new strategy for fair supervised machine learning (ML) is proposed. Its advantages over existing approaches are as follows. (a) We introduce a new smooth nonconvex surrogate, based on smoothing methods from the optimization literature, to approximate the Heaviside functions involved in discontinuous unfairness measures. The surrogate is a tight approximation, which ensures that the trained prediction models are fair, in contrast to other (e.g., convex) surrogates that can fail to yield fair prediction models. (b) Rather than rely on regularizers (which lead to optimization problems that are difficult to solve) and corresponding regularization parameters (which can be expensive to tune), we propose a strategy that employs hard constraints, so that specific tolerances for unfairness can be enforced without the complications associated with regularization. (c) Our strategy readily allows for constraints on multiple (potentially conflicting) unfairness measures simultaneously. Multiple measures can be handled with a regularization approach, but at the cost of even more difficult training problems and further tuning expense. By contrast, through hard constraints, our strategy leads to training problems that can be solved tractably with minimal tuning.
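The abstract's core idea can be illustrated with a minimal sketch. The code below is NOT the paper's exact surrogate or algorithm: it uses a generic sigmoid-type smoothing of the Heaviside step (a common choice in the smoothing literature), a smoothed demographic-parity gap as a stand-in unfairness measure, synthetic data, and an off-the-shelf SQP solver. All names and parameters (`tau`, `eps`, the data-generating process) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import NonlinearConstraint, minimize
from scipy.special import expit

def smooth_heaviside(z, tau=0.1):
    """Sigmoid-type smooth surrogate for the Heaviside step function.

    As the smoothing parameter tau -> 0, this approaches the exact
    (discontinuous) step; illustrative only, not the paper's surrogate.
    """
    return expit(z / tau)

# Synthetic data with a binary sensitive attribute (hypothetical setup).
rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
group = rng.integers(0, 2, size=n)  # sensitive attribute in {0, 1}
y = (X[:, 0] + 0.5 * group + 0.1 * rng.normal(size=n) > 0).astype(float)

def loss(w):
    # Standard logistic loss for a linear model (numerically stable form).
    margins = (2.0 * y - 1.0) * (X @ w)
    return np.mean(np.logaddexp(0.0, -margins))

def unfairness_gap(w):
    # Smoothed demographic-parity gap: difference in smoothed
    # positive-prediction rates between the two groups.
    p = smooth_heaviside(X @ w)
    return p[group == 1].mean() - p[group == 0].mean()

# A hard constraint |gap| <= eps replaces a tuned regularization term,
# so the unfairness tolerance is enforced directly.
eps = 0.05
con = NonlinearConstraint(unfairness_gap, -eps, eps)
res = minimize(loss, np.zeros(d), method="SLSQP", constraints=[con])
print(res.x, unfairness_gap(res.x))
```

The constrained formulation makes the tolerance `eps` interpretable (a direct bound on the unfairness measure), and additional `NonlinearConstraint` objects for other unfairness measures could be appended to the constraint list without retuning a regularization weight.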