On Human-Aligned Risk Minimization
Liu Leqi · Adarsh Prasad · Pradeep Ravikumar

Thu Dec 12 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #86

The statistical decision theoretic foundations of modern machine learning have largely focused on the minimization of the expectation of some loss function for a given task. However, seminal results in behavioral economics have shown that human decision-making is based on different risk measures than the expectation of any given loss function. In this paper, we pose the following simple question: in contrast to minimizing expected loss, could we minimize a better human-aligned risk measure? While this might not seem natural at first glance, we analyze the properties of such a revised risk measure, and surprisingly show that it might also better align with additional desiderata like fairness that have attracted considerable recent attention. We focus in particular on a class of human-aligned risk measures inspired by cumulative prospect theory. We empirically study these risk measures, and demonstrate their improved performance on desiderata such as fairness, in contrast to the traditional workhorse of expected loss minimization.
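To make the idea of a human-aligned risk measure concrete, the following is a minimal sketch of an empirical CPT-weighted risk over a sample of losses. The specific probability weighting function (the Tversky-Kahneman inverse-S form), the parameter value `gamma=0.69`, and the identity utility are illustrative assumptions, not the paper's exact formulation: the key idea is that decision weights derived from a distorted cumulative distribution overweight extreme losses relative to the plain expectation.

```python
import numpy as np

def tk_weight(p, gamma=0.69):
    """Tversky-Kahneman probability weighting function (an illustrative choice).

    For gamma < 1 this inverse-S curve overweights small probabilities;
    gamma = 1 recovers the identity w(p) = p.
    """
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def empirical_cpt_risk(losses, gamma=0.69):
    """Empirical CPT-weighted risk of a sample of losses.

    Losses are sorted in decreasing order so the largest losses receive
    the decision weights w(i/n) - w((i-1)/n), which overweight the tail
    when gamma < 1. With gamma = 1 this reduces to the sample mean.
    """
    x = np.sort(np.asarray(losses, dtype=float))[::-1]  # largest loss first
    n = len(x)
    i = np.arange(1, n + 1)
    # Decision weights telescope to 1: w(1) - w(0) = 1.
    pi = tk_weight(i / n, gamma) - tk_weight((i - 1) / n, gamma)
    return float(np.dot(pi, x))
```

For example, on a sample with one large outlier loss, `empirical_cpt_risk` exceeds the sample mean, reflecting the human tendency to overweight rare bad outcomes; minimizing such a risk therefore penalizes tail losses more heavily than expected-loss minimization does.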

Author Information

Liu Leqi (Carnegie Mellon University)
Adarsh Prasad (Carnegie Mellon University)
Pradeep Ravikumar (Carnegie Mellon University)
