Selective Preference Aggregation
Shreyas Kadekodi · Hayden McTavish · Berk Ustun
Abstract
Many applications in machine learning and decision making rely on procedures to aggregate human preferences. In such tasks, individuals express ordinal preferences over a set of items by voting, rating, or comparing them. We then aggregate these data into a ranking that reveals their collective preferences. Standard methods for preference aggregation are designed to return rankings that arbitrate conflicting preferences between individuals. In this work, we introduce a paradigm for \emph{selective aggregation} where we abstain from comparison rather than arbitrate dissent. We summarize collective preferences as a \emph{selective ranking} -- i.e., a partial order that reflects all collective preferences where at least $100\cdot(1 - \dissent{})\%$ of individuals agree. We develop algorithms to build selective rankings that achieve all possible trade-offs between comparability and disagreement, and derive formal guarantees on their recovery and robustness. We conduct an extensive set of experiments on real-world datasets to benchmark our approach and demonstrate its functionality. Selective rankings provide a simple lever for collective decision making: set $\dissent$ to expose disagreement, abstain rather than arbitrate, and constrain downstream algorithms to consensus.
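As a rough illustration of the thresholding idea (not the paper's algorithms), the sketch below keeps a pairwise comparison only when at least $100\cdot(1 - \dissent{})\%$ of individuals agree on it, and abstains otherwise. The function name, the voter-by-item score matrix input, and the simple per-pair rule are assumptions for illustration; the paper's algorithms additionally trace out all comparability/disagreement trade-offs and come with recovery and robustness guarantees.

```python
import numpy as np
from itertools import combinations

def selective_pairs(scores: np.ndarray, dissent: float) -> set[tuple[int, int]]:
    """Return ordered pairs (i, j) meaning 'item i is collectively preferred
    to item j', keeping only comparisons supported by at least
    100*(1 - dissent)% of voters; all other pairs remain incomparable.

    scores: (n_voters, n_items) array of ordinal scores, where a higher score
            means the voter prefers that item (assumed input format).
    """
    n_voters, n_items = scores.shape
    edges = set()
    for i, j in combinations(range(n_items), 2):
        frac_i_over_j = np.mean(scores[:, i] > scores[:, j])
        frac_j_over_i = np.mean(scores[:, j] > scores[:, i])
        if frac_i_over_j >= 1.0 - dissent:
            edges.add((i, j))   # i > j with at most `dissent` disagreement
        elif frac_j_over_i >= 1.0 - dissent:
            edges.add((j, i))
        # otherwise abstain: the pair stays incomparable
    return edges

# Example: 5 voters score 3 items; with dissent = 0.2, only comparisons
# backed by at least 80% of voters are kept.
votes = np.array([[3, 2, 1],
                  [3, 1, 2],
                  [3, 2, 1],
                  [2, 3, 1],
                  [3, 2, 1]])
print(selective_pairs(votes, dissent=0.2))  # {(0, 1), (0, 2), (1, 2)}
```

Setting `dissent = 0` keeps only unanimous comparisons, while `dissent = 0.5` recovers simple pairwise majority; the retained pairs form the partial order that the paper calls a selective ranking.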