FairPO: Fair Preference Optimization for Multi-Label Learning
Abstract
Multi-label classification (MLC) often suffers from performance disparities across labels. We propose FairPO, a framework that combines a preference-based loss with group-robust optimization to improve fairness by targeting underperforming labels. FairPO partitions the label set into a privileged subset, targeted for improvement, and a non-privileged subset, whose baseline performance is maintained. For privileged labels, a DPO-inspired preference loss addresses hard examples by correcting ranking errors between true labels and their confusing counterparts. A constrained objective maintains performance on non-privileged labels, while a Group Robust Preference Optimization (GRPO) formulation adaptively balances the two objectives to mitigate bias. We also demonstrate FairPO's versatility with reference-free variants based on Contrastive Preference Optimization (CPO) and Simple Preference Optimization (SimPO). Our code is available at https://anonymous.4open.science/r/FairPO.
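To make the two-group objective concrete, the following is a minimal PyTorch sketch of the structure the abstract describes: a DPO-style preference loss for privileged labels, a reference-constrained loss for non-privileged labels, and a group-robust reweighting of the two. All names and hyperparameters (`beta`, `eta`, `slack`) and the exact functional forms are illustrative assumptions, not the paper's precise formulation.

```python
import torch
import torch.nn.functional as F

def dpo_preference_loss(pos_logits, neg_logits,
                        ref_pos_logits, ref_neg_logits, beta=0.1):
    """Privileged-label objective (sketch): reward the margin of true labels
    over their confusing counterparts, measured relative to a frozen
    reference model, in DPO style."""
    policy_margin = pos_logits - neg_logits
    ref_margin = ref_pos_logits - ref_neg_logits
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()

def constrained_loss(logits, targets, ref_logits, slack=0.0):
    """Non-privileged objective (sketch): penalize the model only when its
    loss exceeds the reference model's loss by more than a slack margin,
    i.e., maintain baseline performance."""
    cur = F.binary_cross_entropy_with_logits(logits, targets)
    ref = F.binary_cross_entropy_with_logits(ref_logits, targets)
    return torch.relu(cur - ref - slack)

def group_robust_weights(group_losses, weights, eta=0.1):
    """GRPO-style adaptive balancing (sketch): a multiplicative-weights
    update that upweights whichever group currently incurs the larger
    loss, then renormalizes."""
    with torch.no_grad():
        weights = weights * torch.exp(eta * group_losses)
        weights = weights / weights.sum()
    return weights

# Illustrative combination: the total loss is a weighted sum of the two
# group objectives, with weights updated adversarially each step.
def fairpo_step(priv_loss, non_priv_loss, weights, eta=0.1):
    group_losses = torch.stack([priv_loss, non_priv_loss])
    weights = group_robust_weights(group_losses.detach(), weights, eta)
    return (weights * group_losses).sum(), weights
```

The reference-free variants mentioned above (CPO, SimPO) would drop the `ref_*` terms in favor of a fixed target margin; the group-robust outer loop is unchanged.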