A Systematic Evaluation of Preference Aggregation in Federated RLHF for Pluralistic Alignment of LLMs
Abstract
This paper addresses the challenge of aligning Large Language Models (LLMs) with diverse human preferences within Federated Learning (FL) environments, where standard methods often fail to adequately represent the full range of viewpoints. We introduce a comprehensive evaluation framework that systematically assesses the trade-off between alignment quality and fairness under different aggregation strategies for human preferences. Specifically, we evaluate standard aggregation techniques (Min, Max, and Average) and introduce a novel adaptive scheme that dynamically adjusts preference weights based on a group's historical alignment performance. Our experiments on Q/A tasks using a PPO-based RLHF pipeline demonstrate that our adaptive approach consistently achieves superior fairness while maintaining competitive alignment scores. This work offers a robust methodology for evaluating LLM behavior across diverse populations and provides a practical solution for developing truly pluralistic, fairly aligned models.
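To make the aggregation strategies named above concrete, the following is a minimal illustrative sketch, not the paper's actual implementation: it assumes per-group reward scores are available as a vector, and the specific softmax-style re-weighting rule, function names (aggregate_rewards, update_adaptive_weights), and temperature parameter are hypothetical choices standing in for the adaptive scheme, which the abstract only describes as adjusting weights from historical alignment performance.

```python
import numpy as np

def aggregate_rewards(group_rewards, strategy="average", weights=None):
    """Aggregate per-group reward scores into a single training signal.

    group_rewards: shape (num_groups,), one reward-model score per preference group
    strategy: one of "min", "max", "average", "adaptive"
    weights: per-group weights, required only for the "adaptive" strategy
    """
    r = np.asarray(group_rewards, dtype=float)
    if strategy == "min":
        return float(r.min())       # worst-case (most egalitarian) aggregation
    if strategy == "max":
        return float(r.max())       # best-case aggregation
    if strategy == "average":
        return float(r.mean())      # utilitarian aggregation
    if strategy == "adaptive":
        w = np.asarray(weights, dtype=float)
        return float(np.dot(w / w.sum(), r))  # weighted mean with learned weights
    raise ValueError(f"unknown strategy: {strategy}")


def update_adaptive_weights(alignment_history, temperature=1.0):
    """Assumed re-weighting rule: up-weight groups whose historical alignment lags.

    alignment_history: shape (num_groups,), running mean alignment score per group
    Returns a normalized weight vector (softmax over negative relative performance).
    """
    h = np.asarray(alignment_history, dtype=float)
    logits = -(h - h.mean()) / temperature  # lower historical alignment -> larger weight
    w = np.exp(logits)
    return w / w.sum()
```

In this reading, the Min/Max/Average baselines are fixed scalar reductions over group rewards, while the adaptive variant periodically recomputes group weights from their running alignment scores so that persistently under-served groups receive more influence in subsequent PPO updates.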