HyperPALoRA: Parameter-Efficient Pareto Hypernetworks via Preference-Based Diverse Low-Rank Adaptations
Abstract
Multi-task learning (MTL) is largely addressed within the framework of multi-objective optimization (MOO), where Pareto Front Learning (PFL) methods such as Pareto Hypernetworks (PHNs) have enabled the modeling of continuous Pareto fronts with a single neural network. While PHNs excel at capturing complex relationships between task trade-offs and the solution space, they face scalability, memory, and convergence limitations. In this work, we propose a parameter-efficient PFL approach that leverages low-rank adaptation (LoRA) to improve the parameter efficiency of PHNs. Our method uses a hypernetwork to dynamically generate a single low-rank adaptation of a backbone (target) network, conditioned on the task preference. By generating one preference-aligned LoRA for the target network per preference, our approach avoids reliance on linear combinations of task-specific modules. To address the limited solution diversity of prior PFL methods, we introduce a contrastive loss that enforces similarity among neighboring preferences while promoting diversity across distant ones, yielding well-distributed Pareto-optimal solutions. Experiments on standard MTL benchmarks demonstrate that our approach achieves competitive or superior performance with significantly improved parameter efficiency compared to PHN-based PFL methods.
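The mechanism described above can be sketched in PyTorch. This is an illustrative toy, not the authors' implementation: the class names (`PreferenceLoRAHypernet`, `LoRALinear`), the MLP hypernetwork architecture, the rank, and the particular margin-based contrastive penalty are all assumptions chosen for clarity. The hypernetwork maps a preference vector on the task simplex to the factors of a single LoRA update applied on top of a frozen backbone layer, and the contrastive term pulls LoRAs of nearby preferences together while pushing apart those of distant ones.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreferenceLoRAHypernet(nn.Module):
    """Illustrative hypernetwork: task-preference vector -> one LoRA (A, B) pair."""
    def __init__(self, n_tasks, d_in, d_out, rank=2, hidden=64):
        super().__init__()
        self.rank, self.d_in, self.d_out = rank, d_in, d_out
        self.mlp = nn.Sequential(
            nn.Linear(n_tasks, hidden), nn.ReLU(),
            nn.Linear(hidden, rank * (d_in + d_out)),
        )

    def forward(self, pref):
        theta = self.mlp(pref)  # flat parameter vector for this preference
        A = theta[: self.rank * self.d_in].view(self.rank, self.d_in)
        B = theta[self.rank * self.d_in :].view(self.d_out, self.rank)
        return A, B

class LoRALinear(nn.Module):
    """Frozen backbone layer whose output is shifted by a generated LoRA."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():
            p.requires_grad_(False)  # backbone stays fixed; only the hypernet trains

    def forward(self, x, A, B):
        # y = Wx + b + B(Ax): the low-rank update B @ A adapts the frozen weights
        return self.base(x) + x @ A.t() @ B.t()

def preference_contrastive_loss(prefs, lora_vecs, margin=0.5):
    """Toy contrastive term: neighboring preferences (within `margin` in
    preference space) get similar LoRAs; distant ones are pushed apart."""
    n, loss, pairs = prefs.size(0), 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d_pref = torch.norm(prefs[i] - prefs[j])
            cos = F.cosine_similarity(lora_vecs[i], lora_vecs[j], dim=0)
            if d_pref < margin:
                loss = loss + (1.0 - cos)   # neighbors: reward similarity
            else:
                loss = loss + F.relu(cos)   # distant: penalize similarity
            pairs += 1
    return loss / max(pairs, 1)

if __name__ == "__main__":
    hypernet = PreferenceLoRAHypernet(n_tasks=2, d_in=8, d_out=4)
    layer = LoRALinear(8, 4)
    pref = torch.tensor([0.3, 0.7])          # a point on the 2-task simplex
    A, B = hypernet(pref)
    y = layer(torch.randn(5, 8), A, B)       # preference-conditioned forward pass
    print(y.shape)
```

At inference time, any preference on the simplex yields a corresponding Pareto solution via a single hypernetwork forward pass, with no per-preference retraining.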