FAMO: Fast Adaptive Multitask Optimization
Bo Liu · Yihao Feng · Peter Stone · Qiang Liu

Thu Dec 14 03:00 PM -- 05:00 PM (PST) @ Great Hall & Hall B1+B2 #1221
Event URL: https://github.com/Cranial-XIX/FAMO.git
One of the grand enduring goals of AI is to create generalist agents that can learn multiple different tasks from diverse data via multitask learning (MTL). However, in practice, applying gradient descent (GD) on the average loss across all tasks may yield poor multitask performance due to severe under-optimization of certain tasks. Previous approaches that manipulate task gradients for a more balanced loss decrease require storing and computing all task gradients ($\mathcal{O}(k)$ space and time where $k$ is the number of tasks), limiting their use in large-scale scenarios. In this work, we introduce Fast Adaptive Multitask Optimization (FAMO), a dynamic weighting method that decreases task losses in a balanced way using $\mathcal{O}(1)$ space and time. We conduct an extensive set of experiments covering multi-task supervised and reinforcement learning problems. Our results indicate that FAMO achieves comparable or superior performance to state-of-the-art gradient manipulation techniques while offering significant improvements in space and computational efficiency. Code is available at \url{https://github.com/Cranial-XIX/FAMO}.
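At a high level, FAMO keeps one scalar weight per task and adapts those weights from the observed change in each task's log-loss after every model update, so no per-task gradients need to be stored. The toy sketch below illustrates that idea on two conflicting scalar tasks; the softmax parameterization of the weights and the specific logit-update rule here are simplified illustrations of balanced loss decrease, not the paper's exact update.

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

# Two toy tasks sharing one scalar parameter theta.
# The log-losses are quadratics with different optima, so the tasks conflict.
targets = [2.0, -1.0]

def log_losses(theta):
    return [(theta - t) ** 2 for t in targets]

def log_loss_grads(theta):
    return [2.0 * (theta - t) for t in targets]

theta = 0.0
logits = [0.0, 0.0]      # one scalar per task, independent of model size
alpha, beta = 0.05, 0.1  # step sizes for the model and the task logits

for _ in range(300):
    w = softmax(logits)
    # Single update on the weighted objective (no per-task gradient storage).
    step = sum(wi * gi for wi, gi in zip(w, log_loss_grads(theta)))
    old = log_losses(theta)
    theta -= alpha * step
    # Observed per-task improvement in log-loss after the step.
    delta = [o - n for o, n in zip(old, log_losses(theta))]
    mean_d = sum(delta) / len(delta)
    # Simplified balancing heuristic (assumption, not FAMO's exact rule):
    # upweight tasks that improved less than average.
    logits = [x + beta * (mean_d - d) for x, d in zip(logits, delta)]
```

Plain gradient descent on the unweighted sum of these log-losses drifts toward one task's optimum; the adaptive weights instead pull the shared parameter toward a point where both tasks keep improving at a comparable rate.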

Author Information

Bo Liu (The University of Texas at Austin)
Yihao Feng (Salesforce Research)
Peter Stone (The University of Texas at Austin, Sony AI)
Qiang Liu (Dartmouth College)