Poster in Workshop: Transfer Learning for Natural Language Processing

Do Current Multi-Task Optimization Methods in Deep Learning Even Help?

Derrick Xin · Behrooz Ghorbani · Dami Choi · Ankush Garg · Orhan Firat · Justin Gilmer


Abstract:

Recent research has proposed a series of specialized optimization algorithms for deep multi-task models. It is often claimed that these multi-task optimization (MTO) methods yield solutions that are superior to the ones found by simply optimizing a weighted average of the task losses. In this paper, we perform large-scale experiments on a variety of language and vision tasks to examine the empirical validity of these claims. We show that, despite the added design and computational complexity of these algorithms, MTO methods do not yield any performance improvements beyond what is achievable via traditional optimization approaches. We highlight alternative strategies that consistently yield improvements to the performance profile and point out common training pitfalls that might cause suboptimal results. Finally, we outline challenges in reliably evaluating the performance of MTO algorithms and discuss potential solutions.
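To make the baseline concrete, below is a minimal sketch of the "weighted average of the task losses" (static scalarization) approach the abstract compares MTO methods against. This is an illustration, not code from the paper: the two-task model, the layer sizes, and the fixed weight w are all hypothetical choices for the example.

```python
import torch

# Hypothetical two-task setup: a shared encoder with two task heads.
# All shapes and names here are illustrative, not from the paper.
shared = torch.nn.Linear(16, 32)
head_a = torch.nn.Linear(32, 4)   # task A: 4-way classification (assumed)
head_b = torch.nn.Linear(32, 1)   # task B: scalar regression (assumed)

params = (list(shared.parameters())
          + list(head_a.parameters())
          + list(head_b.parameters()))
opt = torch.optim.SGD(params, lr=1e-2)

# Fixed scalarization weight; in practice such weights are swept or tuned.
w = 0.7

# Dummy batch standing in for real multi-task data.
x = torch.randn(8, 16)
y_a = torch.randint(0, 4, (8,))
y_b = torch.randn(8, 1)

z = shared(x)
loss_a = torch.nn.functional.cross_entropy(head_a(z), y_a)
loss_b = torch.nn.functional.mse_loss(head_b(z), y_b)

# The scalarization baseline: one weighted average of the task losses,
# one ordinary backward pass, one optimizer step.
loss = w * loss_a + (1 - w) * loss_b
opt.zero_grad()
loss.backward()
opt.step()
```

The contrast drawn in the paper is between this single weighted-sum objective, trained with a standard optimizer, and specialized MTO methods that manipulate per-task gradients; the abstract's claim is that, with the weighting appropriately tuned, the simple baseline is not outperformed.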
