Poster

Knowledge Composition using Task Vectors with Learned Anisotropic Scaling

Frederic Z. Zhang · Paul Albert · Cristian Rodriguez-Opazo · Ehsan Abbasnejad · Anton van den Hengel

East Exhibit Hall A-C #3506
[ Project Page ]
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Pre-trained models produce strong generic representations that can be adapted by fine-tuning on specialized datasets. The learned weight difference relative to the pre-trained model, known as a task vector, characterizes the direction and stride of fine-tuning that enable the model to capture these specialized representations. The significance of task vectors is such that simple arithmetic operations on them can be used to combine diverse representations from different domains. This paper builds on these properties of task vectors and aims to answer (1) whether components of task vectors, particularly parameter blocks, exhibit similar characteristics, and (2) how such blocks can be used to enhance knowledge composition and transfer. To this end, we introduce aTLAS, which linearly combines parameter blocks with different learned coefficients, resulting in anisotropic scaling at the task vector level. Our intuition is that parameter blocks learn distinct representations at various semantic levels, and composition at the block level enables modular learning that leverages these learned representations more effectively, thereby reducing the dependency on large amounts of data. We demonstrate the effectiveness of the proposed method in task arithmetic, few-shot recognition and test-time adaptation, with supervised or unsupervised objectives. In particular, we show that (1) learned anisotropic scaling allows task vectors to be more disentangled, causing less interference in composition; (2) task vector composition excels with scarce or no labeled data and is less prone to domain shift, thus leading to better generalizability; (3) mixing the most informative parameter blocks across different task vectors prior to training can reduce the memory footprint and improve the flexibility of knowledge transfer. Moreover, we show the potential of aTLAS as a parameter-efficient fine-tuning method, particularly with less data, and demonstrate that it can easily be scaled up for higher performance.
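The abstract describes aTLAS as learning a separate scaling coefficient for each parameter block of each task vector, in contrast to standard task arithmetic, which scales each task vector by a single scalar. The sketch below illustrates that idea; it is a minimal interpretation, not the authors' implementation, and the names (compose, pretrained_state, coeffs) and the dict-of-blocks data layout are assumptions for illustration.

```python
import torch

def compose(pretrained_state, task_vectors, coeffs):
    """Compose task vectors block-wise with learned coefficients.

    pretrained_state: dict[str, Tensor]        -- pre-trained weights theta_0
    task_vectors:     list[dict[str, Tensor]]  -- tau_t = theta_t - theta_0
    coeffs:           Tensor (n_tasks, n_blocks) -- learned lambda_{t,b}

    Returns theta_0 + sum_t sum_b lambda_{t,b} * tau_t[block b].
    With a single scalar per task vector this reduces to ordinary
    (isotropic) task arithmetic; a per-block coefficient matrix gives
    the anisotropic scaling the abstract refers to.
    """
    composed = {}
    for b, name in enumerate(pretrained_state):
        delta = sum(coeffs[t, b] * task_vectors[t][name]
                    for t in range(len(task_vectors)))
        composed[name] = pretrained_state[name] + delta
    return composed

# Toy demo: two task vectors over two parameter blocks. Only the
# coefficient matrix is trainable, which is why composition can be
# learned from very little data.
theta0 = {"w1": torch.zeros(2, 2), "w2": torch.zeros(2)}
taus = [{"w1": torch.ones(2, 2), "w2": torch.ones(2)},
        {"w1": -torch.ones(2, 2), "w2": torch.ones(2)}]
coeffs = torch.zeros(len(taus), len(theta0), requires_grad=True)
theta = compose(theta0, taus, coeffs)
# coeffs could now be optimized against a downstream objective, e.g.
# torch.optim.Adam([coeffs], lr=1e-2), while theta0 and taus stay frozen.
```

Since only the coefficient matrix receives gradients, the number of trainable parameters is the number of task vectors times the number of parameter blocks, which is what makes this a parameter-efficient fine-tuning scheme in the sense the abstract claims.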