
Workshop: Symmetry and Geometry in Neural Representations

Pitfalls in Measuring Neural Transferability

Suryaka Suresh · Vinayak Abrol · Anshul Thakur


Transferability scores quantify the suitability of a pre-trained model for a downstream task and help in selecting an optimal pre-trained model for transfer learning. This work aims to draw attention to significant shortcomings of state-of-the-art transferability scores. To this end, we propose neural collapse-based transferability scores that analyse the intra-class variability collapse and inter-class discriminative ability of the penultimate embedding space of a pre-trained model. Experiments across the image and audio domains demonstrate that such a simple variability analysis of the feature space is sufficient to satisfy the current definition of transferability scores, and that a new, more general definition of transferability is required. Building on these results, we highlight new research directions and postulate the characteristics of an ideal transferability measure, which should help streamline future studies targeting this problem.
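The abstract's core idea (a score built from the intra-class variability collapse and inter-class discriminability of penultimate-layer embeddings) can be illustrated with a minimal sketch. This is a hypothetical scatter-ratio score for illustration only, not the authors' exact method; the function name and formulation are assumptions:

```python
import numpy as np

def nc_transferability_score(features, labels):
    """Hypothetical neural-collapse-inspired transferability score.

    Compares inter-class separation of class means against intra-class
    variability of penultimate-layer embeddings; higher suggests a
    feature space better suited to the downstream task.
    """
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    global_mean = features.mean(axis=0)
    within, between = 0.0, 0.0
    for c in np.unique(labels):
        cls = features[labels == c]
        mu_c = cls.mean(axis=0)
        # intra-class variability: spread of samples around their class mean
        within += ((cls - mu_c) ** 2).sum()
        # inter-class discriminability: spread of class means around global mean
        between += len(cls) * ((mu_c - global_mean) ** 2).sum()
    return between / (within + 1e-12)

# Toy check: well-separated tight clusters should score higher
# than overlapping diffuse ones.
rng = np.random.default_rng(0)
tight = np.vstack([rng.normal(0, 0.1, (50, 8)), rng.normal(5, 0.1, (50, 8))])
loose = np.vstack([rng.normal(0, 2.0, (50, 8)), rng.normal(0.5, 2.0, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
assert nc_transferability_score(tight, y) > nc_transferability_score(loose, y)
```

In practice, `features` would be the penultimate-layer activations of a candidate pre-trained model on downstream data; the ratio form here is one simple way to combine the two quantities the abstract names.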
