Model Transferability Informed by Embedding’s Topology
Abstract
In this work, we tackle the challenge of predicting the performance of a pre-trained classification model on a downstream task before fine-tuning. Our approach leverages the geometric information encoded in the feature embeddings of pre-trained networks, which we analyze using persistence diagrams generated from a Vietoris-Rips filtration. We find that during late-stage training, the separation between the highest-persistence features and the remaining low-persistence features mirrors the dynamics of neural collapse. During early training, however, our topological measures behave quite differently, as the geometric structure of the embeddings is still stabilizing. We propose a transferability score based on the ratio of these topological features, evaluate its performance in ranking models for fine-tuning, and show that it achieves competitive results against established methods.
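As a rough illustration of the pipeline described above, the Python sketch below computes persistence diagrams from a Vietoris-Rips filtration over a model's feature embeddings (using the ripser library) and forms a score from the gap between the most persistent feature and the remaining low-persistence ones. The specific ratio used here is an illustrative assumption, not the paper's exact formula.

```python
# Minimal sketch, assuming the ripser library; the ratio below is an
# illustrative stand-in for the paper's transferability score.
import numpy as np
from ripser import ripser

def transferability_score(embeddings: np.ndarray) -> float:
    """Score a model from the topology of its embeddings: the gap between
    the single most persistent H1 feature and the remaining features."""
    # Persistence diagrams from a Vietoris-Rips filtration on the point cloud.
    dgms = ripser(embeddings, maxdim=1)["dgms"]
    h1 = dgms[1]                        # (birth, death) pairs for 1-cycles
    if len(h1) < 2:
        return 0.0                      # too few features to form a ratio
    lifetimes = np.sort(h1[:, 1] - h1[:, 0])[::-1]   # persistences, descending
    # Hypothetical ratio: top persistence over the mean of the rest.
    return float(lifetimes[0] / (lifetimes[1:].mean() + 1e-12))
```

In use, one would embed the same downstream data with each candidate pre-trained model and rank the models by this score, fine-tuning only the top-ranked ones.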