
Diffused Redundancy in Pre-trained Representations
Vedant Nanda · Till Speicher · John Dickerson · Krishna Gummadi · Soheil Feizi · Adrian Weller

Wed Dec 13 08:45 AM -- 10:45 AM (PST) @ Great Hall & Hall B1+B2 #634
Representations learned by pre-training a neural network on a large dataset are increasingly used to successfully perform a variety of downstream tasks. In this work, we take a closer look at how features are encoded in such pre-trained representations. We find that learned representations in a given layer exhibit a degree of diffuse redundancy, i.e., any randomly chosen subset of neurons in the layer that is larger than a threshold size shares a large degree of similarity with the full layer and performs similarly to the full layer on a variety of downstream tasks. For example, a linear probe trained on $20\%$ of randomly picked neurons from the penultimate layer of a ResNet50 pre-trained on ImageNet1k achieves an accuracy within $5\%$ of a linear probe trained on the full layer of neurons for downstream CIFAR10 classification. We conduct experiments on different neural architectures (including CNNs and Transformers) pre-trained on both ImageNet1k and ImageNet21k and evaluate a variety of downstream tasks taken from the VTAB benchmark. We find that the loss and dataset used during pre-training largely govern the degree of diffuse redundancy, and that the "critical mass" of neurons needed often depends on the downstream task, suggesting that there is a task-inherent redundancy-performance Pareto frontier. Our findings shed light on the nature of representations learned by pre-trained deep neural networks and suggest that entire layers might not be necessary to perform many downstream tasks. We investigate the potential for exploiting this redundancy to achieve efficient generalization for downstream tasks and also draw caution to certain possible unintended consequences. Our code is available at \url{https://github.com/nvedant07/diffused-redundancy}.
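The random-subset probing setup described above can be sketched as follows. This is a minimal illustration with synthetic features standing in for real pre-trained representations (the data, dimensions, and the least-squares probe are assumptions for illustration, not the paper's exact implementation, which uses actual ResNet50 features and a trained linear classifier):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for penultimate-layer features:
# 500 samples, 128 "neurons", labels derived from a linear ground truth.
n_samples, n_neurons, n_classes = 500, 128, 10
X = rng.normal(size=(n_samples, n_neurons))
W_true = rng.normal(size=(n_neurons, n_classes))
y = (X @ W_true).argmax(axis=1)

def probe_accuracy(feats, labels):
    """Fit a least-squares linear probe against one-hot labels and
    return its accuracy on the same data (illustrative only)."""
    Y = np.eye(n_classes)[labels]
    W, *_ = np.linalg.lstsq(feats, Y, rcond=None)
    return float(((feats @ W).argmax(axis=1) == labels).mean())

# Probe on the full layer.
full_acc = probe_accuracy(X, y)

# Probe on a random 20% subset of neurons, mirroring the paper's setup.
subset = rng.choice(n_neurons, size=int(0.2 * n_neurons), replace=False)
subset_acc = probe_accuracy(X[:, subset], y)

print(f"full layer: {full_acc:.3f}, 20% subset: {subset_acc:.3f}")
```

Diffuse redundancy corresponds to `subset_acc` remaining close to `full_acc` for most random choices of `subset` once the subset exceeds the task-dependent critical size.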

Author Information

Vedant Nanda (University of Maryland & MPI-SWS)
Till Speicher (MPI-SWS)
John Dickerson (Arthur AI & University of Maryland)
Krishna Gummadi (Max Planck Institute for Software Systems)
Soheil Feizi (University of Maryland)
Adrian Weller (University of Cambridge & Alan Turing Institute)

Adrian Weller MBE is a Director of Research in Machine Learning at the University of Cambridge, and at the Leverhulme Centre for the Future of Intelligence where he is Programme Director for Trust and Society. He is a Turing AI Fellow in Trustworthy Machine Learning, and heads Safe and Ethical AI at The Alan Turing Institute, the UK national institute for data science and AI. His interests span AI, its commercial applications and helping to ensure beneficial outcomes for society. He serves on several boards and previously held senior roles in finance.
