

Spotlight · Workshop: UniReps: Unifying Representations in Neural Models

Understanding Learning Dynamics of Neural Representations via Feature Visualization at Scale

Chandana Kuntala · Deepak Sharma · Carlos Ponce · Binxu Wang


Abstract:

How does feature learning happen during the training of a neural network? We developed an accelerated pipeline that synthesizes maximally activating images ("prototypes") for hidden units in parallel. This enabled feature visualization at scale, letting us track the emergence and development of visual features over the course of training. Using this technique, we studied the "developmental" process of features in a convolutional neural network trained from scratch with SimCLR, with or without color-jittering augmentation. After creating over one million prototypes with our method, tracking and comparing these visual signatures showed that color-jittering augmentation led to continually diversifying high-level features during training, whereas training without color jittering produced more diverse low-level features but less development of high-level features. These results illustrate how feature visualization can be used to understand training dynamics under different training objectives and data distributions.
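The core operation behind prototype synthesis is activation maximization: gradient ascent on an input to maximize a chosen unit's response. The paper's pipeline applies this to CNN hidden units in parallel with image priors; the sketch below is a deliberately minimal, hypothetical illustration using a single linear unit, where the optimal prototype is known in closed form (the unit's weight direction), so the loop's behavior can be checked.

```python
import numpy as np

# Toy sketch of activation maximization ("prototype" synthesis).
# Assumption: the "unit" is a single linear filter w, and we constrain
# the input to the unit sphere as a stand-in for the image regularizers
# real feature-visualization pipelines use.

rng = np.random.default_rng(0)

def synthesize_prototype(w, steps=200, lr=0.1):
    """Gradient-ascend an input x to maximize the activation w @ x,
    re-projecting x to unit norm after each step."""
    x = rng.standard_normal(w.shape)
    x /= np.linalg.norm(x)
    for _ in range(steps):
        grad = w                    # d(w @ x) / dx for a linear unit
        x = x + lr * grad           # ascend the activation
        x /= np.linalg.norm(x)      # keep the "image" bounded
    return x

w = rng.standard_normal(16)
proto = synthesize_prototype(w)
# For a linear unit, the converged prototype aligns with w / ||w||,
# i.e. cosine similarity with w approaches 1.
cosine = proto @ w / np.linalg.norm(w)
```

For deep-network units the gradient comes from backpropagation rather than a closed form, and the projection step is replaced by priors such as total-variation penalties or transformation robustness, but the ascend-and-regularize loop is the same.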
