Decoding Projections From Frozen Random Weights in Autoencoders: What Information Do They Encode?
Nancy Thomas · Keshav Ramani · Annita Vapsi · Daniel Borrajo
Abstract
Despite the widespread use of gradient-based training, the representational properties of neural networks whose weights are never updated remain largely unexplored. To examine such networks, this paper trains an image autoencoder's decoder to reconstruct inputs from the embeddings of an encoder with fixed random weights. Our experiments span three datasets, six latent dimensions, and 28 initialization configurations. Through these experiments we demonstrate that random weights capture broad structural themes of the input, and we make a case for their adoption as baseline models.
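The core setup described in the abstract can be illustrated with a minimal sketch: a frozen random linear encoder projects inputs to a lower-dimensional embedding, and only a decoder is fit to reconstruct the inputs from those embeddings. This is an illustrative toy (linear decoder, synthetic data), not the paper's actual architecture; all names and dimensions here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of dimension 64 (e.g. flattened 8x8 patches).
n, d, k = 200, 64, 16            # samples, input dim, latent dim
X = rng.normal(size=(n, d))

# Frozen random encoder: a fixed Gaussian projection, never updated.
W_enc = rng.normal(size=(d, k)) / np.sqrt(d)
Z = X @ W_enc                    # embeddings from random weights

# Trainable decoder (here: linear, fit in closed form by least squares)
# learns to reconstruct X from the random embeddings Z.
W_dec, *_ = np.linalg.lstsq(Z, X, rcond=None)
X_hat = Z @ W_dec

mse = np.mean((X - X_hat) ** 2)
print(f"reconstruction MSE: {mse:.4f}")
```

Because the encoder is never trained, any structure the decoder recovers must already be preserved by the random projection, which is the property the paper's experiments probe at larger scale.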