Latent Nonlinear Wave Dynamics in Image Datasets and Autoencoder Reconstructions
Abstract
Many deep learning models implicitly structure data by evolving it through layers in a way that resembles the flow of a dynamical system. In this work, we investigate the structure of classical image datasets through the lens of dynamical systems and partial differential equations (PDEs). We demonstrate that several widely used image datasets can be effectively modeled by a simple wave-like PDE in a latent embedding space, revealing an underlying geometric and temporal coherence. Furthermore, we show that when these datasets are passed through a basic autoencoder, the reconstructed data preserves the wave-like structure up to a high-frequency cutoff. Remarkably, this indicates that neural networks inherently respect the dataset’s underlying dynamics, suggesting that PDE-inspired approaches can guide the design of more structured and efficient representations. These findings offer new insights into the interplay between data geometry and deep learning, and suggest that viewing datasets through the framework of PDE dynamics may yield fruitful directions for representation learning.