Neural Reduced Potential via Persistent Homology
Yoh-ichi Mototake
Abstract
Constructing reduced models of gradient systems from high-dimensional data is challenging, as image-based latent spaces often require many dimensions and lack robustness. We propose a framework that integrates persistent homology with Neural Reduced Potential modeling. Time-series images are transformed into persistence diagrams (PDs), vectorized, and encoded by an autoencoder, and a neural network $V_{\mathrm{NN}}$, inspired by Hamiltonian neural networks, learns the reduced potential. Applied to magnetic domain dynamics modeled by the time-dependent Ginzburg--Landau equation, our method reproduces gradient-flow behavior, accurately reconstructs and predicts PD evolution, and yields smooth, low-dimensional latent dynamics with respect to anisotropy. These results demonstrate the advantage of topological descriptors for interpretable and efficient data-driven modeling of physical systems.
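The core modeling assumption above is that the encoded latent state $z$ evolves as a gradient system, $\dot{z} = -\nabla V_{\mathrm{NN}}(z)$. The following is a minimal sketch of such latent gradient-flow dynamics; the quadratic `V` is a hypothetical stand-in for the learned potential $V_{\mathrm{NN}}$, and the integrator is plain forward Euler, not the authors' implementation.

```python
import numpy as np

# Hypothetical stand-in for the learned reduced potential V_NN.
# In the paper, V_NN is a neural network over autoencoder latents;
# here a fixed quadratic bowl illustrates the gradient-system structure.
def V(z):
    return 0.5 * np.sum(z ** 2)

def grad_V(z):
    # Analytic gradient of the quadratic potential above.
    return z

def gradient_flow(z0, dt=0.05, steps=200):
    """Forward-Euler integration of the reduced dynamics dz/dt = -grad V(z)."""
    z = np.asarray(z0, dtype=float)
    traj = [z.copy()]
    for _ in range(steps):
        z = z - dt * grad_V(z)
        traj.append(z.copy())
    return np.array(traj)

traj = gradient_flow([1.0, -2.0])
energies = [V(z) for z in traj]
```

Along any trajectory of a gradient system, the potential is non-increasing, which is the qualitative behavior the learned $V_{\mathrm{NN}}$ is expected to reproduce for the PD-derived latents.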