Geometric Priors for Generalizable World Models via Vector Symbolic Architecture
William Chung · Calvin Yeung · Hansen Lillemark · Zhuowen Zou · Xiangjian Liu · Mohsen Imani
Abstract
A key challenge in artificial intelligence and neuroscience is understanding how neural systems learn representations that capture the underlying dynamics of the world. Most world models represent the transition function with unstructured neural networks, limiting interpretability, sample efficiency, and generalization to unseen states or action compositions. We address these issues with a generalizable world model grounded in \textit{Vector Symbolic Architecture} (VSA) principles as geometric priors. Our approach uses learnable Fourier Holographic Reduced Representation (FHRR) encoders to map states and actions into a high-dimensional complex vector space with learned group structure and models transitions with element-wise complex multiplication. We formalize the framework's group-theoretic foundation and show how training such structured representations to be approximately invariant enables accurate multi-step composition directly in latent space and strong generalization across a range of experiments. In a discrete grid-world environment, our model achieves 87.5\% zero-shot accuracy on unseen state-action pairs, obtains 53.6\% higher accuracy on 20-timestep horizon rollouts, and is $4\times$ more robust to noise relative to an MLP baseline. These results highlight how training representations to respect a latent group structure yields generalizable, data-efficient, and interpretable world models, providing a principled pathway toward structured models for real-world planning and reasoning.
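To make the core mechanism concrete, the sketch below illustrates FHRR binding via element-wise complex multiplication, the operation the abstract uses to model transitions. It is a minimal NumPy illustration only, not the paper's implementation: the vectors here are randomly sampled rather than produced by the learned encoders, and all function names (`random_fhrr`, `bind`, `unbind`, `similarity`) are hypothetical.

```python
import numpy as np

def random_fhrr(dim, rng):
    """Sample an FHRR vector: unit-modulus complex entries e^{i*theta}."""
    phases = rng.uniform(-np.pi, np.pi, size=dim)
    return np.exp(1j * phases)

def bind(a, b):
    """FHRR binding: element-wise complex multiplication (adds phases)."""
    return a * b

def unbind(a, b):
    """Inverse binding: multiply by the complex conjugate of b."""
    return a * np.conj(b)

def similarity(a, b):
    """Normalized similarity: real part of the complex inner product / dim."""
    return np.real(np.vdot(a, b)) / len(a)

rng = np.random.default_rng(0)
dim = 1024

# Hypothetical encodings of one state and two actions.
state = random_fhrr(dim, rng)
move_right = random_fhrr(dim, rng)
move_up = random_fhrr(dim, rng)

# A transition applies an action by binding it to the current state.
next_state = bind(state, move_right)

# Multi-step rollouts compose directly in latent space.
rolled_out = bind(bind(state, move_right), move_up)

# Element-wise multiplication is commutative (abelian group structure),
# so composing the same actions in either order gives the same latent state.
print(similarity(rolled_out, bind(bind(state, move_up), move_right)))  # 1.0 up to float error

# Unbinding the action recovers the previous state, since unit-modulus
# vectors have exact multiplicative inverses.
print(similarity(unbind(next_state, move_right), state))  # 1.0 up to float error
```

In this setting the group structure is what enables zero-shot composition: chaining action bindings in latent space corresponds to chaining transitions in the environment, without ever decoding intermediate states.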