Unifying Gestalt Principles Through Inference-Time Prior Integration
Abstract
Gestalt principles such as closure, similarity, continuation, and figure--ground segregation have often been characterized as distinct rules guiding perceptual organization. We demonstrate that these diverse phenomena emerge from a single computational mechanism: inference-time prior integration. Our algorithm, Prior-Guided Drift Diffusion (PGDD), repurposes the same feedback pathways used during backpropagation to refine neural activations during inference, enabling networks to integrate learned statistical regularities with sensory input. Applied to pre-trained networks, PGDD reproduces illusory contours, perceptual grouping, and figure--ground segregation without additional training or architectural modifications. These effects depend on appropriate learned priors: for example, networks trained on object-centric datasets generate Kanizsa illusions, whereas networks trained only on faces or scenes do not. Our results suggest that Gestalt principles are not hardwired perceptual rules but emergent consequences of how neural networks dynamically combine learned statistical knowledge with incoming sensory evidence. This computational framework bridges artificial and biological vision by showing how inference-time optimization can account for fundamental aspects of human perceptual organization.
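
To make the mechanism concrete, the sketch below shows one plausible reading of inference-time prior integration: the backpropagation pathway is reused at test time to nudge intermediate activations toward a learned prior while a data-consistency term anchors them to the sensory input, with injected noise supplying the diffusion component. This is a minimal, assumption-laden illustration written in PyTorch, not the paper's implementation; every name in it (encoder, head, prior_energy, n_steps, step_size, noise_scale) is hypothetical, and PGDD's actual update rule may differ.

    import torch

    def pgdd_inference(encoder, head, prior_energy, x,
                       n_steps=20, step_size=0.1, noise_scale=0.01):
        """Refine intermediate activations with prior-guided drift plus
        diffusion noise, then decode from the refined activations.
        Hypothetical sketch: prior_energy is a scalar energy over
        activations standing in for the learned prior."""
        z = encoder(x).detach()          # sensory evidence: initial activations
        z0 = z.clone()                   # anchor keeping z consistent with input
        for _ in range(n_steps):
            z = z.detach().requires_grad_(True)
            # Total energy = learned prior + data-consistency term; the
            # quadratic anchor is one simple choice of likelihood term.
            energy = prior_energy(z) + 0.5 * ((z - z0) ** 2).sum()
            grad, = torch.autograd.grad(energy, z)
            # Drift (descend the energy via the backprop feedback pathway)
            # plus diffusion (Gaussian noise): Langevin-style dynamics.
            z = z - step_size * grad + noise_scale * torch.randn_like(z)
        return head(z.detach())          # read out from refined activations

Under this reading, prior_energy carries the learned statistical regularities, the quadratic anchor plays the role of incoming sensory evidence, and the noise term provides the stochastic "diffusion" in drift diffusion; because only activations are updated, no retraining or architectural modification is required, consistent with the claim above.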