Neurosymbolic Rabbit Brain: Fractal Attractor Geometry for Neural Representations
Jhet Chan
Abstract
Representation learning in modern neural networks is typically grounded in the assumption that data lie on or near smooth manifolds. This supports continuity and gradient-based optimization but can make it difficult to express stable, discrete, or symbolic categories. We introduce Neurosymbolic Rabbit Brain, a framework that models representations using fractal attractor geometry, where categories are defined by basin membership under simple iterative maps. As a minimal instantiation, we implement a two-Julia escape-time comparator and evaluate it on the Two Spirals benchmark using CMA-ES. Across 10 runs, the baseline model achieves $54.3\% \pm 2.1\%$ test accuracy, exceeding the logistic baseline ($\sim 50\%$). An enhanced variant, which preserves the same eight parameters but adds a log--polar prewarp, smooth escape-time scoring, a curriculum on iteration depth, and multiple restarts, improves robustness and reaches $61.9\% \pm 2.1\%$. While not competitive with RBF--SVM, these results demonstrate that attractor-based basin geometry can function as a simple and transparent classifier on nonlinear structure, suggesting potential for hybrid systems that pair continuous manifold encoders with discrete fractal partitions.
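The sketch below illustrates the general idea of a two-Julia escape-time comparator under stated assumptions; the paper's exact parameterization of its eight parameters, its input scaling, and its scoring are not specified in the abstract, so the split into two complex Julia constants plus a scale-and-rotation prewarp per class is hypothetical, as is the decision rule of assigning a point to the class whose map it escapes from more slowly. In practice the eight-dimensional parameter vector would be fit with a black-box optimizer such as CMA-ES to maximize training accuracy.

```python
# Minimal sketch of a two-Julia escape-time comparator (assumptions noted above).
# Classifies a 2D point by comparing escape times under two candidate Julia maps
# z -> z^2 + c; the parameter layout and decision rule are illustrative only.
import numpy as np

def escape_time(z, c, max_iter=50, radius=2.0):
    """Iterations of z <- z^2 + c before |z| exceeds the escape radius."""
    for t in range(max_iter):
        if abs(z) > radius:
            return t
        z = z * z + c
    return max_iter

def comparator_predict(points, params, max_iter=50):
    """Classify each 2D point by which Julia map it escapes from more slowly.

    params: hypothetical 8-vector =
        [re(c1), im(c1), re(c2), im(c2), scale1, angle1, scale2, angle2]
    (two Julia constants plus one scale-and-rotation prewarp per class).
    """
    c1 = complex(params[0], params[1])
    c2 = complex(params[2], params[3])
    w1 = params[4] * np.exp(1j * params[5])   # prewarp for map 1
    w2 = params[6] * np.exp(1j * params[7])   # prewarp for map 2
    preds = []
    for x, y in points:
        z = complex(x, y)
        t1 = escape_time(w1 * z, c1, max_iter)
        t2 = escape_time(w2 * z, c2, max_iter)
        preds.append(0 if t1 >= t2 else 1)    # class = basin with slower escape
    return np.array(preds)
```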