Mamba-GINR: A Scalable Framework for Spatiotemporal Representation of fMRI
Thilina Balasooriya · Jubin Choi · Kevin Valencia · Xihaier Luo · Shinjae Yoo · David Park
Abstract
Generalizable implicit neural representations (GINRs) are a powerful paradigm for modeling large-scale functional MRI (fMRI) data, but their adoption has been blocked by a key modeling challenge: prior GINRs built on Transformers cannot scale to 4D fMRI because of the quadratic complexity of attention, which rules out promising applications such as data compression, temporal interpolation, and representation learning for large-scale scientific data. This work introduces Mamba-GINR, a framework that uses Mamba as its backbone to achieve linear-time scaling. Benchmarked on standard image datasets (CIFAR-10, CelebA), Mamba-GINR achieves superior reconstruction quality. Critically, we demonstrate Mamba's superior scalability in the GINR setting: it significantly outperforms baselines under an identical token budget and is the only GINR variant that successfully models sequences at a scale comparable to 4D fMRI data. Further analysis of the placement of learnable queries and of the model's internal time-delta ($\Delta$) parameter confirms its ability to produce robust, high-fidelity representations. By addressing this critical modeling bottleneck, our work has the potential to make GINRs a more viable tool for fMRI analysis. This advance in scalability could enable the continuous representation of entire fMRI sessions, potentially preserving rich temporal dynamics that are often lost to computational constraints. We present this framework as a foundational tool and invite the neuroscience community to collaborate on applying it to explore complex, long-timescale brain activity in large-scale datasets.
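To make the architecture concrete, the sketch below illustrates one plausible reading of the abstract: learnable query tokens are appended to the embedded data sequence, a linear-time sequence model encodes the whole sequence, and the hidden states at the learnable-token positions condition a coordinate-based implicit decoder. This is a minimal sketch, not the paper's implementation; the class and parameter names (`MambaGINRSketch`, `num_lp`, etc.) and the end-of-sequence placement of the learnable tokens are our own illustrative assumptions, and an `nn.GRU` stands in for a stack of Mamba blocks (in practice one would use `Mamba` from the `mamba_ssm` package).

```python
# Minimal sketch of the Mamba-GINR idea, under the assumptions stated above.
import torch
import torch.nn as nn

class MambaGINRSketch(nn.Module):
    def __init__(self, d_model=256, num_lp=64, in_dim=3, coord_dim=2):
        super().__init__()
        self.tokenize = nn.Linear(in_dim, d_model)            # patch/voxel embedding
        self.lp = nn.Parameter(torch.randn(num_lp, d_model))  # learnable query tokens
        # Stand-in linear-time sequence mixer; swap for Mamba blocks in practice.
        self.mixer = nn.GRU(d_model, d_model, batch_first=True)
        self.decoder = nn.Sequential(                         # coordinate -> value MLP
            nn.Linear(coord_dim + d_model, d_model), nn.GELU(),
            nn.Linear(d_model, in_dim),
        )
        self.num_lp = num_lp

    def forward(self, tokens, coords):
        # tokens: (B, N, in_dim) flattened data tokens; coords: (B, Q, coord_dim)
        B, N, _ = tokens.shape
        x = self.tokenize(tokens)                             # (B, N, d_model)
        lp = self.lp.unsqueeze(0).expand(B, -1, -1)           # (B, num_lp, d_model)
        # One simple placement choice: append learnable tokens at the end; the
        # paper studies where to place them, which this sketch does not replicate.
        seq = torch.cat([x, lp], dim=1)                       # (B, N + num_lp, d_model)
        h, _ = self.mixer(seq)
        latents = h[:, -self.num_lp:]                         # states at learnable tokens
        # Condition every query coordinate on a pooled latent summary.
        z = latents.mean(dim=1, keepdim=True).expand(-1, coords.shape[1], -1)
        return self.decoder(torch.cat([coords, z], dim=-1))   # (B, Q, in_dim)

model = MambaGINRSketch()
out = model(torch.randn(2, 1024, 3), torch.rand(2, 500, 2))  # -> (2, 500, 3)
```

Because the sequence mixer runs in time linear in sequence length, the same pattern scales to far longer token streams than attention-based GINR encoders, which is the property the abstract highlights for 4D fMRI.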