

Oral in Workshop: Information-Theoretic Principles in Cognitive Systems

Compression supports low-dimensional representations of behavior across neural circuits

Dale Zhou · Jason Kim · Adam Pines · Valerie Sydnor · David Roalf · John Detre · Ruben Gur · Raquel Gur · Theodore Satterthwaite · Danielle S Bassett


Abstract: Dimensionality reduction, a form of compression, can simplify representations of information to increase efficiency and reveal general patterns. Yet, this simplification also forfeits information, thereby reducing representational capacity. Hence, the brain may benefit from generating both compressed and uncompressed activity, and may do so in a heterogeneous manner across diverse neural circuits that represent low-level (sensory) or high-level (cognitive) stimuli. However, precisely how compression and representational capacity differ across the cortex remains unknown. Here we predict different levels of compression across regional circuits by using random walks on networks to model activity flow, and then we formulate rate-distortion functions, which are the basis of lossy compression. Using a large sample of youth ($n=1,040$), we test predictions in two ways: by measuring the dimensionality of spontaneous activity from sensorimotor to association cortex, and by assessing the representational capacity for 24 behaviors in neural circuits and 20 cognitive variables in recurrent neural networks. Our network theory of compression predicts the dimensionality and representational capacity of biological and artificial networks, thereby advancing understanding of how connectivity supports computational functions that involve compression.
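To make the abstract's two ingredients concrete, here is a minimal sketch of how one might (a) model activity flow as a random walk on a network and measure the dimensionality of the resulting activity, and (b) compute a rate-distortion function over the activity's eigenmodes. Everything specific here is an assumption, not the paper's method: a random symmetric toy network stands in for measured connectomes, damped noise-driven diffusion stands in for the activity-flow model, the participation ratio is used as one common dimensionality measure, and the rate-distortion function is the standard reverse-water-filling solution for a Gaussian source under mean-squared-error distortion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy connectivity: a symmetric weighted network over n "regions".
# (Assumption: a random graph in place of the paper's measured connectomes.)
n = 64
W = rng.random((n, n))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)

# Row-normalize to a random-walk transition matrix (each row sums to 1).
P = W / W.sum(axis=1, keepdims=True)

# Activity flow as damped random-walk diffusion driven by noise.
# Damping (alpha < 1) keeps the dynamics stationary.
alpha, T = 0.9, 5000
x = rng.standard_normal(n)
samples = np.empty((T, n))
for t in range(T):
    x = alpha * (P @ x) + 0.1 * rng.standard_normal(n)
    samples[t] = x

# Dimensionality: participation ratio of the activity covariance spectrum.
lam = np.clip(np.linalg.eigvalsh(np.cov(samples.T)), 0.0, None)
pr = lam.sum() ** 2 / (lam ** 2).sum()
print(f"participation-ratio dimensionality: {pr:.1f} of {n} regions")

# Gaussian rate-distortion via reverse water-filling over the eigenmodes:
# R(D) = sum over active modes of 0.5 * log2(lambda_i / theta), where the
# water level theta satisfies sum_i min(theta, lambda_i) = D.
def rate_distortion(eigs, D):
    lo, hi = 0.0, float(eigs.max())
    for _ in range(100):  # bisect for the water level theta
        theta = 0.5 * (lo + hi)
        if np.minimum(theta, eigs).sum() > D:
            hi = theta
        else:
            lo = theta
    theta = 0.5 * (lo + hi)
    active = eigs > theta
    return 0.5 * np.sum(np.log2(eigs[active] / theta))

for frac in (0.5, 0.1, 0.01):
    D = frac * lam.sum()
    print(f"D = {frac:.0%} of total variance -> R(D) = {rate_distortion(lam, D):.1f} bits")
```

The sketch traces the tradeoff the abstract describes: raising the water level theta (tolerating more distortion D) discards more low-variance eigenmodes and costs fewer bits, i.e., stronger compression at the price of representational capacity.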
