

Poster in Workshop: UniReps: Unifying Representations in Neural Models

Functional Modularity in Mind and Machine

Devon Jarvis · Richard Klein · Benjamin Rosman · Andrew Saxe

[ Project Page ]
Presentation: UniReps: Unifying Representations in Neural Models
Fri 15 Dec 6:15 a.m. PST — 3:15 p.m. PST

Abstract:

Modularity is a well-established and foundational organisational principle of the brain. Neural modules are composed of neurons which are selective to particular sensory inputs or situations and tend to be organised in close physical proximity. Yet it is difficult to determine which neurons are coupled to implement a neural module, and consequently also difficult to establish exactly what a neural module is selective for. In both cases the difficulty stems from the difference between functional and architectural modularity. Architectural modularity arises from the explicit connections between neurons in a network: connected neurons form a module, and the physical module can be probed to determine what it is selective for. Functional modularity, by contrast, is detectable only in the behaviour of a subset of neurons in the network, and has no explicit pressure forcing its emergence beyond the learning algorithm interacting with the statistics of sensory experience. Thus, while we understand how broad regions of the brain are connected, more nuance is still required to obtain a better understanding of the degree of modularity. This problem is not limited to biological neural networks, but extends to artificial ones as well. ReLU networks, for example, can switch off regions of the hidden layer depending on the input being presented. However, what each hidden neuron is selective for, which hidden neurons are functionally coupled, and the meso-scale behaviour of the hidden layer are not well understood. In this work, we begin to characterise the emergence and behaviour of functional neural modules in both ReLU and biological neural networks. We achieve this by drawing an equivalence between Gated Deep Linear Networks (GDLNs) and the respective networks, mapping functional neural modules onto architectural modules of the GDLN. Through the lens of the GDLN we obtain a number of insights into how information is distributed in artificial and biological brains to support context-sensitive controlled semantic cognition.
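The correspondence the abstract draws between ReLU networks and gated linear computation can be made concrete in a few lines. The sketch below is illustrative only and is not the authors' code; the variable names (W, x, gates) are assumptions made for the example. It shows that a ReLU layer acts as a linear map multiplied by an input-dependent binary gate, so inputs sharing a gate pattern are processed by the same effective linear module.

import numpy as np

# Illustrative sketch: ReLU(W @ x) equals D(x) @ W @ x, where D(x) is a
# diagonal 0/1 gate matrix determined by the input. Inputs with the same
# gate pattern share one effective linear map, i.e. one functional module.
rng = np.random.default_rng(0)
W = rng.standard_normal((5, 3))   # hidden weights: 5 hidden units, 3 inputs
x = rng.standard_normal(3)        # a single input vector

relu_out = np.maximum(W @ x, 0.0)       # standard ReLU layer
gates = (W @ x > 0).astype(float)       # binary gate pattern for this input
gated_out = np.diag(gates) @ W @ x      # equivalent gated linear computation

assert np.allclose(relu_out, gated_out)
print("active hidden units for this input:", gates)

Under this view, grouping inputs by their gate pattern partitions the hidden layer's behaviour into input-dependent linear pathways, which is one way to read the mapping from functional modules onto architectural modules of a gated deep linear network.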
