Building expertise through task-specific representational alignment in biological and artificial neural networks
Abstract
Humans can generate both rapid and accurate responses across diverse tasks by building perceptuo-motor expertise through practice. Expert responses are characterized by robustness to task-irrelevant distractors and state-space nuisances. In this paper, we investigate the representational transformations that guide the process of skill acquisition in both humans and artificial agents. Specifically, we test the hypothesis that task-specific, efficient representational coding emerges in the higher layers of the visuo-motor hierarchy in biological and artificial networks. To this end, we built a custom shooter game designed to introduce maximal variance in the perceptual state space, so that developing expertise requires building robustness to such state-space distortions. Deep reinforcement learning agents playing the game developed representational alignment with task-relevant features in their higher layers late in training, while the lower layers remained agnostic to the task. We further aim to investigate a parallel representational alignment in humans through longitudinal neural recordings, precisely probing the evolution of the representational bottlenecks that underlie the formation of expertise.
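As an illustrative sketch of one common way layer-wise representational alignment of this kind can be quantified (not necessarily the metric used in this work), the Python snippet below computes linear centered kernel alignment (CKA) between a layer's activations and a matrix of task-relevant features; the array names and shapes are assumptions for the example.

import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activations X (n_samples, d1) and task features Y (n_samples, d2).

    Returns a similarity score in [0, 1]; higher values indicate stronger
    alignment between the layer's representation and the task-relevant features.
    """
    # Center each column (feature) before computing cross- and self-covariances.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return hsic / (norm_x * norm_y)

# Hypothetical usage: compare a higher layer's activations against hand-coded
# task-relevant features (e.g., target position, angle to target) over a batch
# of game frames, and track this score across training checkpoints.
layer_acts = np.random.randn(500, 256)      # placeholder activations
task_feats = np.random.randn(500, 4)        # placeholder task features
print(linear_cka(layer_acts, task_feats))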