Information-theoretic Neural Decoding Reproduces Several Laws of Human Behavior
Abstract
Features of tasks and environments are often represented in the brain by neural firing rates. These representations must be decoded to enable downstream actions, and decoding takes time. We describe a toy model with a Poisson-process encoder and an ideal-observer Bayesian decoder, and show that decoding rate-coded signals reproduces classic patterns of response time and accuracy observed in humans, including the Hick-Hyman Law, the Power Law of Learning, speed-accuracy trade-offs, and lognormally distributed response times. The decoder is equipped with a codebook, a prior distribution over signals, and an entropy stopping threshold. We argue that historical concerns about the applicability of such information-theoretic tools to neural and behavioral data arise from confusion about applying discrete-time coding techniques to continuous-time signals.
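The following is a minimal sketch of the kind of model the abstract describes, not the authors' actual implementation: a Poisson-process encoder emits spikes for one of K candidate signals, and an ideal-observer decoder updates a posterior over a codebook of firing rates until the posterior entropy falls below a stopping threshold. All numerical values (the rate codebook, time step dt, threshold h_stop, trial counts) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_time(rates, true_idx, prior, h_stop=0.1, dt=1e-3, t_max=5.0):
    """One trial: a Poisson-process encoder emits spikes at rates[true_idx];
    an ideal-observer decoder updates its posterior over the codebook after
    each time bin and stops when posterior entropy < h_stop (nats)."""
    log_post = np.log(prior)
    n_steps = int(t_max / dt)
    for step in range(1, n_steps + 1):
        k = rng.poisson(rates[true_idx] * dt)  # spikes observed in this bin
        # Poisson log-likelihood of k spikes under each candidate rate;
        # the log(k!) term is constant across hypotheses and drops out.
        log_post += k * np.log(rates * dt) - rates * dt
        log_post -= log_post.max()             # renormalize for stability
        post = np.exp(log_post)
        post /= post.sum()
        entropy = -np.sum(post * np.log(post + 1e-12))
        if entropy < h_stop:
            return step * dt, int(np.argmax(post))
    return t_max, int(np.argmax(post))

# Hick-Hyman-style sweep: mean decoding time should grow roughly
# linearly with log2(K), the number of equiprobable signals.
for K in (2, 4, 8, 16):
    rates = np.linspace(5.0, 50.0, K)    # assumed codebook of firing rates (Hz)
    prior = np.full(K, 1.0 / K)          # uniform prior over signals
    times = [decode_time(rates, rng.integers(K), prior)[0] for _ in range(200)]
    print(f"K={K:2d}  mean decoding time = {np.mean(times):.3f} s")
```

Under these assumptions, sweeping the codebook size K traces out the Hick-Hyman-style trend (longer decoding times for larger signal sets), and making the stopping threshold stricter trades speed for accuracy.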