Distinguishing probabilistic from non-probabilistic neural representations
Abstract
The precise neural mechanisms of probabilistic computation remain unknown, despite growing evidence that humans track their uncertainty. Recent work has proposed that probabilistic representations arise naturally in task-optimized neural networks. However, these studies did not explicitly examine whether the neural code merely re-represents the inputs or performs the useful transformations characteristic of probabilistic computation. Using a novel probing-based approach, we show that feedforward networks trained without probabilistic objectives to perform cue combination and coordinate transformation encode Bayesian posteriors in their hidden-layer activities. Yet these networks fail to compress their inputs in a task-optimal way, instead performing heuristic computations akin to input re-representation. It therefore remains an open question under what conditions truly probabilistic representations emerge in neural networks.
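To make the probing idea concrete, here is a minimal sketch (not the paper's actual method or models). It assumes two Gaussian cues with fixed, known noise levels and a flat prior, so the Bayesian posterior mean is the precision-weighted average of the cues; a fixed random ReLU layer stands in for the hidden layer of a task-trained network, and a least-squares linear probe decodes the posterior mean from the hidden activities. All names (`post_mean`, `n_hidden`, etc.) are illustrative.

```python
# Hypothetical sketch of posterior probing, NOT the authors' implementation:
# generate Gaussian cue-combination trials, compute the analytic Bayesian
# posterior, pass the cues through a stand-in "hidden layer", and fit a
# linear probe that decodes the posterior mean from the hidden activities.
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_hidden = 5000, 100
sigma1, sigma2 = 1.0, 2.0                  # assumed fixed cue noise levels

s = rng.uniform(-10, 10, n_trials)         # latent stimulus, flat prior
x1 = s + rng.normal(0, sigma1, n_trials)   # noisy cue 1
x2 = s + rng.normal(0, sigma2, n_trials)   # noisy cue 2

# Analytic posterior for two Gaussian cues under a flat prior:
# the precision-weighted average of the cues.
w1, w2 = 1 / sigma1**2, 1 / sigma2**2
post_mean = (w1 * x1 + w2 * x2) / (w1 + w2)
post_var = 1.0 / (w1 + w2)                 # constant, since noise is fixed

# Stand-in hidden layer: fixed random projection + ReLU. In the paper's
# setting this would be the hidden layer of a task-trained network.
W = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)
H = np.maximum(0.0, np.column_stack([x1, x2]) @ W + b)

# Linear probe: least-squares readout of the posterior mean from H.
H1 = np.column_stack([H, np.ones(n_trials)])   # append a bias column
coef, *_ = np.linalg.lstsq(H1, post_mean, rcond=None)
pred = H1 @ coef

r2 = 1 - np.sum((post_mean - pred) ** 2) / np.sum((post_mean - post_mean.mean()) ** 2)
print(f"probe R^2 for posterior mean: {r2:.3f}")
```

Note that with fixed cue reliabilities the posterior mean is already a linear function of the raw cues, so a high probe score here is expected even from a network that merely re-represents its inputs. This is precisely the caveat the abstract raises: decodability of posterior quantities alone does not establish that a network performs probabilistic computation.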