NIPS 2009


Workshop

Bounded-rational analyses of human cognition: Bayesian models, approximate inference, and the brain

Noah Goodman · Edward Vul · Tom Griffiths · Josh Tenenbaum

Westin: Alpine BC

Bayesian, or "rational", accounts of human cognition have enjoyed much success in recent years: human behavior is well described by probabilistic inference in low-level perceptual and motor tasks as well as high-level cognitive tasks like category and concept learning, language, and theory of mind. However, these models are typically defined at the abstract "computational" level: they successfully describe the computational task solved by human cognition without committing to the algorithm that carries it out. Bayesian models usually assume unbounded cognitive resources are available for computation, yet traditional cognitive psychology has emphasized the severe limitations of human cognition. Thus, a key challenge for the Bayesian approach to cognition is to describe the algorithms used to carry out approximate probabilistic inference using the bounded computational resources of the human brain.

Inspired by the success of Monte Carlo methods in machine learning, several different groups have suggested that humans make inferences not by manipulating whole distributions, but by drawing a small number of samples from the appropriate posterior distribution. Monte Carlo algorithms are attractive as algorithmic models of cognition both because they have been used to do inference in a wide variety of structured probabilistic models, scaling to complex situations while mitigating the curse of dimensionality, and because they use resources efficiently and degrade gracefully when time does not permit many samples to be generated. Indeed, given parsimonious assumptions about the cost of obtaining a sample for a bounded agent, it is often best to make decisions using just one sample.
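The diminishing returns from additional samples can be illustrated with a minimal simulation (our sketch, not from the workshop materials): an agent chooses between two hypotheses by majority vote over a handful of posterior samples. The `posterior_sample` and `accuracy` helpers and the specific numbers are illustrative assumptions.

```python
import random

random.seed(0)

def posterior_sample(p):
    """Draw one hypothesis from a two-hypothesis posterior with P(A) = p."""
    return 'A' if random.random() < p else 'B'

def accuracy(p, n_samples, trials=50_000):
    """Guess the majority hypothesis among n_samples posterior samples;
    the true state of the world is drawn from the same posterior."""
    correct = 0
    for _ in range(trials):
        truth = 'A' if random.random() < p else 'B'
        votes = sum(posterior_sample(p) == 'A' for _ in range(n_samples))
        guess = 'A' if 2 * votes > n_samples else 'B'  # odd n avoids ties
        correct += (guess == truth)
    return correct / trials

one_sample = accuracy(0.7, 1)    # ~0.58: probability matching
many_samples = accuracy(0.7, 25) # ~0.70: approaches the optimal rate
```

With one sample the agent probability-matches (accuracy p^2 + (1-p)^2); many samples approach the optimal max(p, 1-p). The gap is modest, so when each sample is costly, stopping after one or a few can maximize expected utility per unit time.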

The claim that human cognition works by sampling identifies the broad class of Monte Carlo algorithms as candidate cognitive process models. Recent evidence from human behavior supports this coarse description of human inference: people seem to operate with a limited set of samples at a time. Narrowing the class of algorithms further yields additional predictions when the samples these algorithms draw are imperfect (not exact samples from the posterior distribution). That is, while most Monte Carlo algorithms yield unbiased estimators given unlimited resources, they all have characteristic biases and dynamics in practice -- and it is these biases and dynamics that result in process-level predictions about human cognition. For instance, it has been argued that the characteristic order effects exhibited by sequential Monte Carlo algorithms (particle filters) when run with few particles can explain the primacy and recency effects observed in human category learning, and the "garden path" phenomena of human sentence processing. Similarly, others have argued that the temporal correlation of samples obtained from Markov chain Monte Carlo (MCMC) sampling can account for bistable percepts in visual processing.
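The MCMC point can be made concrete with a toy Metropolis sampler (our illustration, not a model from the workshop): a bimodal target stands in, by assumption, for two interpretations of an ambiguous stimulus, and the chain's temporal correlation shows up as long dwells in one mode with occasional switches -- the sampling analogue of a bistable percept.

```python
import math
import random

random.seed(1)

def log_p(x):
    """Log-density of a bimodal 'posterior': an equal mixture of
    N(-1.5, 0.6^2) and N(+1.5, 0.6^2), one mode per interpretation."""
    def bump(mu):
        return math.exp(-0.5 * ((x - mu) / 0.6) ** 2)
    return math.log(bump(-1.5) + bump(1.5))

def metropolis(n_steps, step=0.6, x0=-1.5):
    """Random-walk Metropolis: propose x + N(0, step), accept by density ratio."""
    xs, x, lp = [], x0, log_p(x0)
    for _ in range(n_steps):
        prop = x + random.gauss(0.0, step)
        lp_prop = log_p(prop)
        if lp_prop >= lp or random.random() < math.exp(lp_prop - lp):
            x, lp = prop, lp_prop
        xs.append(x)
    return xs

xs = metropolis(20_000)
# Successive samples are strongly correlated: the chain stays near one mode
# (one "percept") for long stretches. Count sign changes = mode switches.
switches = sum((xs[i] > 0) != (xs[i + 1] > 0) for i in range(len(xs) - 1))
```

Independent samples from this mixture would change sign on roughly half of the steps; the Metropolis chain switches far more rarely, which is the kind of dynamic signature that licenses process-level predictions.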

Ultimately the processes of human cognition must be implemented in the brain. Relatively little work has examined how probabilistic inference may be carried out by neural mechanisms, and even less of this work has been based on Monte Carlo algorithms. Several different neural implementations of probabilistic inference, both approximate and exact, have been proposed, but the relationships among these implementations, and their connections to algorithmic and behavioral constraints, remain to be understood. Accordingly, this workshop will foster discussion of neural implementations in light of work on bounded-rational cognitive processes.

The goal of this workshop is to explore the connections between Bayesian models of cognition, human cognitive processes, modern inference algorithms, and neural information processing. We believe that this will be an exciting opportunity to make progress on a set of interlocking questions: Can we derive precise predictions about the dynamics of human cognition from state-of-the-art inference algorithms? Can machine learning be improved by understanding the efficiency tradeoffs made by human cognition? Can descriptions of neural behavior be constrained by theories of human inference processes?
