

Threshold Learning for Optimal Decision Making

Nathan F Lepora

Area 5+6+7+8 #63

Keywords: [ (Other) Optimization ] [ (Cognitive/Neuroscience) Theoretical Neuroscience ] [ (Cognitive/Neuroscience) Reinforcement Learning ]


Decision making under uncertainty is commonly modelled as a process of competitive stochastic evidence accumulation to threshold (the drift-diffusion model). However, it is unknown how animals learn these decision thresholds. We examine threshold learning by constructing a reward function whose average over many trials converges to Wald's cost function, which defines decision optimality. These rewards are highly stochastic and hence challenging to optimize, which we address in two ways: first, with a simple two-factor reward-modulated learning rule derived from Williams' REINFORCE method for neural networks; and second, with Bayesian optimization of the reward function using a Gaussian process. Bayesian optimization converges in fewer trials than REINFORCE but is computationally slower and has greater variance. The REINFORCE method is also a better model of acquisition behaviour in animals, and a similar learning rule has been proposed for modelling basal ganglia function.
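The approach described above can be sketched in a minimal simulation: a drift-diffusion trial runs evidence to threshold, the per-trial reward is a stochastic estimate of (the negative of) a Wald-style cost, and a two-factor REINFORCE rule perturbs the threshold and reinforces perturbations that raised reward. All parameter values (drift, noise, error and time costs, learning rate) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ddm_trial(theta, drift=0.1, noise=1.0, dt=1.0, max_steps=10_000):
    """Accumulate noisy evidence until it crosses +theta or -theta.

    Returns (correct, decision_time). Parameters are illustrative.
    """
    x, t = 0.0, 0
    while abs(x) < theta and t < max_steps:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += 1
    return (x >= theta), t

def reward(correct, t, error_cost=10.0, time_cost=0.01):
    """Stochastic negative of a Wald-style cost: error penalty plus
    a cost per unit decision time."""
    return -(0.0 if correct else error_cost) - time_cost * t

# Two-factor REINFORCE rule: each trial uses a perturbed threshold;
# the update multiplies the reward prediction error (factor 1) by the
# exploration noise (factor 2), moving theta toward better thresholds.
theta, sigma, alpha, baseline = 2.0, 0.3, 0.05, 0.0
for trial in range(2000):
    eps = sigma * rng.standard_normal()
    th = max(theta + eps, 0.1)               # explored threshold, kept positive
    correct, t = ddm_trial(th)
    r = reward(correct, t)
    theta = max(theta + alpha * (r - baseline) * eps / sigma**2, 0.1)
    baseline += 0.1 * (r - baseline)         # running-average reward baseline
```

With an error penalty much larger than the per-step time cost, the learned threshold drifts upward from its initial value, trading longer decisions for fewer errors; the Gaussian-process alternative from the abstract would instead fit a surrogate to (threshold, reward) pairs and choose the next threshold by maximizing an acquisition function.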
