

Poster

RTify: Aligning Deep Neural Networks with Human Behavioral Decisions

Yu-Ang Cheng · Ivan F Rodriguez Rodriguez · Sixuan Chen · Takeo Watanabe · Thomas Serre

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Current neural network models of visual recognition largely focus on replicating human accuracy and behavioral choices, often neglecting visual perception's dynamic, temporal nature. Here, we introduce a novel computational framework to model human behavioral decisions by learning to align the temporal dynamics of a recurrent neural network (RNN) to human reaction times (RTs). We first demonstrate that RNNs can be optimized so that the number of time steps required to solve a task matches RTs recorded in psychophysics experiments. Next, we develop an ideal-observer RNN model explicitly trained to balance classification accuracy with the computational time required for solving the task. The success of the resulting model in accounting for human RT data provides suggestive evidence that human observers optimally balance speed and accuracy. Additionally, we enhance the classical Wong-Wang RNN model of decision-making to support the learning of multi-class classification problems, which we integrate with a convolutional neural network model of visual perception. We validate our results using both classic psychophysics stimuli and natural object datasets. Overall, we present a novel framework that effectively helps align current vision models with the full spectrum of human behavioral data, bringing us closer to an integrated model of human vision.
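The abstract describes optimizing an RNN so that the number of time steps it needs to reach a decision matches human reaction times, while also balancing accuracy against computational time. The sketch below is a hedged, minimal illustration of that general idea, not the authors' implementation: it uses an assumed soft-halting evidence accumulator (`EvidenceRNN`) whose differentiable expected stopping time is penalized for deviating from recorded human RTs. All class, function, and parameter names are illustrative assumptions.

```python
# Hedged sketch (not the authors' code): a recurrent evidence accumulator whose
# expected number of steps to cross a decision threshold is encouraged to match
# human reaction times, while still being trained for classification accuracy.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidenceRNN(nn.Module):
    def __init__(self, in_dim, n_classes, hidden=64, max_steps=20, threshold=1.0):
        super().__init__()
        self.cell = nn.GRUCell(in_dim, hidden)
        self.readout = nn.Linear(hidden, n_classes)
        self.max_steps = max_steps
        self.threshold = threshold

    def forward(self, x):
        # x: (batch, in_dim) static stimulus presented at every time step
        h = x.new_zeros(x.size(0), self.cell.hidden_size)
        evidence = x.new_zeros(x.size(0), self.readout.out_features)
        halted = x.new_zeros(x.size(0))        # soft "already crossed threshold" mass
        expected_rt = x.new_zeros(x.size(0))   # differentiable expected stopping time
        for t in range(1, self.max_steps + 1):
            h = self.cell(x, h)
            evidence = evidence + self.readout(h)                 # accumulate class evidence
            margin = evidence.max(dim=1).values - self.threshold
            p_halt_now = torch.sigmoid(margin) * (1.0 - halted)   # soft halting at step t
            expected_rt = expected_rt + t * p_halt_now
            halted = halted + p_halt_now
        # any remaining probability mass halts at the final step
        expected_rt = expected_rt + self.max_steps * (1.0 - halted)
        return evidence, expected_rt

def rtify_loss(evidence, expected_rt, labels, human_rt, lam=0.1):
    # Trade off classification accuracy against aligning model time steps with human RTs.
    ce = F.cross_entropy(evidence, labels)
    rt_align = F.mse_loss(expected_rt, human_rt)
    return ce + lam * rt_align
```

The weighting term `lam` stands in for whatever speed-accuracy trade-off the actual framework learns or fixes; the inputs here would come from a frozen or jointly trained visual front end (e.g., a convolutional network), consistent with the abstract's description of pairing the recurrent decision stage with a CNN model of visual perception.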
