
Workshop on Human and Machine Decisions

In silico manipulation of human cortical computation underlying goal-directed learning

Jaehoon Shin · Jee Hang Lee · Sang Wan Lee


This paper explores the possibility that an RL algorithm can control human goal-directed learning at both the behavioral and neural levels. The proposed framework is based on an asymmetric two-player game: while a computational model of human RL (a cognitive model) performs a goal-conditioned two-stage Markov decision task, an RL algorithm (a task controller) learns a behavioral policy that drives a key variable of the cognitive model (state prediction error) toward an arbitrarily chosen target by manipulating the task parameters (state-action-state transition uncertainty and goal conditions) on a trial-by-trial basis. We fitted cognitive models individually to 82 human subjects' data and then used them to train the task controller under two scenarios, minimizing and maximizing state prediction error, which are known to improve and reduce the motivation for goal-directed learning, respectively. A model permutation analysis revealed a subject-independent task control policy, suggesting that a task controller pre-trained in silico on cognitive models could generalize to actual human subjects without further training. To directly test the efficacy of our framework, we ran fMRI experiments on another 21 human subjects. Behavioral analysis confirmed that the pre-trained task controller successfully manipulates human goal-directed learning. Notably, we found neural effects of the task control in the insular and lateral prefrontal cortex, cortical regions known to encode state prediction error signals during goal-directed learning. Our framework can be implemented with any RL algorithm, making it possible to guide various types of human-computer interaction.
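The trial-by-trial loop described above, a task controller choosing task parameters to steer the state prediction error (SPE) of a simulated cognitive model, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `cognitive_model_spe` stand-in, the bandit-style controller, and the candidate uncertainty values are all assumptions made for clarity.

```python
import random

def cognitive_model_spe(transition_uncertainty):
    """Toy stand-in for a fitted human RL model: the state prediction
    error it experiences on a trial grows with the task's
    state-action-state transition uncertainty (an assumed relationship)."""
    return transition_uncertainty + random.uniform(-0.05, 0.05)

def train_task_controller(n_trials=500, epsilon=0.1, lr=0.1, seed=0):
    """Epsilon-greedy bandit controller: on each trial it picks a
    transition-uncertainty setting, observes the cognitive model's SPE,
    and updates action values using reward = -SPE (the 'minimize SPE'
    scenario; flipping the sign gives the 'maximize SPE' scenario)."""
    random.seed(seed)
    actions = [0.1, 0.5, 0.9]         # candidate transition uncertainties
    q = {a: 0.0 for a in actions}     # estimated value (negative expected SPE)
    for _ in range(n_trials):
        if random.random() < epsilon:
            a = random.choice(actions)            # explore
        else:
            a = max(actions, key=lambda x: q[x])  # exploit
        spe = cognitive_model_spe(a)  # run one trial of the cognitive model
        q[a] += lr * (-spe - q[a])    # incremental update toward -SPE
    return max(actions, key=lambda x: q[x])

best = train_task_controller()
```

In the minimization scenario the controller settles on the lowest transition uncertainty, since that yields the smallest SPE; the paper's framework replaces this toy bandit with an arbitrary RL algorithm and the stand-in model with cognitive models fitted to individual subjects.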
