Individual characteristics in human decision-making are often quantified by fitting a parametric cognitive model to subjects' behavior and then studying differences between them in the associated parameter space. However, these models often fit behavior more poorly than recurrent neural networks (RNNs), which are more flexible and make fewer assumptions about the underlying decision-making processes. Unfortunately, the parameter and latent activity spaces of RNNs are generally high-dimensional and uninterpretable, making it hard to use them to study individual differences. Here, we show how to benefit from the flexibility of RNNs while representing individual differences in a low-dimensional and interpretable space. To achieve this, we propose a novel end-to-end learning framework in which an encoder is trained to map the behavior of subjects into a low-dimensional latent space. These low-dimensional representations are used to generate the parameters of individual RNNs corresponding to the decision-making process of each subject. We introduce terms into the loss function that ensure that the latent dimensions are informative and disentangled, i.e., encouraged to have distinct effects on behavior. This allows them to align with separate facets of individual differences. We illustrate the performance of our framework on synthetic data as well as a dataset including the behavior of patients with psychiatric disorders.
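Below is a minimal sketch (not the authors' released code) of the pipeline the abstract describes: a sequence encoder compresses a subject's choice/reward history into a low-dimensional latent vector, and a hypernetwork maps that vector to the weights of a small per-subject RNN that is unrolled to predict the subject's next action. The GRU encoder, vanilla RNN cell, layer sizes, and all variable names are illustrative assumptions; the paper's informativeness and disentanglement terms would enter as additional penalties on the latent code alongside the prediction loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BehaviorToRNN(nn.Module):
    """Encoder -> low-dimensional latent z -> hypernetwork -> per-subject RNN."""

    def __init__(self, n_actions=2, latent_dim=3, hidden_dim=8):
        super().__init__()
        self.n_actions, self.latent_dim, self.hidden_dim = n_actions, latent_dim, hidden_dim
        # Encoder: summarizes a subject's action/reward sequence into a latent z.
        self.encoder = nn.GRU(input_size=n_actions + 1, hidden_size=32, batch_first=True)
        self.to_latent = nn.Linear(32, latent_dim)
        # Hypernetwork: generates the weights of a small vanilla RNN cell plus readout from z.
        n_cell = (n_actions + 1 + hidden_dim) * hidden_dim + hidden_dim
        n_readout = hidden_dim * n_actions + n_actions
        self.hyper = nn.Linear(latent_dim, n_cell + n_readout)

    def forward(self, behavior):
        # behavior: (batch, trials, n_actions + 1) -- one-hot previous action plus reward.
        _, h = self.encoder(behavior)
        z = self.to_latent(h[-1])              # (batch, latent_dim) individual-difference code
        theta = self.hyper(z)                  # flat parameter vector of each subject's RNN
        B, T, _ = behavior.shape
        d, a = self.hidden_dim, self.n_actions
        i = (a + 1 + d) * d
        W = theta[:, :i].view(B, a + 1 + d, d)           # recurrent/input weights
        b = theta[:, i:i + d]                             # cell bias
        Wo = theta[:, i + d:i + d + d * a].view(B, d, a)  # readout weights
        bo = theta[:, i + d + d * a:]                     # readout bias
        # Unroll the generated RNN over the trial sequence to predict each next action.
        hidden = behavior.new_zeros(B, d)
        logits = []
        for t in range(T):
            x = torch.cat([behavior[:, t], hidden], dim=-1)
            hidden = torch.tanh(torch.einsum('bi,bij->bj', x, W) + b)
            logits.append(torch.einsum('bi,bij->bj', hidden, Wo) + bo)
        return torch.stack(logits, dim=1), z


# Toy usage: fit action predictions with cross-entropy; the informativeness and
# disentanglement terms described in the abstract would be added to this loss.
model = BehaviorToRNN()
behavior = torch.randn(4, 20, 3)            # 4 subjects, 20 trials of (action, reward) features
targets = torch.randint(0, 2, (4, 20))      # observed choices
logits, z = model(behavior)
loss = F.cross_entropy(logits.reshape(-1, 2), targets.reshape(-1))
```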
Author Information
Amir Dezfouli (Data61, CSIRO)
Hassan Ashtiani (McMaster University)
Omar Ghattas (University of Chicago)
Richard Nock (Data61, the Australian National University and the University of Sydney)
Peter Dayan (Max Planck Institute for Biological Cybernetics)
Cheng Soon Ong (Data61 and Australian National University)
Cheng Soon Ong is a principal research scientist at the Machine Learning Research Group, Data61, CSIRO, and is the director of the machine learning and artificial intelligence future science platform at CSIRO. He is also an adjunct associate professor at the Australian National University. He is interested in enabling scientific discovery by extending statistical machine learning methods.
More from the Same Authors
- 2021 Spotlight: Two steps to risk sensitivity (Christopher Gagne · Peter Dayan)
- 2021: Gaussian Process Bandits with Aggregated Feedback (Mengyan Zhang · Russell Tsuchida · Cheng Soon Ong)
- 2021: Catastrophe, Compounding & Consistency in Choice (Christopher Gagne · Peter Dayan)
- 2021: Factorized Fourier Neural Operators (Alasdair Tran · Alexander Mathews · Lexing Xie · Cheng Soon Ong)
- 2022 Poster: Benefits of Additive Noise in Composing Classes with Bounded Capacity (Alireza Fathollah Pour · Hassan Ashtiani)
- 2022: When are equilibrium networks scoring algorithms? (Russell Tsuchida · Cheng Soon Ong)
- 2022: A (dis-)information theory of revealed and unrevealed preferences (Nitay Alon · Lion Schulz · Peter Dayan · Jeffrey S Rosenschein)
- 2023 Poster: On the Role of Noise in the Sample Complexity of Learning Recurrent Neural Networks: Exponential Gaps for Long Sequences (Alireza Fathollah Pour · Hassan Ashtiani)
- 2023 Poster: Reinforcement Learning with Simple Sequence Priors (Tankred Saanum · Noemi Elteto · Peter Dayan · Marcel Binz · Eric Schulz)
- 2023 Poster: Squared Neural Families: A New Class of Tractable Density Models (Russell Tsuchida · Cheng Soon Ong · Dino Sejdinovic)
- 2023 Poster: The contextual lasso: Sparse linear models via deep neural networks (Ryan Thompson · Amir Dezfouli · Robert Kohn)
- 2023 Poster: Boosting with Tempered Exponential Measures (Richard Nock · Ehsan Amid · Manfred Warmuth)
- 2022 Spotlight: Benefits of Additive Noise in Composing Classes with Bounded Capacity (Alireza Fathollah Pour · Hassan Ashtiani)
- 2022 Spotlight: Lightning Talks 2A-1 (Caio Kalil Lauand · Ryan Strauss · Yasong Feng · Lingyu Gu · Alireza Fathollah Pour · Oren Mangoubi · Jianhao Ma · Binghui Li · Hassan Ashtiani · Yongqi Du · Salar Fattahi · Sean Meyn · Jikai Jin · Nisheeth Vishnoi · Zengfeng Huang · Junier B Oliva · Yuan Zhang · Han Zhong · Tianyu Wang · John Hopcroft · Di Xie · Shiliang Pu · Liwei Wang · Robert Qiu · Zhenyu Liao)
- 2022 Poster: Fair Wrapping for Black-box Predictions (Alexander Soen · Ibrahim Alabdulmohsin · Sanmi Koyejo · Yishay Mansour · Nyalleng Moorosi · Richard Nock · Ke Sun · Lexing Xie)
- 2021 Poster: Privately Learning Mixtures of Axis-Aligned Gaussians (Ishaq Aden-Ali · Hassan Ashtiani · Christopher Liaw)
- 2021 Poster: Two steps to risk sensitivity (Christopher Gagne · Peter Dayan)
- 2021 Poster: TacticZero: Learning to Prove Theorems from Scratch with Deep Reinforcement Learning (Minchao Wu · Michael Norrish · Christian Walder · Amir Dezfouli)
- 2020: Panel Discussions (Grace Lindsay · George Konidaris · Shakir Mohamed · Kimberly Stachenfeld · Peter Dayan · Yael Niv · Doina Precup · Catherine Hartley · Ishita Dasgupta)
- 2020 Poster: A Local Temporal Difference Code for Distributional Reinforcement Learning (Pablo Tano · Peter Dayan · Alexandre Pouget)
- 2020 Tutorial: (Track1) There and Back Again: A Tale of Slopes and Expectations (Marc Deisenroth · Cheng Soon Ong)
- 2019 Poster: A Primal-Dual link between GANs and Autoencoders (Hisham Husain · Richard Nock · Robert Williamson)
- 2018 Poster: Representation Learning of Compositional Data (Marta Avalos · Richard Nock · Cheng Soon Ong · Julien Rouar · Ke Sun)
- 2018 Poster: Integrated accounts of behavioral and neuroimaging data using flexible recurrent neural network models (Amir Dezfouli · Richard Morris · Fabio Ramos · Peter Dayan · Bernard Balleine)
- 2018 Oral: Integrated accounts of behavioral and neuroimaging data using flexible recurrent neural network models (Amir Dezfouli · Richard Morris · Fabio Ramos · Peter Dayan · Bernard Balleine)
- 2018 Poster: Nearly tight sample complexity bounds for learning mixtures of Gaussians via sample compression schemes (Hassan Ashtiani · Shai Ben-David · Nicholas Harvey · Christopher Liaw · Abbas Mehrabian · Yaniv Plan)
- 2018 Oral: Nearly tight sample complexity bounds for learning mixtures of Gaussians via sample compression schemes (Hassan Ashtiani · Shai Ben-David · Nicholas Harvey · Christopher Liaw · Abbas Mehrabian · Yaniv Plan)
- 2017 Poster: f-GANs in an Information Geometric Nutshell (Richard Nock · Zac Cranko · Aditya K Menon · Lizhen Qu · Robert Williamson)
- 2017 Spotlight: f-GANs in an Information Geometric Nutshell (Richard Nock · Zac Cranko · Aditya K Menon · Lizhen Qu · Robert Williamson)
- 2016 Poster: A scaled Bregman theorem with applications (Richard Nock · Aditya Menon · Cheng Soon Ong)
- 2016 Poster: On Regularizing Rademacher Observation Losses (Richard Nock)
- 2015 Workshop: Learning and privacy with incomplete data and weak supervision (Giorgio Patrini · Tony Jebara · Richard Nock · Dimitrios Kotzias · Felix Xinnan Yu)
- 2015 Poster: Scalable Inference for Gaussian Process Models with Black-Box Likelihoods (Amir Dezfouli · Edwin Bonilla)
- 2014 Poster: (Almost) No Label No Cry (Giorgio Patrini · Richard Nock · Tiberio Caetano · Paul Rivera)
- 2014 Spotlight: (Almost) No Label No Cry (Giorgio Patrini · Richard Nock · Tiberio Caetano · Paul Rivera)
- 2013 Workshop: Machine Learning Open Source Software: Towards Open Workflows (Antti Honkela · Cheng Soon Ong)
- 2011 Poster: Contextual Gaussian Process Bandit Optimization (Andreas Krause · Cheng Soon Ong)
- 2010 Workshop: New Directions in Multiple Kernel Learning (Marius Kloft · Ulrich Rueckert · Cheng Soon Ong · Alain Rakotomamonjy · Soeren Sonnenburg · Francis Bach)
- 2010 Demonstration: mldata.org - machine learning data and benchmark (Cheng Soon Ong)
- 2008 Workshop: Machine Learning Open Source Software (Soeren Sonnenburg · Mikio L Braun · Cheng Soon Ong)
- 2008 Poster: On the Efficient Minimization of Classification Calibrated Surrogates (Richard Nock · Frank Nielsen)
- 2008 Spotlight: On the Efficient Minimization of Classification Calibrated Surrogates (Richard Nock · Frank Nielsen)