Reinforcement learning (RL) algorithms learn through rewards and a process of trial and error. This approach is strongly inspired by the study of animal behaviour and has led to outstanding achievements. However, artificial agents still struggle with a number of challenges at which biological agents excel, such as learning in changing environments and over long timescales, abstracting states and actions, and generalizing and transferring knowledge. The first edition of our workshop last year brought together leading and emerging researchers from neuroscience, psychology and machine learning to share how neural and cognitive mechanisms can provide insights for RL research, and how advances in machine learning can further our understanding of brain and behaviour. This year, we want to build on that success by digging deeper into the challenges that emerged and by extending to novel perspectives. The problem of state and action representation and abstraction emerged quite strongly last year, so this year's program adds new perspectives such as hierarchical reinforcement learning, structure learning, and their biological underpinnings. We will also address learning over long timescales, such as lifelong or continual learning, by including views from synaptic plasticity and developmental neuroscience. By bringing together experts from all of these fields and encouraging discussion, we hope to inspire and further develop connections between biological and artificial reinforcement learning, and to foster novel solutions for both communities.
Organizers' Opening Remarks (Live Intro)
Speaker Introduction: Shakir Mohamed (Live Intro)
Invited Talk #1: Shakir Mohamed - Pain and Machine Learning (Invited Talk)
Invited Talk #1 Q&A: Shakir Mohamed (Live Q&A)
Speaker Introduction: Claudia Clopath (Live Intro)
Invited Talk #2: Claudia Clopath - Continual learning with different timescales (Live, no recording) (Invited Live Talk)
Invited Talk #2 Q&A: Claudia Clopath (Live, no recording) (Live Q&A)
Speaker Introduction: Contributed Talk #1 (Live Intro)
Contributed Talk #1: Learning multi-dimensional rules with probabilistic feedback via value-based serial hypothesis testing (Contributed Talk)
Speaker Introduction: Contributed Talk #2 (Live Intro)
Contributed Talk #2: Evaluating Agents Without Rewards (Contributed Talk)
Coffee Break (Break)
Speaker Introduction: Kim Stachenfeld (Live Intro)
Invited Talk #3: Kim Stachenfeld - Structure Learning and the Hippocampal-Entorhinal Circuit (Invited Talk)
Invited Talk #3 Q&A: Kim Stachenfeld (Live Q&A)
Speaker Introduction: George Konidaris (Live Intro)
Invited Talk #4: George Konidaris - Signal to Symbol (via Skills) (Invited Talk)
Invited Talk #4 Q&A: George Konidaris (Live Q&A)
Coffee Break (Break)
Panel Discussion
Break & Poster Session on Gather.Town (Main) (Poster Session)
Speaker Introduction: Ishita Dasgupta (Live Intro)
Invited Talk #5: Ishita Dasgupta - Embedding structure in data: Progress and challenges for the meta-learning approach (Invited Talk)
Invited Talk #5 Q&A: Ishita Dasgupta (Live Q&A)
Speaker Introduction: Catherine Hartley (Live Intro)
Invited Talk #6: Catherine Hartley - Developmental tuning of action selection (Invited Talk)
Invited Talk #6 Q&A: Catherine Hartley (Live Q&A)
Coffee Break (Break)
Speaker Introduction: Contributed Talk #3 (Live Intro)
Contributed Talk #3: Contrastive Behavioral Similarity Embeddings for Generalization in Reinforcement Learning (Contributed Talk)
Speaker Introduction: Yael Niv (Live Intro)
Invited Talk #7: Yael Niv - Latent causes, prediction errors and the organization of memory (Invited Talk)
Invited Talk #7 Q&A: Yael Niv (Live Q&A)
Closing Remarks (Live)
Social & Poster Session on Gather.Town (Poster Session)