Recent progress in artificial intelligence has transformed the way we live, work, and interact. Machines are mastering complex games and are learning increasingly challenging manipulation skills. Yet where are the robot agents that work for, with, and alongside us? These recent successes rely heavily on the ability to learn at scale, often within the confines of a virtual environment. This presents significant challenges for embodied systems acting and interacting in the real world. In contrast, we require our robots and algorithms to operate robustly in real time, to learn from limited amounts of data, to make mission-critical and sometimes safety-critical decisions, and increasingly even to display a knack for creative problem solving. Achieving this goal will require artificial agents to be able to assess - or introspect on - their own competencies and their understanding of the world. Faced with similar complexity, humans rely on a number of cognitive mechanisms to act and interact successfully in the real world. Our ability to assess the quality of our own thinking - that is, our capacity for metacognition - plays a central role in this. We posit that recent advances in machine learning have, for the first time, enabled the effective implementation and exploitation of similar processes in artificial intelligence. This workshop brings together experts from psychology and cognitive science and researchers at the cutting edge of machine learning, robotics, representation learning and related disciplines, with the ambitious aim of re-assessing how models of intelligence and metacognition can be leveraged in artificial agents given the potency of the toolset now available.
Mon 5:45 a.m. - 6:00 a.m. | Introduction to the Workshop on Metacognition in the Age of AI: Challenges and Opportunities (Live introduction from organizers)
Ingmar Posner · Steve Fleming · Francesca Rossi
Mon 6:00 a.m. - 6:30 a.m. | How does a brain compute confidence? (Invited Talk)
When we evaluate our sensory evidence to make decisions, we also evaluate its quality so that we can judge how likely we are to make correct inferences about it — that is, we judge our perceptual confidence. This is something that we want our artificial systems to be able to do as well, of course. One might think that an optimal inference strategy would be the obvious choice for the nervous system to evaluate its own sensory noise. But is this what the brain is doing? And when we say ‘optimal’, are we making a correct guess at what the cost function ought to be? In this talk I’ll present some evidence to suggest both how we can go about answering these difficult questions, and that the answer might be that the brain is evaluating its own sensory noise in ways that might seem surprising. I’ll close with some implications that these findings may have for our design of intelligent artificial agents.
Megan Peters
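The "optimal" baseline the talk alludes to is often formalised as confidence equal to the posterior probability that the chosen category is correct given the noisy evidence. As a reference point only (not the speaker's model), here is a minimal signal-detection-style sketch; the two-category setup, equal priors, and the Gaussian noise level `sigma` are assumptions for illustration.

```python
import numpy as np

def sdt_confidence(evidence, mu=1.0, sigma=1.0):
    """Posterior probability that the chosen category is correct, assuming
    two equally likely categories at +mu and -mu with Gaussian sensory noise
    of standard deviation sigma (illustrative values)."""
    # Log-likelihood ratio of category +mu vs -mu for this evidence sample.
    llr = 2 * mu * evidence / sigma**2
    # The observer picks the category favoured by the evidence; its posterior
    # probability of being correct is a logistic function of |llr|.
    return 1.0 / (1.0 + np.exp(-np.abs(llr)))

rng = np.random.default_rng(0)
stimulus = rng.choice([-1.0, 1.0], size=5)          # true categories
evidence = stimulus + rng.normal(0.0, 1.0, size=5)  # noisy sensory samples
for s, e in zip(stimulus, evidence):
    choice = np.sign(e)
    print(f"stimulus={s:+.0f} choice={choice:+.0f} confidence={sdt_confidence(e):.2f}")
```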
Mon 6:30 a.m. - 7:00 a.m. | Credit Assignment & Meta-Learning in a Single Lifelong Trial (Invited Talk)
Most current artificial reinforcement learning (RL) agents are trained under the assumption of repeatable trials, and are reset at the beginning of each trial. Humans, however, are never reset. Instead, they are allowed to discover computable patterns across trials, e.g.: in every third trial, go left to obtain reward, otherwise go right. General RL (sometimes called AGI) must assume a single lifelong trial which may or may not include identifiable sub-trials. General RL must also explicitly take into account that policy changes in early life may affect properties of later sub-trials and policy changes. In particular, General RL must take into account recursively that early meta-meta-learning is setting the stage for later meta-learning, which is setting the stage for later learning, etc. Most popular RL mechanisms, however, ignore such lifelong credit assignment chains. Exceptions are the success-story algorithm (1990s), AIXI (2000s), and the mathematically optimal Gödel Machine (2003).
Jürgen Schmidhuber
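The success-story algorithm is only named in the abstract; the toy sketch below is not the original implementation, and its class names, stack layout, and evaluation schedule are all assumptions. It captures the criterion as commonly summarised: keep a stack of self-modifications and undo the most recent ones whenever the reward per time step since they were made has not kept increasing.

```python
class SuccessStoryToy:
    """Toy illustration of a success-story-style check: each policy
    modification is kept only while the reward per time step accrued since
    it was made exceeds the rate accrued since the previous surviving
    modification; otherwise it is undone. Not the original algorithm."""

    def __init__(self):
        self.stack = []          # entries: (step_made, reward_at_that_step, undo_fn)
        self.total_reward = 0.0
        self.step = 0

    def record_modification(self, undo_fn):
        self.stack.append((self.step, self.total_reward, undo_fn))

    def observe(self, reward):
        self.step += 1
        self.total_reward += reward

    def check(self):
        # Undo the newest modification while its reward rate does not beat
        # the rate measured from the modification below it on the stack.
        while len(self.stack) >= 2:
            t_new, r_new, undo_new = self.stack[-1]
            t_old, r_old, _ = self.stack[-2]
            rate_new = (self.total_reward - r_new) / max(self.step - t_new, 1)
            rate_old = (self.total_reward - r_old) / max(self.step - t_old, 1)
            if rate_new <= rate_old:
                self.stack.pop()
                undo_new()       # revert that policy change
            else:
                break
```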
Mon 7:00 a.m. - 8:00 a.m. | Panel Discussion 1 (Panel Discussion/Q&A)
Megan Peters · Jürgen Schmidhuber · Simona Ghetti · Nick Roy · Oiwi Parker Jones · Ingmar Posner
Mon 8:00 a.m. - 8:15 a.m. | Coffee Break
Mon 8:15 a.m. - 8:45 a.m. | Freespace Supports Metacognition for Navigation (Invited Talk)
In a new environment, people identify, remember, and recognize where they can comfortably travel. This paper argues that a robot navigator, too, should learn and rely upon a mental model of unobstructed space. Extensive simulation of a controller for an industrial-strength robot demonstrates how metacognition applied to a model of unobstructed space resolves some engineering challenges and provides resilience in the face of others. The robot plans and learns quickly, considers alternative actions, takes novel shortcuts, and interrupts its own plans.
Susan L Epstein
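The controller itself is not described in enough detail here to reproduce; as a hypothetical illustration of the underlying idea, the sketch below keeps a grid model of space the robot has found unobstructed and raises a metacognitive flag, triggering replanning, when a new observation contradicts the current plan. The grid representation and all names are assumptions.

```python
import numpy as np

class FreespaceModel:
    """Hypothetical grid model of space the robot believes is unobstructed.
    Cells: 0 = unknown, 1 = observed free, -1 = observed blocked."""

    def __init__(self, shape=(50, 50)):
        self.grid = np.zeros(shape, dtype=int)

    def update(self, cell, is_free):
        self.grid[cell] = 1 if is_free else -1

    def conflicts_with_plan(self, plan):
        """Metacognitive check: does the current plan cross any cell the
        model now believes to be blocked?"""
        return any(self.grid[cell] == -1 for cell in plan)


model = FreespaceModel()
plan = [(2, 2), (2, 3), (2, 4)]            # waypoints as grid cells
model.update((2, 3), is_free=False)        # new sensor reading: obstacle
if model.conflicts_with_plan(plan):
    print("Plan contradicts freespace model: interrupt and replan.")
```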
Mon 8:45 a.m. - 9:15 a.m. | Desiderata and ML Research Programme for Higher-Level Cognition (Invited Talk)
How can deep learning be extended to encompass the kind of high-level cognition and reasoning that humans enjoy and that seems to provide us with stronger out-of-distribution generalization than current state-of-the-art AI? Looking into neuroscience and cognitive science, and translating these observations and theories into machine learning, we propose an initial set of inductive biases for representations, computations and probabilistic dependency structure. These biases strongly tie the notion of representation to that of actions, interventions and causality, possibly giving a key to stronger identifiability of latent causal structure and, in turn, better sample complexity in and out of distribution, as well as metacognitive abilities that facilitate exploration aimed at reducing epistemic uncertainty about the agent's causal understanding of the environment.
Yoshua Bengio
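The abstract stops at the level of desiderata; one common way to operationalise exploration that seeks to reduce epistemic uncertainty, offered here only as a hedged illustration rather than the speaker's proposal, is to reward disagreement among an ensemble of learned dynamics models. The ensemble size, the toy linear models, and the bonus scale below are all assumptions.

```python
import numpy as np

def epistemic_bonus(state, action, ensemble, scale=1.0):
    """Exploration bonus from disagreement among an ensemble of one-step
    dynamics models: high disagreement ~ high epistemic uncertainty, so the
    agent is rewarded for visiting (state, action) pairs it does not yet
    understand. `ensemble` is any list of callables predicting the next state."""
    predictions = np.stack([model(state, action) for model in ensemble])
    # Variance of the predicted next state across ensemble members,
    # averaged over state dimensions.
    disagreement = predictions.var(axis=0).mean()
    return scale * disagreement

# Illustrative ensemble: three randomly initialised linear models.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 4)) for _ in range(3)]
ensemble = [lambda s, a, W=W: W @ s + a for W in weights]
print(epistemic_bonus(np.ones(4), np.zeros(4), ensemble))
```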
Mon 9:15 a.m. - 10:15 a.m. | Panel Discussion 2 (Panel Discussion/Q&A)
Susan L Epstein · Yoshua Bengio · Lucina Uddin · Rohan Paul · Steve Fleming
Mon 10:15 a.m. - 10:25 a.m. | Poster/Paper Spotlights (1 minute / 1 slide introductions to posters)
Ezgi Korkmaz · Marianna Ganapini · Ruiqi He · Rylan Schaeffer · Kevin O'Neill
Mon 10:25 a.m. - 10:30 a.m. | Grab Lunch
Mon 10:30 a.m. - 11:30 a.m. | Poster Session
Mon 11:30 a.m. - 12:00 p.m. | Break
Mon 12:00 p.m. - 12:30 p.m. | Performance-Optimized Neural Networks as an Explanatory Framework for Decision Confidence (Invited Talk)
Previous work has sought to understand decision confidence as a prediction of the probability that a decision will be correct, leading to debate over whether these predictions are optimal, and whether they rely on the same decision variable as decisions themselves. This work has generally relied on idealized, low-dimensional modeling frameworks, such as signal detection theory or Bayesian inference, leaving open the question of how decision confidence operates in the domain of high-dimensional, naturalistic stimuli. To address this, we developed a deep neural network model optimized to assess decision confidence directly given high-dimensional inputs such as images. The model naturally accounts for a number of puzzling dissociations between decisions and confidence, suggests a novel explanation of these dissociations in terms of optimization for the statistics of sensory inputs, and makes the surprising prediction that, despite these dissociations, decisions and confidence depend on a common decision variable.
Taylor Webb · Hakwan Lau
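The authors' architecture and training set-up are not reproduced here; the sketch below only shows, under assumed layer sizes, the generic pattern such a model could follow: a shared trunk feeding a decision head and a confidence head, with the confidence head trained to predict whether the decision is correct. At test time the confidence output can be read as the network's estimate of the probability that its decision is correct.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecisionConfidenceNet(nn.Module):
    """Sketch of a network with a decision head and a confidence head.
    The trunk and layer sizes are illustrative assumptions."""

    def __init__(self, n_classes=2):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 64), nn.ReLU(),
        )
        self.decision_head = nn.Linear(64, n_classes)
        self.confidence_head = nn.Linear(64, 1)

    def forward(self, x):
        h = self.trunk(x)
        return self.decision_head(h), torch.sigmoid(self.confidence_head(h))

def loss_fn(logits, confidence, labels):
    # Decision loss: standard classification.
    decision_loss = F.cross_entropy(logits, labels)
    # Confidence target: 1 if the decision is correct, else 0, so the
    # confidence head learns to predict the probability of being correct.
    correct = (logits.argmax(dim=1) == labels).float().unsqueeze(1)
    confidence_loss = F.binary_cross_entropy(confidence, correct)
    return decision_loss + confidence_loss

x = torch.randn(8, 1, 28, 28)               # dummy image batch
labels = torch.randint(0, 2, (8,))
logits, confidence = DecisionConfidenceNet()(x)
print(loss_fn(logits, confidence, labels))
```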
Mon 12:30 p.m. - 1:00 p.m. | Causal World Models (Invited Talk)
Bernhard Schölkopf
Mon 1:00 p.m. - 2:00 p.m. | Panel Discussion 3 (Panel Discussion/Q&A)
Taylor Webb · Hakwan Lau · Bernhard Schölkopf · Jiangying Zhou · Lior Horesh · Francesca Rossi
Mon 2:00 p.m. - 2:30 p.m. | Closing Remarks (Capstone to the day, and thanks, from the organizers)
Poster | An Algorithmic Theory of Metacognition in Minds and Machines
Humans sometimes choose actions that they themselves can identify as sub-optimal, or wrong, even in the absence of additional information. How is this possible? We present an algorithmic theory of metacognition based on a well-understood trade-off in reinforcement learning (RL) between value-based RL and policy-based RL. To the cognitive (neuro)science community, our theory answers the outstanding question of why information can be used for error detection but not for action selection. To the machine learning community, our proposed theory creates a novel interaction between the Actor and Critic in Actor-Critic agents and notes a novel connection between RL and Bayesian Optimization. We call our proposed agent the Metacognitive Actor Critic (MAC). We conclude by showing how to create metacognition in machines by implementing a deep MAC and showing that it can detect (some of) its own suboptimal actions without external information or delay.
Rylan Schaeffer
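The MAC agent itself is only summarised above, so the snippet below is a hypothetical sketch of one way an Actor-Critic interaction could surface suboptimal actions: compare the critic's value estimate for the actor's chosen action against the critic's best available action and flag the choice when the gap exceeds a tolerance. The function name and threshold are assumptions, not the paper's method.

```python
import numpy as np

def detect_suboptimal_action(q_values, actor_action, tolerance=0.1):
    """Hypothetical metacognitive check: flag the actor's action if the
    critic estimates that a different action is better by more than
    `tolerance`. `q_values` holds the critic's estimate for each action
    in the current state."""
    best_value = np.max(q_values)
    regret = best_value - q_values[actor_action]
    return regret > tolerance, regret

q_values = np.array([0.2, 0.9, 0.5])   # illustrative critic estimates
flagged, regret = detect_suboptimal_action(q_values, actor_action=2)
print(f"flagged={flagged}, estimated regret={regret:.2f}")
```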
Poster | Promoting Metacognitive Learning through Systematic Reflection
People are able to learn clever cognitive strategies through trial and error from small amounts of experience. This is facilitated by people's ability to reflect on their own thinking, which is known as metacognition. To examine the effects of deliberate, systematic metacognitive reflection on how people learn to plan, we guided an experimental group to systematically reflect on their decision-making process after every third decision. We found that participants assisted by reflection prompts learned to plan better more quickly. Moreover, we found that reflection led to immediate improvements in the participants' planning strategies. Our preliminary results suggest that deliberate metacognitive reflection can help people discover clever cognitive strategies from very small amounts of experience. Understanding the role of reflection in human learning is a promising approach for making reinforcement learning more sample efficient in both humans and machines.
Frederic Becker
Poster | Meta Dynamic Programming
To accelerate the pace at which they acquire new information, reinforcement learning algorithms can select which data to use first for training. In this paper, we outline a general methodology to perform this selection process, hinting at a generation of agents which deeply think about their current and future learning state while selecting their training data. In the context of prioritization methods for asynchronous dynamic programming, we propose a meta-level technique for state selection. We show that the method, called meta dynamic programming, together with its approximations, can provide promising performance improvements while being grounded in a theoretically sound metacognitive formalization.
Pierluca D'Oro
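The paper's meta-level selection rule is not given here; for context only, the sketch below shows the kind of prioritization baseline it builds on: asynchronous value iteration that always backs up the state with the largest current Bellman error. The toy MDP arrays are illustrative assumptions.

```python
import numpy as np

def prioritized_value_iteration(P, R, gamma=0.9, sweeps=200):
    """Asynchronous value iteration that, at every step, backs up the state
    with the largest current Bellman error: a simple instance of prioritized
    state selection. P: (S, A, S) transition probabilities, R: (S, A) rewards."""
    S, A, _ = P.shape
    V = np.zeros(S)
    for _ in range(sweeps):
        backups = np.array([max(R[s, a] + gamma * P[s, a] @ V for a in range(A))
                            for s in range(S)])
        s = int(np.argmax(np.abs(backups - V)))   # the state selection step
        V[s] = backups[s]
    return V

# Tiny two-state, two-action MDP for illustration.
P = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.8, 0.2], [0.2, 0.8]]])
R = np.array([[1.0, 0.0],
              [0.0, 1.0]])
print(prioritized_value_iteration(P, R))
```

Recomputing every Bellman error at each step defeats the efficiency purpose of prioritization; practical schemes maintain priorities incrementally, and the paper proposes replacing heuristics of this kind with a metacognitively formalised selection criterion.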
Poster | Non-Robust Feature Mapping in Deep Reinforcement Learning
Adversarial perturbations to state observations can dramatically degrade the performance of deep reinforcement learning policies, and thus raise concerns regarding the robustness of deep reinforcement learning agents. A sizeable body of work has focused on addressing the robustness problem in deep reinforcement learning, and there are several recent proposals for adversarial training methods in the deep reinforcement learning domain. In our work we focus on the robustness of state-of-the-art adversarially trained deep reinforcement learning policies and vanilla trained deep reinforcement learning policies. We propose two novel algorithms to map non-robust features in deep reinforcement learning policies. We conduct several experiments in the Arcade Learning Environment (ALE), and with our proposed feature mapping algorithms we show that while the state-of-the-art adversarial training method eliminates a certain set of non-robust features, a new set of non-robust features, more intrinsic to the adversarial training itself, is created. Our results lay out concerns that arise when using existing state-of-the-art adversarial training methods, and we believe our proposed feature mapping algorithms can aid in the process of building more robust deep reinforcement learning policies.
Ezgi Korkmaz
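The paper's two feature-mapping algorithms are not reproduced here; as background on the threat model only, the sketch below applies an FGSM-style perturbation to a single state observation and checks whether the policy's greedy action flips. The small MLP policy and the value of epsilon are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_state_perturbation(policy, state, epsilon=0.01):
    """FGSM-style perturbation of a state observation: move the observation
    in the direction that most decreases the probability of the currently
    chosen action, then check whether the greedy action changes."""
    state = state.clone().detach().requires_grad_(True)
    logits = policy(state)
    action = logits.argmax(dim=-1)
    # Increasing this loss pushes probability away from the chosen action.
    loss = F.cross_entropy(logits, action)
    loss.backward()
    perturbed = state + epsilon * state.grad.sign()
    new_action = policy(perturbed).argmax(dim=-1)
    return perturbed, bool((new_action != action).any())

# Illustrative policy: a small MLP over a 4-dimensional observation.
policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 3))
state = torch.randn(1, 4)
_, action_changed = fgsm_state_perturbation(policy, state, epsilon=0.5)
print("action changed under perturbation:", action_changed)
```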
Poster | Have I done enough planning or should I plan more?
People's decisions about how to allocate their limited computational resources are essential to human intelligence. An important component of this metacognitive ability is deciding whether to continue thinking about what to do or to move on to the next decision. Here, we show that people acquire this ability through learning, and we reverse-engineer the underlying learning mechanisms. Using a process-tracing paradigm that externalises human planning, we find that people quickly adapt how much planning they perform to the cost and benefit of planning. To discover the underlying metacognitive learning mechanisms, we augmented a set of reinforcement learning models with metacognitive features and performed Bayesian model selection. Our results suggest that the metacognitive ability to adjust the amount of planning might be learned through a policy-gradient mechanism that is guided by metacognitive pseudo-rewards that communicate the value of planning.
Ruiqi He
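The learned mechanism identified in the paper is not reimplemented here; the snippet below only illustrates the underlying metareasoning decision in its simplest, myopic form: keep planning while the expected improvement from one more planning step exceeds its cost. All quantities are illustrative.

```python
def should_keep_planning(expected_improvement, planning_cost):
    """Myopic value-of-computation rule: plan one more step only if the
    expected gain in decision quality exceeds the cost of that step."""
    return expected_improvement > planning_cost

# Illustrative trace: diminishing returns to planning with a fixed step cost.
improvements = [3.0, 1.5, 0.7, 0.3, 0.1]
cost_per_step = 0.5
steps_taken = 0
for gain in improvements:
    if not should_keep_planning(gain, cost_per_step):
        break
    steps_taken += 1
print(f"planned for {steps_taken} steps before acting")
```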
Poster | Thinking Fast and Slow in AI: The Role of Metacognition
AI systems have seen dramatic advancement in recent years, bringing many applications that pervade our everyday life. However, we are still mostly seeing instances of narrow AI: many of these recent developments are typically focused on a very limited set of competencies and goals, e.g., image interpretation, natural language processing, classification, prediction, and many others. Moreover, while these successes can be accredited to improved algorithms and techniques, they are also tightly linked to the availability of huge datasets and computational power. State-of-the-art AI still lacks many capabilities that would naturally be included in a notion of (human) intelligence. We argue that a better study of the mechanisms that allow humans to have these capabilities can help us understand how to imbue AI systems with these competencies. We focus especially on D. Kahneman's theory of thinking fast and slow, and we propose a multi-agent AI architecture where incoming problems are solved either by system 1 (or "fast") agents, which react by exploiting only past experience, or by system 2 (or "slow") agents, which are deliberately activated when there is the need to reason and search for optimal solutions beyond what is expected from the system 1 agent. Both kinds of agents are supported by a model of the world, containing domain knowledge about the environment, and a model of "self", containing information about past actions of the system and the solvers' skills.
Marianna Ganapini
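The architecture is described above only at a high level; the sketch below is a hypothetical minimal dispatcher in that spirit, routing a problem to the fast (system 1) agent when the model of self reports adequate past performance on that kind of problem and to the slow (system 2) solver otherwise. The class names, threshold, and toy agents are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SelfModel:
    """Toy model of 'self': tracks how well the fast agent has done per problem type."""
    success_rate: dict = field(default_factory=dict)

    def fast_agent_is_trusted(self, problem_type, threshold=0.9):
        return self.success_rate.get(problem_type, 0.0) >= threshold

def solve(problem_type, instance, self_model, fast_agent, slow_agent):
    """Metacognitive dispatch: rely on past experience (system 1) when it is
    trusted for this kind of problem, otherwise deliberate (system 2)."""
    if self_model.fast_agent_is_trusted(problem_type):
        return fast_agent(instance)
    return slow_agent(instance)

# Illustrative agents: a cached answer vs. exhaustive search over candidates.
fast_agent = lambda instance: instance["cached_answer"]
slow_agent = lambda instance: max(instance["candidates"], key=instance["score"])

self_model = SelfModel(success_rate={"routing": 0.95, "scheduling": 0.4})
problem = {"cached_answer": "route A", "candidates": ["route A", "route B"],
           "score": lambda r: len(r)}
print(solve("routing", problem, self_model, fast_agent, slow_agent))     # fast path
print(solve("scheduling", problem, self_model, fast_agent, slow_agent))  # slow path
```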
Poster | Measuring and Modeling Confidence in Human Causal Judgment
The human capacity for causal judgment has long been thought to depend on an ability to consider counterfactual alternatives: the lightning strike caused the forest fire because, had it not struck, the forest fire would not have ensued. To accommodate psychological effects on causal judgment, a range of recent accounts have proposed that people probabilistically sample counterfactual alternatives from which they compute a graded index of causal strength. While such models have had success in describing the influence of probability on causal judgments, among other effects, we show that these models make further untested predictions: probability should also influence people's metacognitive confidence in their causal judgments. In a large (N=3020) sample of participants in a causal judgment task, we found evidence that normality indeed influences people's confidence in their causal judgments and that these influences were predicted by a counterfactual sampling model. We take this result as supporting evidence for existing Bayesian accounts of causal judgment.
Kevin O'Neill
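The counterfactual sampling model used in the paper is not reproduced here; the sketch below illustrates the general scheme under assumed parameters only: sample counterfactual worlds in which the candidate cause is removed, estimate causal strength as the fraction of samples in which the effect would have disappeared, and read confidence off the agreement across those samples.

```python
import numpy as np

def counterfactual_causal_judgment(p_alternative, n_samples=200, seed=0):
    """Monte Carlo counterfactual sketch: sample worlds in which the candidate
    cause is removed; the effect still occurs only if some alternative cause
    fires (probability p_alternative). Causal strength is the fraction of
    samples in which the effect would have disappeared; confidence reflects
    the agreement (low variance) across those samples."""
    rng = np.random.default_rng(seed)
    effect_without_cause = rng.random(n_samples) < p_alternative
    causal_strength = 1.0 - effect_without_cause.mean()
    confidence = 1.0 - effect_without_cause.std()   # in [0.5, 1.0] for Bernoulli samples
    return causal_strength, confidence

# Unlikely alternatives yield a strong, confident judgment; a likely
# alternative yields a weaker, less confident one.
print(counterfactual_causal_judgment(p_alternative=0.1))
print(counterfactual_causal_judgment(p_alternative=0.5))
```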