Workshop
Cognitively Informed Artificial Intelligence: Insights From Natural Intelligence
Michael Mozer · Brenden Lake · Angela Yu
Sat 9 Dec, 8 a.m. PST
The goal of this workshop is to bring together cognitive scientists, neuroscientists, and AI researchers to discuss opportunities for improving machine learning by leveraging our scientific understanding of human perception and cognition. There is a history of making these connections: artificial neural networks were originally motivated by the massively parallel, deep architecture of the brain; considerations of biological plausibility have driven the development of learning procedures; and architectures for computer vision draw parallels to the connectivity and physiology of mammalian visual cortex. However, beyond these celebrated examples, cognitive science and neuroscience have fallen short of their potential to influence the next generation of AI systems. Areas such as memory, attention, and development have rich theoretical and experimental histories, yet these concepts, as applied to AI systems so far, bear only a superficial resemblance to their biological counterparts.
The premise of this workshop is that there are valuable data and models from cognitive science that can inform the development of intelligent adaptive machines, and can endow learning architectures with the strength and flexibility of the human cognitive architecture. The structures and mechanisms of the mind and brain can provide the sort of strong inductive bias needed for machine-learning systems to attain human-like performance. We conjecture that this inductive bias will become more important as researchers move from domain-specific tasks such as object and speech recognition toward tackling general intelligence and the human-like ability to dynamically reconfigure cognition in service of changing goals. For ML researchers, the workshop will provide access to a wealth of data and concepts situated in the context of contemporary ML. For cognitive scientists, the workshop will suggest research questions that are of critical interest to ML researchers.
The workshop will focus on three interconnected topics of particular relevance to ML:
(1) Learning and development. Cognitive capabilities expressed early in a child’s development are likely to be crucial for bootstrapping adult learning and intelligence. Intuitive physics and intuitive psychology allow the developing organism to build an understanding of the world and of other agents. Additionally, children and adults often demonstrate “learning-to-learn,” where previous concepts and skills form a compositional basis for learning new concepts and skills.
(2) Memory. Human memory operates on multiple time scales, from memories that persist literally for the blink of an eye to those that persist for a lifetime. These different forms of memory serve different computational purposes. Although forgetting is typically regarded as a disadvantage, the ability to selectively forget or override irrelevant knowledge in nonstationary environments is highly desirable (see the sketch following this list).
(3) Attention and decision making. These are relatively high-level cognitive functions that allow an agent, in accordance with task demands, to purposefully control its external environment and sensory data stream, to dynamically reconfigure its internal representations and architecture, and to devise action plans that strategically trade off multiple, often conflicting behavioral objectives.
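To make the multi-time-scale memory idea in (2) concrete, here is a minimal Python sketch, written for this overview rather than drawn from any of the talks below. Several exponential-moving-average traces with different decay rates track a drifting signal, and readout favors whichever time scale currently predicts best, a crude form of selectively overriding stale knowledge after the environment changes. The decay rates, noise level, and change point are arbitrary choices for the example.

```python
import numpy as np

# Illustrative sketch only: memory traces on several time scales tracking a
# nonstationary signal. Fast traces forget quickly and recover after a change;
# slow traces are stable under noise but lag once the world shifts.
rng = np.random.default_rng(0)

decays = np.array([0.5, 0.9, 0.99])   # one exponential-moving-average trace per time scale
traces = np.zeros_like(decays)        # each trace's current estimate of the signal
errors = np.zeros_like(decays)        # running estimate of each trace's prediction error

signal = 1.0
for t in range(400):
    if t == 200:                      # nonstationary environment: the signal jumps
        signal = -1.0
    obs = signal + 0.3 * rng.standard_normal()

    errors = decays * errors + (1 - decays) * np.abs(obs - traces)
    traces = decays * traces + (1 - decays) * obs

    # Crude "selective forgetting": read out from whichever time scale currently
    # predicts best, so stale slow-time-scale knowledge is overridden after a shift.
    best = int(np.argmin(errors))

print("per-scale estimates:", np.round(traces, 2))
print("currently preferred decay:", decays[best])
```

The point is not this particular mechanism but that choosing which time scales to maintain, and when to override them, is exactly the kind of inductive bias that human memory research can inform.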
The long-term aims of this workshop are:
* to promote work that incorporates insights from human cognition to suggest novel and improved AI architectures;
* to facilitate the development of ML methods that can better predict human behavior; and
* to support the development of a field of ‘cognitive computing’ that is more than a marketing slogan: a field that improves on both natural and artificial cognition by synergistically advancing each and integrating their strengths in complementary ways.
Schedule
Sat 8:30 a.m. - 8:40 a.m. | Workshop overview (talk) | Michael Mozer · Angela Yu · Brenden Lake
Sat 8:40 a.m. - 9:05 a.m. | Cognitive AI (talk) | Brenden Lake
Sat 9:05 a.m. - 9:30 a.m. | Computational modeling of human face processing (talk) | Angela Yu
Sat 9:30 a.m. - 9:55 a.m. | People infer object shape in a 3D, object-centered coordinate system (talk) | Robert A Jacobs
Sat 9:55 a.m. - 10:10 a.m. | Relational neural expectation maximization (talk) | Sjoerd van Steenkiste
Sat 10:10 a.m. - 10:15 a.m. | Contextual dependence of human preference for complex objects: A Bayesian statistical account (spotlight) | Chaitanya Ryali
Sat 10:15 a.m. - 10:20 a.m. | A biologically-inspired sparse, topographic recurrent neural network model for robust change detection (spotlight) | Devarajan Sridharan
Sat 10:20 a.m. - 10:25 a.m. | Visual attention guided deep imitation learning (spotlight) | Ruohan Zhang
Sat 10:25 a.m. - 10:30 a.m. | Human learning of video games (spotlight) | Pedro Tsividis
Sat 10:30 a.m. - 11:00 a.m. | Coffee break and poster session
Sat 11:00 a.m. - 11:25 a.m. | Life history and learning: Extended human childhood as a way to resolve explore/exploit trade-offs and improve hypothesis search (talk) | Alison Gopnik
Sat 11:25 a.m. - 11:50 a.m. | Meta-reinforcement learning in brains and machines (talk) | Matt Botvinick
Sat 11:50 a.m. - 12:15 p.m. | Revealing human inductive biases and metacognitive processes with rational models (talk) | Tom Griffiths
Sat 12:15 p.m. - 12:30 p.m. | Learning to select computations (talk) | Falk Lieder · Fred Callaway · Sayan Gul · Paul Krueger
Sat 2:00 p.m. - 2:25 p.m. | From deep learning of disentangled representations to higher-level cognition (talk) | Yoshua Bengio
Sat 2:25 p.m. - 2:50 p.m. | Access consciousness and the construction of actionable representations (talk) | Michael C Mozer
Sat 2:50 p.m. - 3:05 p.m. | Evaluating the capacity to reason about beliefs (talk) | Aida Nematzadeh
Sat 3:05 p.m. - 3:30 p.m. | Coffee break and poster session II
Sat 3:30 p.m. - 3:55 p.m. | Mapping the spatio-temporal dynamics of cognition in the human brain (talk) | Aude Oliva
Sat 3:55 p.m. - 4:20 p.m. | Scale-invariant temporal memory in AI (talk) | Marc Howard
Sat 4:20 p.m. - 4:35 p.m. | Scale-invariant temporal history (SITH): Optimal slicing of the past in an uncertain world (talk) | Tyler Spears · Brandon Jacques · Marc Howard · Per B Sederberg
Sat 4:35 p.m. - 4:40 p.m. | Efficient human-like semantic representations via the information bottleneck principle (spotlight) | Noga Zaslavsky
Sat 4:40 p.m. - 4:45 p.m. | The mutation sampler: A sampling approach to causal representation (spotlight) | Zachary Davis
Sat 4:45 p.m. - 4:50 p.m. | Generating more human-like recommendations with a cognitive model of generalization (spotlight) | David Bourgin
Sat 4:50 p.m. - 5:25 p.m. | Sample-efficient reinforcement learning through transfer and architectural priors (poster) | Benjamin Spector
Sat 4:50 p.m. - 5:25 p.m. | Variational probability flow for biologically plausible training of deep neural networks (poster) | Zuozhu Liu · Shaowei Lin
Sat 4:50 p.m. - 5:25 p.m. | Curiosity-driven reinforcement learning with homeostatic regulation (poster) | Ildefons Magrans de Abril
Sat 4:50 p.m. - 5:25 p.m. | Context-modulation of hippocampal dynamics and deep convolutional networks (poster) | Brad Aimone
Sat 4:50 p.m. - 5:25 p.m. | Cognitive modeling and the wisdom of the crowd (poster) | Michael D Lee
Sat 4:50 p.m. - 5:25 p.m. | Concept acquisition through meta-learning (poster) | Erin Grant
Sat 4:50 p.m. - 5:25 p.m. | Pre-training attentional mechanisms (poster) | Jack Lindsey
Sat 4:50 p.m. - 5:25 p.m. | Using STDP for unsupervised, event-based online learning (poster) | Johannes Thiele
Sat 4:50 p.m. - 5:25 p.m. | Learning to organize knowledge with N-gram machines (poster) | Fan Yang
Sat 4:50 p.m. - 5:25 p.m. | Power-law temporal discounting over a logarithmically compressed timeline for scale-invariant reinforcement learning (poster) | Zoran Tiganj
Sat 4:50 p.m. - 5:25 p.m. | Improving transfer using augmented feedback in progressive neural networks (poster) | Deepika Bablani · Parth Chadha
Sat 4:50 p.m. - 5:25 p.m. | Question asking as program generation (poster) | Anselm Rothe
Sat 5:25 p.m. - 5:50 p.m. | Object-oriented intelligence (talk) | Peter Battaglia
Sat 5:50 p.m. - 6:15 p.m. | Representational primitives, in minds and machines (talk) | Gary Marcus