The goal of the Shared Visual Representations in Human and Machine Intelligence (SVRHM) workshop is to disseminate relevant, parallel findings in the fields of computational neuroscience, psychology, and cognitive science that may inform modern machine learning methods.
In the past few years, machine learning methods (especially deep neural networks) have widely permeated the vision science, cognitive science, and neuroscience communities. As a result, scientific modeling in these fields has greatly benefited, producing a wealth of potentially critical new insights into human learning and intelligence, which remain the gold standard for many tasks. However, the machine learning community has been largely unaware of these cross-disciplinary insights and analytical tools, which may help to solve many of the problems that ML theorists and engineers face today (e.g., adversarial attacks, compression, continual learning, and unsupervised learning).
Thus, we invite leading cognitive scientists with strong computational backgrounds to disseminate their findings to the machine learning community, with the hope of closing the loop: nourishing new ideas and creating cross-disciplinary collaborations.
More information is available at the official workshop website: https://www.svrhm2019.com/
Follow us on Twitter for announcements: https://twitter.com/svrhm2019
Fri 8:50 a.m. - 9:00 a.m. | Opening Remarks | Arturo Deza · Joshua Peterson · Apurva Ratan Murty · Tom Griffiths
Fri 9:00 a.m. - 9:25 a.m. | Predictable representations in humans and machines (Talk) | Olivier Henaff
Despite recent progress in artificial intelligence, humans and animals vastly surpass machine agents in their ability to quickly learn about their environment. While humans generalize to new concepts from small numbers of examples, state-of-the-art artificial neural networks still require huge amounts of supervision. We hypothesize that humans benefit from such data-efficiency because their internal representations support a much wider set of tasks (such as planning and decision-making), which often require making predictions about future events. Using the curvature of natural videos as a measure of predictability, we find that human perceptual representations are indeed more predictable than their inputs, whereas those of current deep neural networks are not. Conversely, by optimizing neural networks for an information-theoretic measure of predictability, we arrive at artificial classifiers whose data-efficiency greatly surpasses that of purely supervised ones. Learning predictable representations may therefore enable artificial systems to perceive the world in a manner closer to that of biological ones.
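The abstract does not spell out the curvature measure, but in Hénaff and colleagues' work on perceptual straightening, trajectory curvature is quantified as the angle between successive frame-to-frame difference vectors of a representation sequence. The sketch below is a minimal NumPy illustration under that assumption; the function name and the synthetic trajectories are ours, not the speaker's.

```python
import numpy as np

def mean_trajectory_curvature(frames: np.ndarray) -> float:
    """Mean curvature, in degrees, of a trajectory of representations.

    frames: array of shape (T, D) holding one D-dimensional
    representation per video frame. The curvature at step t is the
    angle between successive difference vectors
    v_t = x_{t+1} - x_t and v_{t+1}.
    """
    diffs = np.diff(frames, axis=0)                        # (T-1, D) steps
    diffs /= np.linalg.norm(diffs, axis=1, keepdims=True)  # unit steps
    cosines = np.sum(diffs[:-1] * diffs[1:], axis=1)       # consecutive dot products
    cosines = np.clip(cosines, -1.0, 1.0)                  # numerical safety
    return float(np.degrees(np.arccos(cosines)).mean())

# Hypothetical usage: a straight trajectory has near-zero curvature,
# while random steps in high dimensions average near 90 degrees.
rng = np.random.default_rng(0)
direction = rng.normal(size=64)
straight = np.outer(np.linspace(0.0, 1.0, 11), direction)
random_walk = np.cumsum(rng.normal(size=(11, 64)), axis=0)
print(mean_trajectory_curvature(straight))     # ~0
print(mean_trajectory_curvature(random_walk))  # ~90
```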
Fri 9:25 a.m. - 9:50 a.m. | What is disentangling and does intelligence need it? (Talk) | Irina Higgins
Despite the advances in modern deep learning approaches, we are still quite far from the generality, robustness, and data efficiency of biological intelligence. In this talk I will suggest that this gap may be narrowed by re-focusing from the implicit representation learning prevalent in end-to-end deep learning approaches to explicit unsupervised representation learning. In particular, I will discuss the value of disentangled visual representations acquired in an unsupervised manner loosely inspired by biological intelligence. This talk will connect disentangling with the idea of symmetry transformations from physics to argue that disentangled representations reflect important world structure. I will then go over a few first demonstrations of how such representations can be useful in practice: for continual learning, for acquiring reinforcement learning (RL) policies that are more robust to transfer scenarios than standard RL approaches, and for building abstract compositional visual concepts that make possible the imagination of meaningful and diverse samples beyond the training data distribution.
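The talk does not name a specific training objective, but a common route to unsupervised disentangling in this line of work is a beta-weighted VAE objective, in which the KL term of a variational autoencoder is upweighted. Below is a minimal sketch of that loss, assuming a diagonal-Gaussian encoder; the tensor shapes and the beta value are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Beta-weighted VAE objective:
    reconstruction error + beta * KL(q(z|x) || N(0, I)).

    beta > 1 pressures the approximate posterior toward the isotropic
    prior, which empirically encourages disentangled latent factors.
    """
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # Closed-form KL between a diagonal Gaussian and the standard normal.
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + beta * kl

# Hypothetical shapes: a batch of 8 RGB images and a 10-d latent code.
x = torch.rand(8, 3, 64, 64)
x_recon = torch.rand(8, 3, 64, 64)                    # decoder output placeholder
mu, log_var = torch.zeros(8, 10), torch.zeros(8, 10)  # encoder output placeholders
print(beta_vae_loss(x, x_recon, mu, log_var).item())
```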
Fri 9:50 a.m. - 10:10 a.m. | Coffee Break
Fri 10:10 a.m. - 10:35 a.m. | A "distribution mismatch" dataset for comparing representational similarity in ANNs and the brain (Talk) | Wu Xiao
Fri 10:35 a.m. - 11:00 a.m. | Feathers, wings and the future of computer vision research (Talk) | Bill Freeman
Fri 11:00 a.m. - 11:25 a.m. | Taxonomic structure in learning from few positive examples (Talk) | Erin Grant
Fri 11:25 a.m. - 11:50 a.m. | CIFAR-10H: using human-derived soft-label distributions to support more robust and generalizable classification (Talk) | Ruairidh Battleday
The classification performance of deep neural networks has begun to asymptote at near-perfect levels on natural image benchmarks. However, their ability to generalize outside the training set and their robustness to adversarial attacks have not. Humans, by contrast, exhibit robust and graceful generalization far outside their set of training samples. In this talk, I will discuss one strategy for translating these properties to machine-learning classifiers: training them to be uncertain in the same way as humans, rather than always right. When we integrate human uncertainty into training paradigms by using human guess distributions as labels, we find that the resulting classifiers generalize better and are more robust to adversarial attacks. Rather than expect all image datasets to come with such labels, we instead intend our CIFAR-10H dataset to be used as a gold standard, against which algorithmic means of capturing the same information can be evaluated. To illustrate this, I present one automated method that does so: deep prototype models inspired by the cognitive science literature.
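Concretely, replacing one-hot targets with human guess distributions amounts to minimizing cross-entropy against soft labels. Here is a minimal sketch of that loss, assuming each target row is a normalized distribution over the ten CIFAR classes (as in CIFAR-10H); the model outputs below are random placeholders.

```python
import torch
import torch.nn.functional as F

def soft_label_cross_entropy(logits, human_dist):
    """Cross-entropy of model predictions against a full human label
    distribution rather than a one-hot target."""
    log_probs = F.log_softmax(logits, dim=1)
    return -(human_dist * log_probs).sum(dim=1).mean()

# Placeholder batch: 4 images, 10 CIFAR classes. In practice each row of
# human_dist would be the normalized guess counts for one CIFAR-10H image.
logits = torch.randn(4, 10, requires_grad=True)        # stand-in model outputs
human_dist = torch.softmax(torch.randn(4, 10), dim=1)  # stand-in for CIFAR-10H rows
loss = soft_label_cross_entropy(logits, human_dist)
loss.backward()  # gradients flow to the model as usual
print(loss.item())
```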
Fri 11:50 a.m. - 12:15 p.m. | Making the next generation of machine learning datasets: ObjectNet, a new object recognition benchmark (Talk) | Andrei Barbu
Fri 12:15 p.m. - 12:40 p.m. | The building blocks of vision (Talk) | Michael Tarr
Fri 2:00 p.m. - 3:00 p.m. | Poster Session | Ethan Harris · Tom White · Oh Hyeon Choung · Takashi Shinozaki · Dipan Pal · Katherine L. Hermann · Judy Borowski · Camilo Fosco · Chaz Firestone · Vijay Veerabadran · Benjamin Lahner · Chaitanya Ryali · Fenil Doshi · Pulkit Singh · Sharon Zhou · Michel Besserve · Michael Chang · Anelise Newman · Mahesan Niranjan · Jonathon Hare · Daniela Mihai · Marios Savvides · Simon Kornblith · Christina M Funke · Aude Oliva · Virginia de Sa · Dmitry Krotov · Colin Conwell · George Alvarez · Alex Kolchinski · Shengjia Zhao · Mitchell Gordon · Michael Bernstein · Stefano Ermon · Arash Mehrjou · Bernhard Schölkopf · John Co-Reyes · Michael Janner · Jiajun Wu · Josh Tenenbaum · Sergey Levine · Yalda Mohsenzadeh · Zhenglong Zhou
Fri 3:00 p.m. - 3:30 p.m. | Q&A from the Audience: Ask the Grad Students (Discussion Panel) | Erin Grant · Ruairidh Battleday · Sophia Sanborn · Nadine Chang · Nikhil Parthasarathy
"Cross-disciplinary research experiences and tips for graduate school admissions." Panelists: Erin Grant (UC Berkeley), Nadine Chang (CMU), Ruairidh Battleday (Princeton), Sophia Sanborn (UC Berkeley), Nikhil Parthasarathy (NYU).
Fri 3:30 p.m. - 3:55 p.m. | Object representation in the human visual system (Talk) | Talia Konkle
Fri 3:55 p.m. - 4:20 p.m. | Cognitive computational neuroscience of vision (Talk) | Nikolaus Kriegeskorte
Fri 4:20 p.m. - 4:45 p.m. | Perturbation-based remodeling of visual neural network representations (Talk) | Matthias Bethge
Fri 4:45 p.m. - 5:10 p.m. | Local gain control and perceptual invariances (Talk) | Eero Simoncelli
Fri 5:10 p.m. - 6:00 p.m. | Panel Discussion: What sorts of cognitive or biological (architectural) inductive biases will be crucial for developing effective artificial intelligence? (Discussion Panel) | Irina Higgins · Talia Konkle · Matthias Bethge · Nikolaus Kriegeskorte
Panelists: Irina Higgins (DeepMind), Talia Konkle (Harvard), Nikolaus Kriegeskorte (Columbia), Matthias Bethge (Universität Tübingen).
Fri 6:00 p.m. - 6:10 p.m. | Concluding Remarks & Prizes Ceremony (Concluding Remarks) | Arturo Deza · Joshua Peterson · Apurva Ratan Murty · Tom Griffiths
Best Paper Prize (NVIDIA Titan RTX) and Best Poster Prize (Oculus Quest).
Fri 6:10 p.m. - 7:00 p.m. | Evening Reception (Reception)
Sponsored by the MIT Quest for Intelligence.
Author Information
Arturo Deza (Harvard University)
Joshua Peterson (Princeton University)
Apurva Ratan Murty (Massachusetts Institute of Technology)
Tom Griffiths (Princeton University)
More from the Same Authors
- 2021 : Meta-learning inductive biases of learning systems with Gaussian processes » Michael Li · Erin Grant · Tom Griffiths
- 2022 : Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement » Michael Chang · Alyssa L Dayan · Franziska Meier · Tom Griffiths · Sergey Levine · Amy Zhang
- 2022 : How to talk so AI will learn: instructions, descriptions, and pragmatics » Theodore Sumers · Robert Hawkins · Mark Ho · Tom Griffiths · Dylan Hadfield-Menell
- 2022 : On the informativeness of supervision signals » Ilia Sucholutsky · Raja Marjieh · Tom Griffiths
- 2023 Poster: Alignment with human representations supports robust few-shot learning » Ilia Sucholutsky · Tom Griffiths
- 2023 Poster: Tree of Thoughts: Deliberate Problem Solving with Large Language Models » Shunyu Yao · Dian Yu · Jeffrey Zhao · Izhak Shafran · Tom Griffiths · Yuan Cao · Karthik Narasimhan
- 2023 Poster: Im-Promptu: In-Context Composition from Image Prompts » Bhishma Dedhia · Michael Chang · Jake Snell · Tom Griffiths · Niraj Jha
- 2023 Poster: Gaussian Process Probes (GPP) for Uncertainty-Aware Probing » Alexander Ku · Zi Wang · Jason Baldridge · Tom Griffiths · Been Kim
- 2023 Oral: Tree of Thoughts: Deliberate Problem Solving with Large Language Models » Shunyu Yao · Dian Yu · Jeffrey Zhao · Izhak Shafran · Tom Griffiths · Yuan Cao · Karthik Narasimhan
- 2022 Workshop: Shared Visual Representations in Human and Machine Intelligence (SVRHM) » Arturo Deza · Joshua Peterson · N Apurva Ratan Murty · Tom Griffiths
- 2022 Poster: Using natural language and program abstractions to instill human inductive biases in machines » Sreejan Kumar · Carlos G. Correa · Ishita Dasgupta · Raja Marjieh · Michael Y Hu · Robert Hawkins · Jonathan D Cohen · Nathaniel Daw · Karthik Narasimhan · Tom Griffiths
- 2022 Poster: How to talk so AI will learn: Instructions, descriptions, and autonomy » Theodore Sumers · Robert Hawkins · Mark Ho · Tom Griffiths · Dylan Hadfield-Menell
- 2022 Poster: Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation » Michael Chang · Tom Griffiths · Sergey Levine
- 2021 : Reinforcement learning: It's all in the mind » Tom Griffiths
- 2021 Workshop: Workshop on Human and Machine Decisions » Daniel Reichman · Joshua Peterson · Kiran Tomlinson · Annie Liang · Tom Griffiths
- 2021 : Opening remarks » Tom Griffiths
- 2021 : Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks » Anne Harrington · Arturo Deza
- 2021 : Evaluating the Adversarial Robustness of a Foveated Texture Transform Module in a CNN » Jonathan Gant · Andrzej Banburski · Arturo Deza
- 2021 : On the use of Cortical Magnification and Saccades as Biological Proxies for Data Augmentation » Binxu Wang · David Mayo · Arturo Deza · Andrei Barbu · Colin Conwell
- 2021 : Exploring the Structure of Human Adjective Representations » Karan Grewal · Joshua Peterson · Bill Thompson · Tom Griffiths
- 2021 : What Matters In Branch Specialization? Using a Toy Task to Make Predictions » Chenguang Li · Arturo Deza
- 2021 : Invited Talk 4 » Tom Griffiths
- 2021 Workshop: Shared Visual Representations in Human and Machine Intelligence » Arturo Deza · Joshua Peterson · N Apurva Ratan Murty · Tom Griffiths
- 2021 Oral: Passive attention in artificial neural networks predicts human visual selectivity » Thomas Langlois · Haicheng Zhao · Erin Grant · Ishita Dasgupta · Tom Griffiths · Nori Jacoby
- 2021 Poster: Passive attention in artificial neural networks predicts human visual selectivity » Thomas Langlois · Haicheng Zhao · Erin Grant · Ishita Dasgupta · Tom Griffiths · Nori Jacoby
- 2020 Workshop: Shared Visual Representations in Human and Machine Intelligence (SVRHM) » Arturo Deza · Joshua Peterson · N Apurva Ratan Murty · Tom Griffiths
- 2019 : Concluding Remarks & Prizes Ceremony » Arturo Deza · Joshua Peterson · Apurva Ratan Murty · Tom Griffiths
- 2019 : Tom Griffiths » Tom Griffiths
- 2019 : Opening Remarks » Arturo Deza · Joshua Peterson · Apurva Ratan Murty · Tom Griffiths
- 2019 Poster: Reconciling meta-learning and continual learning with online mixtures of tasks » Ghassen Jerfel · Erin Grant · Tom Griffiths · Katherine Heller
- 2019 Spotlight: Reconciling meta-learning and continual learning with online mixtures of tasks » Ghassen Jerfel · Erin Grant · Tom Griffiths · Katherine Heller
- 2019 Poster: On the Utility of Learning about Humans for Human-AI Coordination » Micah Carroll · Rohin Shah · Mark Ho · Tom Griffiths · Sanjit Seshia · Pieter Abbeel · Anca Dragan
- 2017 : Break + Poster (1) » Devendra Singh Chaplot · CHIH-YAO MA · Simon Brodeur · Eri Matsuo · Ichiro Kobayashi · Seitaro Shinagawa · Koichiro Yoshino · Yuhong Guo · Ben Murdoch · Kanthashree Mysore Sathyendra · Daniel Ricks · Haichao Zhang · Joshua Peterson · Li Zhang · Mircea Mironenco · Peter Anderson · Mark Johnson · Kang Min Yoo · Guntis Barzdins · Ahmed H Zaidi · Martin Andrews · Sam Witteveen · SUBBAREDDY OOTA · Prashanth Vijayaraghavan · Ke Wang · Yan Zhu · Renars Liepins · Max Quinn · Amit Raj · Vincent Cartillier · Eric Chu · Ethan Caballero · Fritz Obermeyer