Emergent communication research often focuses on optimizing task-specific utility as the driver of communication. Human languages, however, appear to evolve under pressure to efficiently compress meanings into communication signals by optimizing the Information Bottleneck tradeoff between informativeness and complexity. In this work, we study how trading off these three factors (utility, informativeness, and complexity) shapes emergent communication, and how the resulting communication compares to human language. To this end, we propose Vector-Quantized Variational Information Bottleneck (VQ-VIB), a method for training neural agents to compress inputs into discrete signals embedded in a continuous space. We train agents via VQ-VIB and compare them to previously proposed neural architectures in grounded environments and in a Lewis reference game. Across all neural architectures and settings, accounting for communicative informativeness improves convergence rates, and penalizing communicative complexity leads to human-like lexicon sizes while maintaining high utility. Additionally, we find that VQ-VIB outperforms other discrete communication methods. This work demonstrates how fundamental principles believed to characterize human language evolution may inform emergent communication in artificial agents.
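The core idea behind VQ-VIB's "discrete signals embedded in a continuous space" can be illustrated with a minimal sketch of the vector-quantization step: a continuous encoding is snapped to its nearest entry in a learnable codebook, so each signal is simultaneously a discrete token (the index) and a continuous vector (the embedding). This is an illustrative sketch only, not the paper's implementation; the names `codebook` and `quantize` and the NumPy setup are assumptions for the example.

```python
import numpy as np

# Illustrative sketch of the vector-quantization step (names are hypothetical,
# not taken from the VQ-VIB paper). In the full method the codebook is learned
# jointly with encoder/decoder under utility, informativeness, and
# complexity objectives; here we only show the discretization itself.

rng = np.random.default_rng(0)

num_tokens, dim = 8, 4                          # lexicon size, embedding dim
codebook = rng.normal(size=(num_tokens, dim))   # stand-in for learned token embeddings

def quantize(z):
    """Map a continuous encoding z to its nearest codebook vector.

    Returns the discrete token index and that token's continuous embedding,
    so the emitted signal is discrete yet lives in a continuous space.
    """
    dists = np.linalg.norm(codebook - z, axis=1)  # distance to every token
    idx = int(np.argmin(dists))                   # discrete communication symbol
    return idx, codebook[idx]

z = rng.normal(size=dim)        # stand-in for a speaker encoder's output
idx, embedding = quantize(z)
```

Penalizing complexity (e.g., via a rate term on the token distribution) is what drives the effective lexicon toward the small, human-like sizes reported in the abstract, since unused codebook entries carry no signal.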
Author Information
Mycal Tucker (Massachusetts Institute of Technology)
Julie A Shah (Massachusetts Institute of Technology)
Roger Levy (Massachusetts Institute of Technology)
Noga Zaslavsky (Massachusetts Institute of Technology)
More from the Same Authors
- 2021 : Self-supervised pragmatic reasoning » Jennifer Hu · Roger Levy · Noga Zaslavsky
- 2022 : Towards True Lossless Sparse Communication in Multi-Agent Systems » Seth Karten · Mycal Tucker · Siva Kailas · Katia Sycara
- 2022 : Fast Adaptation via Human Diagnosis of Task Distribution Shift » Andi Peng · Mark Ho · Aviv Netanyahu · Julie A Shah · Pulkit Agrawal
- 2022 : Temporal Logic Imitation: Learning Plan-Satisficing Motion Policies from Demonstrations » Felix Yanwei Wang · Nadia Figueroa · Shen Li · Ankit Shah · Julie A Shah
- 2022 : Aligning Robot Representations with Humans » Andreea Bobu · Andi Peng · Pulkit Agrawal · Julie A Shah · Anca Dragan
- 2022 : Generalization and Translatability in Emergent Communication via Informational Constraints » Mycal Tucker · Roger Levy · Julie A Shah · Noga Zaslavsky
- 2022 Workshop: Information-Theoretic Principles in Cognitive Systems » Noga Zaslavsky · Mycal Tucker · Sarah Marzen · Irina Higgins · Stephanie Palmer · Samuel J Gershman
- 2022 : Opening Remarks » Noga Zaslavsky
- 2022 Poster: Trading off Utility, Informativeness, and Complexity in Emergent Communication » Mycal Tucker · Roger Levy · Julie Shah · Noga Zaslavsky
- 2021 : [O5] Do Feature Attribution Methods Correctly Attribute Features? » Yilun Zhou · Serena Booth · Marco Tulio Ribeiro · Julie A Shah
- 2021 Workshop: Meaning in Context: Pragmatic Communication in Humans and Machines » Jennifer Hu · Noga Zaslavsky · Aida Nematzadeh · Michael Franke · Roger Levy · Noah Goodman
- 2021 : Opening remarks » Jennifer Hu · Noga Zaslavsky · Aida Nematzadeh · Michael Franke · Roger Levy · Noah Goodman
- 2021 Poster: Grammar-Based Grounded Lexicon Learning » Jiayuan Mao · Freda Shi · Jiajun Wu · Roger Levy · Josh Tenenbaum
- 2021 Poster: Emergent Discrete Communication in Semantic Spaces » Mycal Tucker · Huao Li · Siddharth Agrawal · Dana Hughes · Katia Sycara · Michael Lewis · Julie A Shah
- 2019 : Panel Discussion » Jacob Andreas · Edward Gibson · Stefan Lee · Noga Zaslavsky · Jason Eisner · Jürgen Schmidhuber
- 2019 : Invited Talk - 2 » Noga Zaslavsky
- 2018 Poster: Bayesian Inference of Temporal Task Specifications from Demonstrations » Ankit Shah · Pritish Kamath · Julie A Shah · Shen Li
- 2017 : Efficient human-like semantic representations via the information bottleneck principle » Noga Zaslavsky
- 2016 Workshop: The Future of Interactive Machine Learning » Kory Mathewson · Kaushik Subramanian · Mark Ho · Robert Loftin · Joseph L Austerweil · Anna Harutyunyan · Doina Precup · Layla El Asri · Matthew Gombolay · Jerry Zhu · Sonia Chernova · Charles Isbell · Patrick M Pilarski · Weng-Keen Wong · Manuela Veloso · Julie A Shah · Matthew Taylor · Brenna Argall · Michael Littman
- 2015 Poster: Mind the Gap: A Generative Approach to Interpretable Feature Selection and Extraction » Been Kim · Julie A Shah · Finale Doshi-Velez
- 2014 Poster: Fairness in Multi-Agent Sequential Decision-Making » Chongjie Zhang · Julie A Shah
- 2014 Poster: The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification » Been Kim · Cynthia Rudin · Julie A Shah