

Workshop

Information-Theoretic Principles in Cognitive Systems

Noga Zaslavsky · Mycal Tucker · Sarah Marzen · Irina Higgins · Stephanie Palmer · Samuel J Gershman

Room 357

Many cognitive and neural systems can be described in terms of compression and transmission of information given bounded resources. While information theory, as a principled mathematical framework for characterizing such systems, has been widely applied in neuroscience and machine learning, its role in understanding cognition has long been contested. This view has been changing in recent years, with growing evidence that information-theoretic optimality principles underlie a wide range of cognitive functions, including perception, working memory, language, and decision making. In parallel, there has been a surge of contemporary information-theoretic approaches in machine learning, enabling large-scale neural-network implementations of information-theoretic models.
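To make the compression/transmission framing concrete, here is a minimal, illustrative sketch (not part of the workshop materials): computing the mutual information I(X;Y) between a signal X and a noisy representation Y, the basic quantity these optimality principles trade off against resource costs. The binary symmetric channel and its crossover probability are assumed for illustration only.

```python
import math

def mutual_information(p_xy):
    """I(X;Y) in bits, for a joint distribution given as {(x, y): prob}."""
    # Marginalize to get p(x) and p(y).
    p_x, p_y = {}, {}
    for (x, y), p in p_xy.items():
        p_x[x] = p_x.get(x, 0.0) + p
        p_y[y] = p_y.get(y, 0.0) + p
    # I(X;Y) = sum_{x,y} p(x,y) log2[ p(x,y) / (p(x) p(y)) ]
    return sum(p * math.log2(p / (p_x[x] * p_y[y]))
               for (x, y), p in p_xy.items() if p > 0)

# Hypothetical example: a binary symmetric channel with crossover
# probability eps = 0.1 and a uniform input distribution.
eps = 0.1
joint = {(0, 0): 0.5 * (1 - eps), (0, 1): 0.5 * eps,
         (1, 0): 0.5 * eps,       (1, 1): 0.5 * (1 - eps)}
print(round(mutual_information(joint), 3))  # 1 - H(eps) ≈ 0.531 bits
```

A bounded system that can transmit fewer than this many bits per use must compress, discarding some information about X; information-theoretic models of cognition ask which information is kept under such constraints.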

These scientific and technological developments open up new avenues for progress toward an integrative computational theory of human and artificial cognition, by leveraging information-theoretic principles as bridges between various cognitive functions and neural representations. This workshop aims to explore these new research directions and bring together researchers from machine learning, cognitive science, neuroscience, linguistics, economics, and potentially other fields, who are interested in integrating information-theoretic approaches that have thus far been studied largely independently of each other. In particular, we aim to discuss questions and exchange ideas along the following directions:

- Understanding human cognition: To what extent can information-theoretic principles advance the understanding of human cognition and its emergence from neural systems? What are the key challenges for future research in information theory and cognition? How might tools from machine learning help overcome these challenges? Addressing such questions could lead to progress in computational models that integrate multiple cognitive functions and cross Marr’s levels of analysis.

- Improving AI agents and human-AI cooperation: Given empirical evidence that information-theoretic principles may underlie a range of human cognitive functions, how can such principles guide artificial agents toward human-like cognition? How might these principles facilitate human-AI communication and cooperation? Can they help agents learn faster with less data? Addressing such questions could lead to progress in developing better human-like AI systems.

Timezone: America/Los_Angeles

Schedule