
A (dis-)information theory of revealed and unrevealed preferences
Nitay Alon · Lion Schulz · Peter Dayan · Jeffrey S Rosenschein
Event URL: https://openreview.net/forum?id=vcpQW_fGaj5

In complex situations involving communication, agents might attempt to mask their intentions, essentially exploiting Shannon's theory of information as a theory of misinformation. Here, we introduce and analyze a simple multiagent reinforcement learning task where a buyer sends signals to a seller via its actions, and in which both agents are endowed with a recursive theory of mind. We show that this theory of mind, coupled with pure reward-maximization, gives rise to agents that selectively distort messages and become skeptical towards one another. Using information theory to analyze these interactions, we show how savvy buyers reduce mutual information between their preferences and actions, and how suspicious sellers learn to strategically reinterpret or discard buyers' signals.
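The abstract's central quantity can be illustrated with a small sketch: estimating the empirical mutual information between a buyer's preferences and its actions. This is not the paper's code — the `naive`/`savvy` data and the function below are hypothetical, and assume discrete preference and action labels sampled from interactions.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Empirical mutual information I(preference; action) in bits,
    estimated from a list of (preference, action) samples."""
    n = len(pairs)
    joint = Counter(pairs)                 # joint counts over (p, a)
    pref_marg = Counter(p for p, _ in pairs)
    act_marg = Counter(a for _, a in pairs)
    mi = 0.0
    for (p, a), count in joint.items():
        p_joint = count / n
        # p_joint * log2( p(p,a) / (p(p) * p(a)) )
        mi += p_joint * math.log2(p_joint * n * n / (pref_marg[p] * act_marg[a]))
    return mi

# Hypothetical data: a naive buyer acts on its preference directly,
# so its actions fully reveal its preference (1 bit of information).
naive = [("likes_A", "buy_A"), ("likes_B", "buy_B")] * 50

# A savvy buyer randomizes its actions independently of its preference,
# driving the mutual information to zero and masking its intentions.
savvy = [("likes_A", "buy_A"), ("likes_A", "buy_B"),
         ("likes_B", "buy_A"), ("likes_B", "buy_B")] * 25

print(mutual_information(naive))  # → 1.0
print(mutual_information(savvy))  # → 0.0
```

In the paper's setting the reduction in mutual information emerges from reward maximization under a recursive theory of mind, rather than from an explicitly randomizing policy as in this toy example.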

Author Information

Nitay Alon (Hebrew University of Jerusalem, Max Planck Institute for Biological Cybernetics)
Lion Schulz (Max Planck Institute for Biological Cybernetics)

I am a PhD student at the Max Planck Institute for Biological Cybernetics, where I am part of Peter Dayan's Department of Computational Neuroscience. I am currently visiting Rahul Bhui's lab at MIT as a Fulbright scholar. My research takes a computational approach to studying how biological and artificial agents come to trust themselves and others, combining methodology from machine learning, behavioral economics, and neuroscience. I am interested both in the basic computations underlying these processes and in how they relate to real-world beliefs and behaviour, for example in mental disorders, polarisation, or misinformation.

Peter Dayan (Max Planck Institute for Biological Cybernetics)
Jeffrey S Rosenschein (The Hebrew University of Jerusalem)
