

Talk in Workshop: Learning in the Presence of Strategic Behavior

Learning Against Non-Stationary Agents with Opponent Modeling & Deep Reinforcement Learning

Richard Everett


Abstract:

Humans, like all animals, both cooperate and compete with each other. Through these interactions we learn to observe, act, and manipulate others to maximize our utility, and we continue to do so as others learn alongside us. This is a decentralized, non-stationary learning problem: to survive and flourish, an agent must adapt to the gradual changes of other agents as they learn, as well as capitalize on sudden shifts in their behavior. To date, the majority of work in deep multi-agent reinforcement learning has focused on only one of these two types of adaptation. In this paper, we introduce the Switching Agent Model (SAM), which handles both types of non-stationarity by combining opponent modeling with deep multi-agent reinforcement learning.
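The abstract does not give implementation details, but the core idea, an opponent model paired with a switch between response policies when the opponent's behavior shifts, can be sketched. The Python sketch below is illustrative only and is not the paper's SAM: the class name SwitchingAgentSketch, the KL-divergence shift detector, and the policy-scoring heuristic are all assumptions introduced here for the example.

```python
import numpy as np

class SwitchingAgentSketch:
    """Minimal sketch (not the paper's SAM) of an agent that
    (1) models its opponent's action distribution online and
    (2) switches between response policies on a detected shift.

    Each response policy is a placeholder: a callable mapping an
    opponent action distribution to this agent's action probabilities.
    """

    def __init__(self, n_opponent_actions, response_policies,
                 window=20, shift_threshold=0.5):
        self.n = n_opponent_actions
        self.policies = response_policies        # list of callables
        self.window = window                     # recent-history length
        self.shift_threshold = shift_threshold   # KL threshold for a "sudden shift"
        self.long_run = np.ones(self.n)          # smoothed opponent action counts
        self.recent = []                         # sliding window of opponent actions
        self.active_policy = 0

    def observe_opponent(self, action):
        """Update the opponent model with one observed opponent action."""
        self.long_run[action] += 1
        self.recent.append(action)
        if len(self.recent) > self.window:
            self.recent.pop(0)
        if len(self.recent) == self.window and self._shift_detected():
            # Sudden shift detected: re-select the response policy and
            # reset the long-run model so it tracks the new behavior.
            self.active_policy = self._select_policy()
            self.long_run = np.ones(self.n)
            for a in self.recent:
                self.long_run[a] += 1

    def _empirical(self, counts):
        return counts / counts.sum()

    def _shift_detected(self):
        """Flag a shift when recent behavior diverges from the long-run model."""
        recent_counts = np.bincount(self.recent, minlength=self.n) + 1e-3
        p = self._empirical(recent_counts)
        q = self._empirical(self.long_run)
        kl = float(np.sum(p * np.log(p / q)))
        return kl > self.shift_threshold

    def _select_policy(self):
        """Pick the policy scoring highest against the recent opponent
        distribution; this stands in for a learned critic's value estimates."""
        recent_counts = np.bincount(self.recent, minlength=self.n) + 1e-3
        dist = self._empirical(recent_counts)
        scores = [policy(dist).max() for policy in self.policies]
        return int(np.argmax(scores))

    def act(self):
        """Sample an action from the currently active response policy."""
        dist = self._empirical(self.long_run)
        probs = self.policies[self.active_policy](dist)
        return int(np.random.choice(len(probs), p=probs))
```

In this sketch, gradual opponent change is absorbed by the continuously updated long-run counts, while sudden shifts trigger the discrete policy switch, mirroring the two kinds of adaptation the abstract distinguishes.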

