
Continuously Discovering Novel Strategies via Reward-Switching Policy Optimization
Zihan Zhou · Wei Fu · Bingliang Zhang · Yi Wu
Event URL: https://openreview.net/forum?id=2AJtG_ZIV2

We present Reward-Switching Policy Optimization (RSPO), a paradigm to discover diverse strategies in complex RL environments by iteratively finding novel policies that are both locally optimal and sufficiently different from existing ones. To encourage the learning policy to consistently converge towards a previously undiscovered local optimum, RSPO switches between extrinsic and intrinsic rewards via a trajectory-based novelty measurement during the optimization process. When a sampled trajectory is sufficiently distinct, RSPO performs standard policy optimization with extrinsic rewards. For trajectories with high likelihood under existing policies, RSPO utilizes an intrinsic diversity reward to promote exploration. Experiments show that RSPO is able to discover a wide spectrum of strategies in a variety of domains, ranging from single-agent particle-world tasks and MuJoCo continuous control to multi-agent stag-hunt games and StarCraft II challenges.
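The switching rule described in the abstract can be sketched in a few lines. The snippet below is a hypothetical illustration, not the authors' implementation: the threshold value, the helper name `switched_rewards`, and the use of a per-policy average trajectory log-likelihood as the novelty measure are all assumptions made for the example.

```python
def switched_rewards(traj_log_probs_under_existing, extrinsic_rewards,
                     intrinsic_rewards, novelty_threshold=-5.0):
    """Hypothetical sketch of RSPO-style reward switching.

    traj_log_probs_under_existing: average log-likelihood of the sampled
    trajectory under each previously discovered policy (assumed novelty
    measure). A trajectory counts as novel only if it is unlikely under
    ALL existing policies.
    """
    is_novel = all(lp < novelty_threshold
                   for lp in traj_log_probs_under_existing)
    if is_novel:
        # Sufficiently distinct trajectory: perform standard policy
        # optimization with the task (extrinsic) rewards.
        return extrinsic_rewards
    # Trajectory resembles an existing policy: use the intrinsic
    # diversity reward to push exploration away from found optima.
    return intrinsic_rewards

# Usage: a trajectory unlikely under both discovered policies keeps
# its extrinsic rewards; a likely one gets the diversity reward.
print(switched_rewards([-10.0, -8.0], [1.0, 2.0], [0.1, 0.1]))
print(switched_rewards([-1.0, -8.0], [1.0, 2.0], [0.1, 0.1]))
```

Iterating this procedure, each newly trained policy is added to the set of existing policies, so subsequent runs are steered toward strategies not yet discovered.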

Author Information

Zihan Zhou (Shanghai Qi Zhi Institute)
Wei Fu (Institute for Interdisciplinary Information Sciences, Tsinghua University)
Bingliang Zhang (Tsinghua University)
Yi Wu (OpenAI)
