Extending NGU to Multi-Agent RL: A Preliminary Study
Juan Manuel Hernandez · Diego Fernández · Manuel Cifuentes · Denis Parra · Rodrigo Toro Icarte
Abstract
The Never Give Up (NGU) algorithm has proven effective in reinforcement learning tasks with sparse rewards by combining episodic novelty and intrinsic motivation. In this work, we extend NGU to multi-agent environments and evaluate its performance in the simple\_tag environment from the PettingZoo suite. Compared to a multi-agent DQN baseline, NGU achieves moderately higher returns and more stable learning dynamics. Building on this, we investigate three design choices: (1) a shared replay buffer versus individual replay buffers, (2) sharing episodic novelty among agents using different $k$ thresholds, and (3) using heterogeneous values of the $\beta$ parameter. Our results show that NGU with a shared replay buffer yields the best performance and stability, highlighting that the gains come from combining NGU’s intrinsic exploration with experience sharing. Sharing novelty produces comparable performance when $k=1$, but degrades learning for larger values of $k$. Finally, heterogeneous $\beta$ values do not improve over a small common value. These findings suggest that NGU can be effectively applied in multi-agent settings when experiences are shared and intrinsic exploration signals are carefully tuned.
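For context, the following is a minimal, illustrative sketch of the per-step reward augmentation that the abstract's $k$ and $\beta$ parameters refer to: an episodic novelty bonus computed from the $k$ nearest neighbors of the current state embedding in an episodic memory, mixed into the extrinsic reward with coefficient $\beta$. The function names, the embedding source, and the kernel constants are assumptions for illustration only, and the lifelong (RND) novelty modulator of full NGU is omitted.

```python
# Illustrative sketch (not the paper's implementation) of NGU-style reward mixing.
import numpy as np


def episodic_novelty(embedding, episodic_memory, k=10, eps=1e-3, c=1e-3):
    # Approximate an inverse pseudo-count of the current state from the
    # k nearest neighbors in this episode's memory of state embeddings.
    if len(episodic_memory) == 0:
        return 1.0
    memory = np.stack(episodic_memory)                    # (N, d) stored embeddings
    sq_dists = np.sum((memory - embedding) ** 2, axis=1)  # squared L2 distances
    knn = np.sort(sq_dists)[: min(k, len(sq_dists))]      # k smallest distances
    knn = knn / (np.mean(knn) + eps)                       # normalize by mean neighbor distance
    kernel = eps / (knn + eps)                             # similar states contribute ~1
    return 1.0 / (np.sqrt(np.sum(kernel)) + c)


def augmented_reward(r_extrinsic, embedding, episodic_memory, beta=0.3, k=10):
    # NGU-style mixing of extrinsic and intrinsic rewards: r_t = r_t^e + beta * r_t^i.
    r_intrinsic = episodic_novelty(embedding, episodic_memory, k=k)
    episodic_memory.append(embedding)                      # memory is reset at episode boundaries
    return r_extrinsic + beta * r_intrinsic
```

In the multi-agent variants compared in the paper, whether each agent keeps its own episodic memory and replay buffer, shares them, or uses a different $\beta$ is exactly the set of design choices summarized above.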