Poster in Affinity Event: Muslims in ML
A Closer Look at Sparse Training in Deep Reinforcement Learning
Muhammad Athar Ganaie · Vincent Michalski · Samira Ebrahimi Kahou · Yani Ioannou
Keywords: [ Sparse Training ] [ Deep Reinforcement Learning ] [ Pruning ] [ Reinforcement Learning ] [ Dynamic Sparse Training ]
Deep neural networks have enabled remarkable progress in reinforcement learning across a variety of domains, yet advances in model architecture, especially those involving sparse training, remain under-explored. Sparse architectures hold potential for reducing computational overhead in deep reinforcement learning (DRL), where prior studies suggest that parameter under-utilization may create opportunities for efficiency gains. This work investigates the adaptation of sparse training methods from supervised learning to DRL, specifically examining pruning and the RigL algorithm in value-based agents such as DQN. In experiments across multiple Atari games, we study factors that are neglected in supervised sparse training but relevant to DRL, such as the impact of the bias parameter in high-sparsity regimes and the dynamics of dormant neurons under sparse conditions. The results reveal that RigL, despite its adaptability in supervised contexts, underperforms pruning in DRL. Strikingly, removing bias parameters enhances RigL's performance, reduces dormant neurons, and improves stability at high sparsity, while pruning suffers the opposite effect. These findings underscore the need to re-evaluate sparse training methods in the context of DRL, and motivate further investigation of their applicability across larger architectures and more diverse environments.
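Since the abstract centers on RigL's prune-and-grow dynamics, a minimal PyTorch sketch of one RigL-style mask update may help make the mechanism concrete. This is an illustrative implementation under simplified assumptions (a single layer, a constant drop fraction, no per-layer sparsity schedule), not the authors' code; the function name rigl_update and its arguments are hypothetical.

```python
import torch

def rigl_update(weight: torch.Tensor, grad: torch.Tensor, mask: torch.Tensor,
                drop_frac: float = 0.3) -> torch.Tensor:
    """One RigL-style prune-and-grow mask update for a single layer.

    Drops the smallest-magnitude active weights, then regrows the same
    number of connections where the dense gradient magnitude is largest,
    so overall sparsity stays constant.
    """
    flat_w, flat_g, flat_m = weight.view(-1), grad.view(-1), mask.view(-1)
    n_swap = int(drop_frac * int(flat_m.sum().item()))
    if n_swap == 0:
        return mask

    # Drop: among active weights, deactivate the n_swap smallest in magnitude.
    drop_scores = torch.where(flat_m.bool(), flat_w.abs(),
                              torch.full_like(flat_w, float("inf")))
    drop_idx = torch.topk(drop_scores, n_swap, largest=False).indices
    flat_m[drop_idx] = 0.0

    # Grow: among inactive weights, activate the n_swap with the largest
    # dense-gradient magnitude, excluding connections dropped this step.
    grow_scores = flat_g.abs().clone()
    grow_scores[flat_m.bool()] = -1.0
    grow_scores[drop_idx] = -1.0
    grow_idx = torch.topk(grow_scores, n_swap, largest=True).indices
    flat_m[grow_idx] = 1.0

    # Newly grown weights start from zero; re-apply the mask to enforce this.
    weight.data.mul_(mask)
    return mask


# Example: one update on an 80%-sparse layer of a DQN-style network.
w = torch.randn(512, 256)
mask = (torch.rand_like(w) < 0.2).float()
w.data.mul_(mask)
g = torch.randn_like(w)  # stand-in for a dense TD-loss gradient
mask = rigl_update(w, g, mask)
```

The full algorithm additionally decays the drop fraction over training and distributes sparsity across layers; those details are omitted here for brevity.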
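The dormant-neuron analysis mentioned above can likewise be made concrete. The sketch below follows the commonly used definition in which a unit is dormant when its mean activation, normalized by the layer-wide average, falls below a threshold; the function name dormant_fraction and the default threshold are illustrative assumptions, not the paper's exact measurement code.

```python
import torch

@torch.no_grad()
def dormant_fraction(activations: torch.Tensor, tau: float = 0.025) -> float:
    """Fraction of dormant units in one layer, given post-ReLU activations
    of shape (batch, num_units) collected over a batch of states.

    A unit counts as dormant when its mean absolute activation, normalized
    by the layer-wide average, is at or below the threshold tau.
    """
    per_unit = activations.abs().mean(dim=0)     # mean |activation| per unit
    score = per_unit / (per_unit.mean() + 1e-9)  # normalize by layer average
    return (score <= tau).float().mean().item()


# Example: measure dormancy of a hidden layer on a batch of states.
h = torch.relu(torch.randn(256, 512))  # stand-in for hidden activations
print(f"dormant fraction: {dormant_fraction(h):.3f}")
```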