

Poster

Contrastive Modules with Temporal Attention for Multi-Task Reinforcement Learning

Siming Lan · Rui Zhang · Qi Yi · Jiaming Guo · Shaohui Peng · Yunkai Gao · Fan Wu · Ruizhi Chen · Zidong Du · Xing Hu · xishan zhang · Ling Li · Yunji Chen

Great Hall & Hall B1+B2 (level 1) #1416

Abstract:

In the field of multi-task reinforcement learning, the modular principle, which involves specializing functionalities into different modules and combining them appropriately, has been widely adopted as a promising approach to prevent the negative transfer problem, i.e., performance degradation due to conflicts between tasks. However, most existing multi-task RL methods only combine shared modules at the task level, ignoring that there may be conflicts within a task. In addition, these methods do not take into account that, without constraints, some modules may learn similar functions, restricting the expressiveness and generalization capability of modular methods. In this paper, we propose the Contrastive Modules with Temporal Attention (CMTA) method to address these limitations. CMTA constrains the modules to be different from each other via contrastive learning and combines shared modules at a finer granularity than the task level with temporal attention, alleviating negative transfer within a task and improving both the generalization ability and the performance of multi-task RL. We conducted experiments on Meta-World, a multi-task RL benchmark containing various robotics manipulation tasks. Experimental results show that CMTA outperforms learning each task individually for the first time and achieves substantial performance improvements over the baselines.
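The abstract does not specify implementation details, so the following is only a minimal PyTorch sketch of the two ideas it names: per-timestep attention over a bank of shared modules, plus a contrastive objective that pushes different modules' outputs apart. The module count, network shapes, recurrent summarizer, and the exact form of the contrastive loss are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContrastiveModulesTemporalAttention(nn.Module):
    """Hypothetical sketch: K shared modules combined per timestep by attention."""

    def __init__(self, obs_dim: int, hidden_dim: int = 64, num_modules: int = 4):
        super().__init__()
        # Bank of shared modules (assumed to be small MLPs here).
        self.module_bank = nn.ModuleList(
            nn.Sequential(
                nn.Linear(obs_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim),
            )
            for _ in range(num_modules)
        )
        # A GRU summarizes the history so attention weights can change over time,
        # giving a finer-than-task-level combination of modules.
        self.temporal = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, num_modules)

    def forward(self, obs_seq: torch.Tensor):
        # obs_seq: (batch, time, obs_dim)
        h, _ = self.temporal(obs_seq)                         # (B, T, H)
        weights = F.softmax(self.attn(h), dim=-1)             # (B, T, K)
        outs = torch.stack(
            [m(obs_seq) for m in self.module_bank], dim=2     # (B, T, K, H)
        )
        combined = (weights.unsqueeze(-1) * outs).sum(dim=2)  # (B, T, H)
        return combined, outs


def module_contrastive_loss(outs: torch.Tensor, temperature: float = 0.1):
    """Assumed InfoNCE-style loss: same module across samples = positive pair,
    different modules on the same input = negatives, so modules stay distinct."""
    B, T, K, H = outs.shape
    z = F.normalize(outs.reshape(B * T, K, H), dim=-1)
    loss = outs.new_zeros(())
    for k in range(K):
        # Positive: the same module's output on a shifted batch of inputs.
        pos = (z[:, k] * z.roll(1, dims=0)[:, k]).sum(-1) / temperature
        # Negatives: every other module's output on the same input.
        neg = torch.stack(
            [(z[:, k] * z[:, j]).sum(-1) / temperature for j in range(K) if j != k],
            dim=-1,
        )
        logits = torch.cat([pos.unsqueeze(-1), neg], dim=-1)  # (B*T, K)
        target = torch.zeros(B * T, dtype=torch.long, device=outs.device)
        loss = loss + F.cross_entropy(logits, target)
    return loss / K
```

Under these assumptions, the combined features would feed a policy head, and `module_contrastive_loss` would be added to the RL objective with a small coefficient; the key design point the abstract highlights is that the attention weights vary per timestep rather than being fixed per task.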
