

Spotlight in Workshop: Reinforcement Learning for Real Life (RL4RealLife) Workshop

Reinforcement Learning Approaches for Traffic Signal Control under Missing Data

Hao Mei · Junxian Li · Bin Shi · Hua Wei


Abstract:

Traffic signal control is critical for improving transportation efficiency and alleviating traffic congestion. In recent years, deep reinforcement learning (RL) methods for traffic signal control have achieved better performance than conventional rule-based approaches. Most RL approaches require observations of the environment for the agent to decide which action is optimal for the long-term reward. However, in real-world urban scenarios, observations of traffic states may frequently be missing due to a lack of sensors, which makes existing RL methods inapplicable to road networks with missing observations. In this work, we aim to control traffic signals in a real-world setting where some intersections in the road network are not equipped with sensors and therefore have no direct observations around them. Specifically, we propose and investigate two types of approaches: the first imputes the traffic states to enable adaptive control, while the second imputes both states and rewards to enable not only adaptive control but also the training of RL agents. Through extensive experiments on both synthetic and real-world road network traffic, we show that imputation can enable the application of RL methods to intersections without observations, while the positions of the unobserved intersections can largely influence the performance of RL agents.
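To make the first type of approach concrete, below is a minimal sketch of state imputation for unobserved intersections, assuming a simple neighbor-averaging rule over the road-network graph. The abstract does not specify the imputation model, so the function names, the data layout, and the averaging rule are all illustrative assumptions rather than the authors' method; the imputed state would then be fed to the RL controller as if it were a real observation.

```python
# Hypothetical sketch: impute states of sensor-less intersections from observed
# neighbors. The neighbor-averaging rule and all names are assumptions, not the
# paper's actual imputation model.
import numpy as np


def impute_states(observed, adjacency, num_features):
    """Return a state vector for every intersection in `adjacency`.

    observed  : dict {intersection_id: np.ndarray of shape (num_features,)}
                states at intersections that have sensors
    adjacency : dict {intersection_id: list of neighboring intersection ids}
    """
    imputed = dict(observed)
    for node, neighbors in adjacency.items():
        if node in imputed:
            continue  # sensor present: keep the real observation
        neighbor_obs = [observed[n] for n in neighbors if n in observed]
        if neighbor_obs:
            # assumed imputation rule: average of observed neighbors
            imputed[node] = np.mean(neighbor_obs, axis=0)
        else:
            # no observed neighbor: fall back to a zero state
            imputed[node] = np.zeros(num_features)
    return imputed


if __name__ == "__main__":
    # Toy 3-intersection corridor A-B-C, where B has no sensor.
    adjacency = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
    observed = {"A": np.array([4.0, 1.0]), "C": np.array([2.0, 3.0])}  # e.g., queue lengths
    states = impute_states(observed, adjacency, num_features=2)
    print(states["B"])  # imputed state for the unobserved intersection -> [3. 2.]
```

The second type of approach would additionally impute the reward signal at unobserved intersections in the same spirit, so that RL agents there can be trained rather than only executed.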
