

Poster in Workshop: Offline Reinforcement Learning

Discrete Uncertainty Quantification Approach for Offline RL

Javier Corrochano · Rubén Majadas · Fernando Fernandez


Abstract:

In many Reinforcement Learning tasks, the classical online interaction of the learning agent with the environment is impractical, either because such interaction is expensive or because it is dangerous. In these cases, previously gathered data can be used instead, giving rise to what is typically called Offline Reinforcement Learning. However, this type of learning faces a large number of challenges, mostly derived from the fact that the exploration/exploitation trade-off can no longer be managed, since the agent cannot collect new experience. Instead, the historical data is usually biased by the way it was obtained, typically by a sub-optimal controller, producing a distributional shift between the historical data and the data required to learn the optimal policy.
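As a rough illustration of the kind of uncertainty signal a discrete offline setting allows (not the authors' method, whose details are not given in this abstract), the sketch below uses a hypothetical logged dataset of transitions and a simple count-based measure: state-action pairs rarely or never visited by the behaviour controller receive high uncertainty, which an offline learner could penalise to mitigate distributional shift.

```python
import numpy as np
from collections import defaultdict

# Hypothetical offline dataset of (state, action, reward, next_state) tuples,
# e.g. logged by a sub-optimal behaviour controller in a discrete MDP.
dataset = [
    (0, 1, 0.0, 1),
    (1, 0, 1.0, 2),
    (0, 1, 0.0, 1),
    (1, 1, -1.0, 0),
]

# Count how often each (state, action) pair appears in the logged data.
visit_counts = defaultdict(int)
for state, action, _, _ in dataset:
    visit_counts[(state, action)] += 1

def count_based_uncertainty(state, action):
    """Count-based uncertainty: pairs seen often in the logged data get low
    uncertainty, pairs seen rarely or never get high uncertainty."""
    n = visit_counts[(state, action)]
    return 1.0 / np.sqrt(n + 1)

# Pairs far from the logged distribution (distributional shift) are flagged
# as uncertain and could be penalised or avoided by an offline RL learner.
print(count_based_uncertainty(0, 1))  # frequently seen -> low uncertainty
print(count_based_uncertainty(2, 0))  # never seen -> high uncertainty
```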
