

Poster in Workshop: LaReL: Language and Reinforcement Learning

SCERL: A Benchmark for intersecting language and safe reinforcement learning

Lan Hoang · Shivam Ratnakar · Nicolas Galichet · Akifumi Wachi · Keerthiram Murugesan · Songtao Lu · Mattia Atzeni · Michael Katz · Subhajit Chaudhury

Keywords: [ text-based reinforcement learning ] [ safety ] [ constraints ]


Abstract:

Safety and robustness are critical concerns for AI research. Two lines of research have so far remained distinct: (i) safe reinforcement learning, where an agent must interact with the world under safety constraints, and (ii) textual reinforcement learning, where an agent must perform robust reasoning about, and modelling of, the state of the environment. In this paper, we propose Safety-Constrained Environments for Reinforcement Learning (SCERL), a benchmark that bridges the gap between these two research directions. The benchmark contributes safety-relevant environments comprising (i) a sample set of 20 games built on new logical rules that represent physical safety issues; (ii) monitoring of safety violations; and (iii) a mechanism to generate a more diverse set of games with safety constraints, together with corresponding metrics for safety types and difficulty levels. The paper reports selected baseline results on the benchmark. Our aim is for SCERL and its flexible framework to provide a set of tasks that demonstrate language-based safety challenges and inspire the research community to further explore safety in text-based domains.
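
To make the idea of safety-violation monitoring concrete, here is a minimal, purely illustrative sketch. It is not the SCERL API; all names (`TextEnv`, `SafetyMonitor`, the `info`-dict convention) are hypothetical. It shows one common pattern: a wrapper around a text-based environment that counts constraint violations flagged during an episode.

```python
# Purely illustrative sketch of safety-violation monitoring for a
# text-based environment; this is NOT the SCERL API. All names here
# (TextEnv, SafetyMonitor, the info-dict convention) are hypothetical.

class TextEnv:
    """A toy text environment with one safety rule: do not touch the stove."""

    def reset(self):
        self.steps = 0
        return "You are in a kitchen. A stove is lit."

    def step(self, action):
        self.steps += 1
        violated = "stove" in action           # hypothetical safety rule
        obs = "You burned your hand!" if violated else "Nothing happens."
        reward = -1.0 if violated else 0.0
        done = self.steps >= 3
        info = {"safety_violation": violated}  # flag exposed to the monitor
        return obs, reward, done, info


class SafetyMonitor:
    """Wraps an environment and counts the safety violations it reports."""

    def __init__(self, env):
        self.env = env
        self.violations = 0

    def reset(self):
        self.violations = 0
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        if info.get("safety_violation", False):
            self.violations += 1
        return obs, reward, done, info


env = SafetyMonitor(TextEnv())
obs = env.reset()
for action in ["look around", "touch stove", "open fridge"]:
    obs, reward, done, info = env.step(action)
print(f"Safety violations this episode: {env.violations}")  # -> 1
```

Keeping the monitor separate from the environment, as sketched here, lets the same violation metrics be reused across many generated games.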
