

Wordplay: Reinforcement and Language Learning in Text-based Games

Adam Trischler · Angeliki Lazaridou · Yonatan Bisk · Wendy Tay · Nate Kushman · Marc-Alexandre Côté · Alessandro Sordoni · Daniel Ricks · Tom Zahavy · Hal Daumé III

Room 512 ABEF

Video games, via interactive learning environments like ALE [Bellemare et al., 2013], have been fundamental to the development of reinforcement learning algorithms that work on raw video inputs rather than featurized representations. Recent work has shown that text-based games may present a similar opportunity to develop RL algorithms for natural language inputs [Narasimhan et al., 2015, Haroush et al., 2018]. Drawing on insights from both the RL and NLP communities, this workshop will explore this opportunity, considering synergies between text-based and video games as learning environments as well as important differences and pitfalls.

Video games provide infinite worlds of interaction and grounding defined by simple, physics-like dynamics. While it is difficult, if not impossible, to simulate the full scope and social dynamics of linguistic interaction (see, e.g., work on user simulation and dialogue [Georgila et al., 2006, El Asri et al., 2016]), text-based games nevertheless present complex, interactive simulations that ground language in world and action semantics. Games like Zork [Infocom, 1980] rose to prominence in the age before advanced computer graphics. They use simple language to describe the state of the environment and to report the effects of player actions. Players interact with the environment through text commands that respect a predefined grammar, which, though simplistic, must be discovered in each game. Through sequential decision making, language understanding, and language generation, players work toward goals that may or may not be specified explicitly, and earn rewards (points) at completion or along the way.
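The interaction loop described above can be made concrete with a minimal sketch. The class below is a hypothetical toy game (not Zork or any real interactive-fiction engine): observations and actions are both text, the grammar is predefined but must be discovered through play, and a sparse reward arrives only at goal completion.

```python
# A toy, hypothetical text-game environment illustrating the loop
# described above: text observations, text commands, sparse reward.

class ToyTextGame:
    """Minimal single-room game in the spirit of parser-based interactive fiction."""

    def __init__(self):
        self.has_key = False

    def reset(self):
        # Initial observation: a textual description of the state.
        return "You are in a dusty room. A brass key lies on the floor."

    def step(self, command):
        """Parse a command against a tiny predefined grammar.

        Returns (observation, reward, done). Commands outside the
        grammar simply fail, mirroring how players must discover
        which phrasings the game accepts.
        """
        command = command.lower().strip()
        if command == "take key" and not self.has_key:
            self.has_key = True
            return "You pick up the brass key.", 0, False
        if command == "open door":
            if self.has_key:
                return "The door swings open. You win!", 1, True
            return "The door is locked.", 0, False
        return "Nothing happens.", 0, False


game = ToyTextGame()
obs = game.reset()
for cmd in ["open door", "take key", "open door"]:
    obs, reward, done = game.step(cmd)
```

Even this trivial game exhibits the core difficulties: the state ("the door is locked because you lack the key") is only partially observable through text, and the reward depends on a multi-step plan.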

Text-based games present a broad spectrum of challenges for learning algorithms. In addition to language understanding, successful play generally requires long-term memory and planning, exploration/experimentation, affordance extraction [Fulda et al., 2017], and common sense. Text games also highlight major open challenges for RL: the action space (text) is combinatorial and compositional, while game states are partially observable, since text is often ambiguous or underspecified. Furthermore, in text games the set of actions that affect the state is not known in advance but must be learned through experimentation, typically informed by prior world/linguistic knowledge.
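To illustrate why the action space is combinatorial, and how affordance knowledge can tame it, consider a small sketch with an illustrative (hypothetical) vocabulary: the flat command space grows as the product of verbs and objects, while prior knowledge about which verbs apply to which objects prunes it sharply.

```python
from itertools import product

# Illustrative vocabularies; real games expose far larger ones.
verbs = ["take", "open", "drop", "examine", "push"]
objects = ["key", "door", "lamp", "chest"]

# Flat command space: every verb paired with every object,
# so |V| * |O| = 5 * 4 = 20 candidate commands.
commands = [f"{v} {o}" for v, o in product(verbs, objects)]

# Affordance filtering: hypothetical prior knowledge about which
# objects each verb can sensibly act on prunes the candidate set.
affordances = {"take": {"key", "lamp"}, "open": {"door", "chest"}}
valid = [f"{v} {o}" for v, o in product(verbs, objects)
         if o in affordances.get(v, set())]
```

With two-word commands the blow-up is merely multiplicative; commands with prepositions or multiple arguments ("put key in chest") compound it further, which is why affordance extraction on the fly is listed among the open problems below.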

There has been a host of recent work towards solving text games [Narasimhan et al., 2015, Fulda et al., 2017, Kostka et al., 2017, Zhilin et al., 2017, Haroush et al., 2018]. Nevertheless, commercial games like Zork remain beyond the capabilities of existing approaches. We argue that addressing even a subset of the aforementioned challenges would represent important progress in machine learning. Agents that solve text-based games may further learn functional properties of language; however, it is unclear what limitations the constraints and simplifications of text games (e.g., on linguistic diversity) impose on agents trained to solve them.

This workshop will highlight research that investigates existing or novel RL techniques for text-based settings, what agents that solve text-based games (might) learn about language, and more generally whether text-based games provide a good testbed for research at the intersection of RL and NLP. The program will feature a collection of invited talks alongside contributed posters and spotlight talks, curated by a committee with broad coverage of the RL and NLP communities. Panel discussions will highlight perspectives of influential researchers from both fields and encourage open dialogue. We will also pose a text-based game challenge several months in advance of the workshop (a similar competition is held annually at the IEEE Conference on Computational Intelligence and Games). This optional component will enable participants to design, train, and test agents in a carefully constructed, interactive text environment. The best-performing agent(s) will be recognized and discussed at the workshop. In addition to the exchange of ideas and the initiation of collaboration, an expected outcome is that text-based games emerge more prominently as a benchmark task to bridge RL and NLP research.

Relevant topics to be addressed at the workshop include (but are not limited to):
- RL in compositional, combinatorial action spaces
- Open RL problems that are especially pernicious in text-based games, like (sub)goal identification and efficient experimentation
- Grounded language understanding
- Online language acquisition
- Affordance extraction (on the fly)
- Language generation and evaluation in goal-oriented settings
- Automatic or crowdsourcing methods for linguistic diversity in simulations
- Use of language to constrain or index RL policies [Andreas et al., 2017]
