

Poster in Workshop: Memory in Artificial and Real Intelligence (MemARI)

Characterizing Verbatim Short-Term Memory in Neural Language Models

Kristijan Armeni · Christopher J Honey · Tal Linzen

Keywords: [ short-term memory ] [ transformer ] [ language model ] [ LSTM ] [ surprisal ]


Abstract:

When a language model is trained to predict natural language sequences, its prediction at each moment depends on a representation of prior context. What kind of information about the prior context can language models retrieve? We tested whether language models could retrieve the exact words that occurred previously in a text. In our paradigm, language models (transformers and LSTMs) processed English text in which a list of nouns occurred twice. We operationalized memory retrieval as the reduction in surprisal from the first to the second list. We found that the transformers retrieved both the identity and ordering of nouns from the first list. Further, the transformers' retrieval was markedly enhanced when they were trained on a larger corpus and with greater model depth. Lastly, their ability to index prior tokens was dependent on learned attention patterns. In contrast, LSTMs exhibited less precise retrieval, which was limited to list-initial tokens and to short intervening texts. The LSTMs' retrieval was not sensitive to the order of nouns, and it improved when the list was semantically coherent. We conclude that large transformer-style language models implement something akin to a working memory system that can flexibly retrieve individual token representations across arbitrary delays; conversely, conventional LSTMs maintain a coarser semantic gist of prior tokens, weighted toward the earliest items.
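To make the surprisal-reduction measure concrete, below is a minimal sketch of how it could be computed with an off-the-shelf GPT-2 model from Hugging Face transformers. The model choice, noun list, carrier sentences, and helper functions are illustrative assumptions, not the authors' materials or code; the idea is simply that a lower mean surprisal on the repeated list than on its first occurrence is taken as evidence that the model retrieved the earlier tokens.

```python
# Minimal sketch of the surprisal-reduction measure described in the abstract,
# assuming a Hugging Face GPT-2 model. The noun list and filler sentence are
# illustrative, not the authors' stimuli, and the helper names are hypothetical.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(text):
    """Per-token surprisal in bits for every token after the first."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Position t predicts token t+1: align logits[:-1] with ids[1:].
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    nll = -log_probs[torch.arange(targets.size(0)), targets]
    return nll / math.log(2)  # nats -> bits

def find_spans(haystack, needle):
    """Start indices where `needle` occurs as a contiguous subsequence."""
    return [i for i in range(len(haystack) - len(needle) + 1)
            if haystack[i:i + len(needle)] == needle]

# A noun list embedded twice in the text, separated by intervening material.
nouns = "patience, notion, movie, detail, ease"
text = (f"Mary wrote down a list: {nouns}. "
        "After a short walk in the park, she read the list again: "
        f"{nouns}.")

surprisal = token_surprisals(text)
all_ids = tokenizer(text).input_ids
list_ids = tokenizer(" " + nouns).input_ids
first_start, second_start = find_spans(all_ids, list_ids)

def span_mean(start):
    # surprisal[i] scores the token at position i + 1 in all_ids
    return surprisal[start - 1:start - 1 + len(list_ids)].mean().item()

reduction = span_mean(first_start) - span_mean(second_start)
print(f"Repeat surprisal reduction: {reduction:.2f} bits per token")
```

A positive reduction indicates that the repeated nouns were easier to predict than on their first presentation; comparing this quantity across architectures, list orderings, and intervening-text lengths mirrors the kind of analysis the abstract describes.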
