An Efficient Memory-Augmented Transformer for Knowledge-Intensive NLP Tasks
Yuxiang Wu · Yu Zhao · Baotian Hu · Pasquale Minervini · Pontus Lars Erik Saito Stenetorp · Sebastian Riedel

Access to external knowledge is essential for many natural language processing tasks, such as question answering and dialogue. Existing methods often rely on a parametric model that stores knowledge in its parameters, or on a retrieval-augmented model that has access to an external knowledge source. Parametric and retrieval-augmented models have complementary strengths in terms of computational efficiency and predictive accuracy. To combine the strengths of both approaches, we propose the Efficient Memory-Augmented Transformer (EMAT) – it encodes external knowledge into a key-value memory and exploits fast maximum inner product search for memory querying. Experiments on various knowledge-intensive tasks, such as question answering and dialogue datasets, show that simply augmenting parametric models (T5-base) with our method produces more accurate results while retaining high throughput. Compared to retrieval-augmented models, EMAT runs substantially faster across the board and produces more accurate results on WoW and ELI5.
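To illustrate the key-value memory querying described above, below is a minimal, hypothetical sketch of looking up a key-value memory with exact maximum inner product search (MIPS) via brute force. The dimensions, array names, and the `query_memory` helper are assumptions for illustration only, not the authors' implementation; in practice the memory keys would be produced by an encoder over an external knowledge source, and an approximate MIPS index (e.g. FAISS) would typically replace the brute-force search for speed.

```python
import numpy as np

# Hypothetical sizes: num_entries memory slots, d-dimensional keys/values.
num_entries, d, top_k = 10_000, 128, 4

# Pre-encoded key-value memory (placeholder random vectors here; in a real
# system these would encode entries from an external knowledge source).
keys = np.random.randn(num_entries, d).astype(np.float32)
values = np.random.randn(num_entries, d).astype(np.float32)

def query_memory(query: np.ndarray, k: int = top_k) -> np.ndarray:
    """Return the values of the k memory entries whose keys have the
    highest inner product with the query (exact MIPS, brute force)."""
    scores = keys @ query                  # (num_entries,) inner products
    top = np.argpartition(-scores, k)[:k]  # indices of the k largest scores
    top = top[np.argsort(-scores[top])]    # order those k by descending score
    return values[top]                     # (k, d) retrieved value vectors

# Example: a single query vector, standing in for an encoder output.
q = np.random.randn(d).astype(np.float32)
retrieved = query_memory(q)
print(retrieved.shape)  # (4, 128) retrieved values to condition the model on
```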

Author Information

Yuxiang Wu (University College London)
Yu Zhao (Harbin Institute of Technology, Shenzhen)
Baotian Hu (Harbin Institute of Technology, Shenzhen)
Pasquale Minervini (University College London)
Pontus Lars Erik Saito Stenetorp (University of Tokyo)
Sebastian Riedel (University College London)
