
Keynote Talk in Workshop: Efficient Natural Language and Speech Processing (Models, Training, and Inference)

Summarization in Quantized Transformer Spaces

Mirella Lapata


Abstract:

Deep generative models with latent variables have become a major focus of NLP research over the past several years. These models have been used both for generating text and for learning latent representations of text for downstream tasks. While much previous work uses continuous latent variables, discrete variables are attractive because they are more interpretable and typically more space efficient. In this talk we consider learning discrete latent variable models with Quantized Variational Autoencoders, and show how these can be ported to the task of opinion summarization. We provide a clustering interpretation of the quantized space and a novel extraction algorithm to discover popular opinions among hundreds of reviews, a significant step towards opinion summarization of practical scope. We further demonstrate how this approach enables controllable summarization without further training, by utilizing properties of the quantized space to extract aspect-specific summaries.
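
For readers unfamiliar with quantized latent spaces, the sketch below illustrates the basic vector-quantization step behind Quantized (VQ-) Variational Autoencoders and the clustering reading of the codebook mentioned above. This is a minimal illustration under assumed names and shapes (`codebook`, `quantize`, dimensions `K` and `d` are all hypothetical), not the speaker's actual model or extraction algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

K, d = 8, 4                         # codebook size, latent dim (assumed)
codebook = rng.normal(size=(K, d))  # stands in for a learned codebook

def quantize(z):
    """Map each continuous latent vector to its nearest codebook entry."""
    # squared Euclidean distance from every z to every code
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = dists.argmin(axis=1)    # discrete code assignment
    return codes, codebook[codes]   # indices and quantized vectors

# Stand-in encoder outputs for sentences drawn from many reviews
z = rng.normal(size=(100, d))
codes, z_q = quantize(z)

# Clustering view: each code acts as a cluster centroid. Codes that many
# review sentences map to plausibly correspond to widely shared opinions,
# so ranking codes by assignment count is one (hypothetical) way to
# surface popular opinions for extraction.
counts = np.bincount(codes, minlength=K)
print("codes ranked by popularity:", counts.argsort()[::-1])
```

In an actual model the codebook and encoder are trained jointly; the toy ranking at the end only gestures at why discrete assignments make popularity easy to measure, which continuous latents do not.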