

Poster

Context Selection for Embedding Models

Liping Liu · Francisco Ruiz · Susan Athey · David Blei

Pacific Ballroom #20

Keywords: [ Unsupervised Learning ] [ Embedding Approaches ]


Abstract:

Word embeddings are an effective tool for analyzing language. They have recently been extended to model other types of data beyond text, such as items in recommendation systems. Embedding models consider the probability of a target observation (a word or an item) conditioned on the elements in the context (the other words or items). In this paper, we show that conditioning on all the elements in the context is not optimal. Instead, we model the probability of the target conditioned on a learned subset of the context elements, and we use amortized variational inference to choose this subset automatically. Compared to standard embedding models, this method improves predictions and the quality of the embeddings.
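The core idea can be illustrated with a minimal sketch: an amortized inference network maps each context element to an inclusion probability, and the target is predicted from the context weighted by those probabilities. This is a simplified illustration, not the paper's actual model; all names (`emb`, `W_inf`, `selection_probs`, `target_logprob`) and the toy one-layer inference network are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
V, D = 50, 8  # illustrative vocabulary size and embedding dimension

# Shared embeddings for targets and context elements (a simplification).
emb = rng.normal(scale=0.1, size=(V, D))
# Toy amortized inference network: a single weight vector that scores
# each context element's embedding. The real model is more expressive.
W_inf = rng.normal(scale=0.1, size=(D,))

def selection_probs(context_ids):
    # Amortized inference: map each context element to a Bernoulli
    # inclusion probability via a sigmoid over a learned score.
    logits = emb[context_ids] @ W_inf
    return 1.0 / (1.0 + np.exp(-logits))

def target_logprob(target_id, context_ids):
    # Expected context vector under the selection distribution:
    # each element contributes in proportion to its inclusion probability,
    # approximating conditioning on the selected subset only.
    p = selection_probs(context_ids)
    ctx = (p[:, None] * emb[context_ids]).sum(axis=0) / max(p.sum(), 1e-8)
    scores = emb @ ctx                 # score every vocabulary item
    scores -= scores.max()             # numerical stability for softmax
    log_probs = scores - np.log(np.exp(scores).sum())
    return log_probs[target_id]

lp = target_logprob(3, np.array([1, 2, 5, 7]))
```

In training, the inclusion probabilities and embeddings would be learned jointly by maximizing a variational objective; here they are random, so the sketch only shows the forward computation.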
