Poster

Distilled Wasserstein Learning for Word Embedding and Topic Modeling

Hongteng Xu · Wenlin Wang · Wei Liu · Lawrence Carin

Room 517 AB #103

Keywords: [ Natural Language Processing ] [ Text Analysis ] [ Applications ]


Abstract:

We propose a novel Wasserstein method with a distillation mechanism that jointly learns word embeddings and topics. The method builds on the observation that the Euclidean distance between word embeddings can serve as the ground distance in a Wasserstein topic model. The word distributions of topics, their optimal transport to the word distributions of documents, and the embeddings of words are learned in a unified framework. When learning the topic model, we leverage a distilled ground-distance matrix to update the topic distributions and smoothly calculate the corresponding optimal transports. This strategy provides robust guidance for updating the word embeddings and improves the algorithm's convergence. As an application, we focus on patient admission records: the proposed method embeds the codes of diseases and procedures and learns the topics of admissions, achieving superior performance on clinically-meaningful disease network construction, mortality prediction as a function of admission codes, and procedure recommendation.
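The abstract's central computation, using Euclidean distances between word embeddings as the ground cost of an optimal-transport problem between a topic's word distribution and a document's, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `ground_distance` and `sinkhorn` helpers and the softening exponent `tau` (standing in for the paper's distillation of the ground-distance matrix) are assumptions for demonstration only; the paper's actual distillation scheme and joint optimization differ.

```python
# Minimal sketch: embedding-based ground cost + entropic OT (Sinkhorn).
# Names and the distillation step are illustrative assumptions, not the
# authors' code.
import numpy as np

def ground_distance(embeddings):
    """Pairwise Euclidean distances between rows of a (V x d) embedding matrix."""
    sq = np.sum(embeddings ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * embeddings @ embeddings.T
    return np.sqrt(np.maximum(d2, 0.0))  # clamp tiny negatives from round-off

def sinkhorn(p, q, M, reg=0.1, n_iter=200):
    """Entropic-regularized optimal-transport plan between distributions p and q
    under ground cost M, via plain Sinkhorn iterations."""
    K = np.exp(-M / reg)
    u = np.ones_like(p)
    for _ in range(n_iter):
        v = q / (K.T @ u)
        u = p / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan with marginals (p, q)

# Toy usage: a 5-word vocabulary with 3-dimensional embeddings.
rng = np.random.default_rng(0)
E = rng.normal(size=(5, 3))                    # word embeddings
M = ground_distance(E)
tau = 0.5                                      # hypothetical distillation strength
M_soft = M ** tau                              # softened ("distilled") ground cost
topic = np.array([0.4, 0.3, 0.1, 0.1, 0.1])    # word distribution of a topic
doc = np.array([0.1, 0.1, 0.2, 0.3, 0.3])      # word distribution of a document
T = sinkhorn(topic, doc, M_soft)
print("row marginals:", T.sum(axis=1))         # ≈ topic
print("transport cost:", np.sum(T * M))
```

As a general point, entropic regularization makes the transport plan a smooth function of the ground distances, which is what allows gradients to flow back to the embeddings when the ground cost and the topic model are learned jointly.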
