Most topic modeling approaches rely on the bag-of-words assumption, under which words within a document are treated as conditionally independent. As a result, both the generative story and the topic formulation ignore the semantic dependencies among words, which are important for improving semantic comprehension and model interpretability. To this end, in this paper, we revisit the task of topic modeling by transforming each document into a directed graph whose nodes are words and whose edges are word dependencies, and develop a novel approach, namely the Graph Neural Topic Model (GNTM). Specifically, in GNTM, a well-defined probabilistic generative story models both the graph structure and the word sets, with topics represented as multinomial distributions over the vocabulary and the set of word dependency edges. Meanwhile, a Neural Variational Inference (NVI) approach is proposed to learn our model, using graph neural networks to encode the document graphs. Besides, we theoretically demonstrate that Latent Dirichlet Allocation (LDA) can be derived from GNTM as a special case with a similar objective function. Finally, extensive experiments on four benchmark datasets clearly demonstrate the effectiveness and interpretability of GNTM compared with state-of-the-art baselines.
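The document-to-graph transformation described above can be illustrated with a minimal sketch. Note this is an assumption for illustration only: here edges are drawn between words that co-occur within a small sliding window, whereas the paper's actual edge definition (its word dependency extraction) may differ; the function name `build_word_graph` and the `window` parameter are hypothetical.

```python
from collections import defaultdict

def build_word_graph(tokens, window=2):
    """Turn a tokenized document into a directed word graph.

    An edge u -> v is added whenever v appears within `window` positions
    after u; edge weights count such co-occurrences. Illustrative only --
    GNTM's actual dependency edges may be defined differently.
    """
    edges = defaultdict(int)
    for i, u in enumerate(tokens):
        for v in tokens[i + 1 : i + 1 + window]:
            edges[(u, v)] += 1
    nodes = sorted(set(tokens))
    return nodes, dict(edges)

# Tiny usage example on a toy "document".
nodes, edges = build_word_graph(
    ["graph", "neural", "topic", "model", "graph"], window=2
)
```

In this sketch, the toy document yields a node per unique word (e.g. "graph", "model", "neural", "topic") and directed, weighted edges such as ("graph", "neural"); a GNN encoder would then operate on this graph rather than on an unordered word set.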
Author Information
Dazhong Shen (University of Science and Technology of China)
Chuan Qin (University of Science and Technology of China)
Chao Wang (University of Science and Technology of China)
Zheng Dong (University of British Columbia)
Hengshu Zhu (Baidu)
Hui Xiong
More from the Same Authors
- 2022 Spotlight: Lightning Talks 2B-3 »
  Jie-Jing Shao · Jiangmeng Li · Jiashuo Liu · Zongbo Han · Tianyang Hu · Jiayun Wu · Wenwen Qiang · Jun WANG · Zhipeng Liang · Lan-Zhe Guo · Wenjia Wang · Yanan Zhang · Xiao-wen Yang · Fan Yang · Bo Li · Wenyi Mo · Zhenguo Li · Liu Liu · Peng Cui · Yu-Feng Li · Changwen Zheng · Lanqing Li · Yatao Bian · Bing Su · Hui Xiong · Peilin Zhao · Bingzhe Wu · Changqing Zhang · Jianhua Yao
- 2022 Spotlight: Lightning Talks 2B-2 »
  Chenjian Gao · Rui Ding · Lingzhi LI · Fan Yang · Xingting Yao · Jianxin Li · Bing Su · Zhen Shen · Tongda Xu · Shuai Zhang · Ji-Rong Wen · Lin Guo · Fanrong Li · Kehua Guo · Zhongshu Wang · Zhi Chen · Xiangyuan Zhu · Zitao Mo · Dailan He · Hui Xiong · Yan Wang · Zheng Wu · Wenbing Tao · Jian Cheng · Haoyi Zhou · Li Shen · Ping Tan · Liwei Wang · Hongwei Qin
- 2022 Spotlight: AutoST: Towards the Universal Modeling of Spatio-temporal Sequences »
  Jianxin Li · Shuai Zhang · Hui Xiong · Haoyi Zhou
- 2022 Spotlight: MetaMask: Revisiting Dimensional Confounder for Self-Supervised Learning »
  Jiangmeng Li · Wenwen Qiang · Yanan Zhang · Wenyi Mo · Changwen Zheng · Bing Su · Hui Xiong
- 2021 Poster: Discerning Decision-Making Process of Deep Neural Networks with Hierarchical Voting Transformation »
  Ying Sun · Hengshu Zhu · Chuan Qin · Fuzhen Zhuang · Qing He · Hui Xiong