Multi-modal knowledge graph embeddings (KGE) have attracted increasing attention for learning representations of entities and relations in link prediction tasks. Unlike previous uni-modal KGE approaches, multi-modal KGE can leverage expressive knowledge from a wealth of modalities (image, text, etc.), leading to more comprehensive representations of real-world entities. However, the critical challenge is that the multi-modal embedding spaces are usually heterogeneous, so direct fusion destroys the inherent spatial structure of the individual modal embeddings. To overcome this challenge, we revisit multi-modal KGE from a distributional alignment perspective and propose optimal transport knowledge graph embeddings (OTKGE). Specifically, we model the multi-modal fusion procedure as a transport plan that moves the different modal embeddings into a unified space by minimizing the Wasserstein distance between the multi-modal distributions. Theoretically, we show that by minimizing the Wasserstein distance between the individual modalities and the unified embedding space, the final results are guaranteed to maintain consistency and comprehensiveness. Moreover, experimental results on well-established multi-modal knowledge graph completion benchmarks show that OTKGE achieves state-of-the-art performance.
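To make the fusion idea concrete, the following is a minimal, self-contained sketch (not the authors' OTKGE implementation) of how an entropic-regularized optimal transport plan can move one modality's embeddings into another modality's space before fusing them. The Sinkhorn solver, uniform marginals, squared-Euclidean cost, barycentric projection, and all function names and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the authors' OTKGE code): fuse two modal embedding sets by
# moving one toward the other with an entropic-OT (Sinkhorn) transport plan.
# Function names, marginals, cost, and hyperparameters are illustrative assumptions.
import numpy as np

def sinkhorn_plan(cost, reg=0.05, n_iter=200):
    """Entropic-regularized OT plan between two uniform discrete distributions.

    cost: (n, m) pairwise cost matrix; returns an (n, m) transport plan.
    """
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform marginals
    K = np.exp(-cost / (reg * cost.max()))           # Gibbs kernel (cost rescaled for stability)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):                          # Sinkhorn iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]               # plan = diag(u) K diag(v)

def fuse_modalities(struct_emb, visual_emb, alpha=0.5):
    """Barycentrically project visual embeddings into the structural space,
    then take a convex combination as the unified entity embeddings."""
    # Squared-Euclidean cost between the two embedding clouds.
    cost = ((struct_emb[:, None, :] - visual_emb[None, :, :]) ** 2).sum(-1)
    plan = sinkhorn_plan(cost)                                      # (n_struct, n_visual)
    mapped = (plan @ visual_emb) / plan.sum(axis=1, keepdims=True)  # transported visual embeddings
    return alpha * struct_emb + (1 - alpha) * mapped

# Toy usage: 100 entities with 64-dim structural and visual embeddings.
rng = np.random.default_rng(0)
unified = fuse_modalities(rng.normal(size=(100, 64)), rng.normal(size=(100, 64)))
print(unified.shape)  # (100, 64)
```

In a full pipeline one would presumably learn the embeddings jointly with the KGE link-prediction loss and use a GPU-backed OT solver (e.g., the POT library) rather than this plain NumPy loop; the sketch only illustrates the transport-then-fuse principle described in the abstract.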
Author Information
Zongsheng Cao (State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences)
Qianqian Xu (Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences)
Zhiyong Yang (Chinese Academy of Sciences)
Yuan He (Alibaba Group)
Xiaochun Cao (Sun Yat-sen University)
Qingming Huang (University of Chinese Academy of Sciences)
More from the Same Authors
- 2022 Poster: OPEN: Orthogonal Propagation with Ego-Network Modeling
  Liang Yang · Lina Kang · Qiuliang Zhang · Mengzhe Li · Bingxin Niu · Dongxiao He · Zhen Wang · Chuan Wang · Xiaochun Cao · Yuanfang Guo
- 2022 Poster: Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm
  HuiYang Shao · Qianqian Xu · Zhiyong Yang · Shilong Bao · Qingming Huang
- 2022 Poster: Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability
  Peisong Wen · Qianqian Xu · Zhiyong Yang · Yuan He · Qingming Huang
- 2022 Spotlight: OpenAUC: Towards AUC-Oriented Open-Set Recognition
  Zitai Wang · Qianqian Xu · Zhiyong Yang · Yuan He · Xiaochun Cao · Qingming Huang
- 2022 Poster: OpenAUC: Towards AUC-Oriented Open-Set Recognition
  Zitai Wang · Qianqian Xu · Zhiyong Yang · Yuan He · Xiaochun Cao · Qingming Huang
- 2022 Poster: Rethinking Image Restoration for Object Detection
  Shangquan Sun · Wenqi Ren · Tao Wang · Xiaochun Cao
- 2022 Poster: The Minority Matters: A Diversity-Promoting Collaborative Metric Learning Algorithm
  Shilong Bao · Qianqian Xu · Zhiyong Yang · Yuan He · Xiaochun Cao · Qingming Huang
- 2021 Poster: When False Positive is Intolerant: End-to-End Optimization with Low FPR for Multipartite Ranking
  Peisong Wen · Qianqian Xu · Zhiyong Yang · Yuan He · Qingming Huang
- 2020 Poster: Heuristic Domain Adaptation
  Shuhao Cui · Xuan Jin · Shuhui Wang · Yuan He · Qingming Huang
- 2019 Poster: Generalized Block-Diagonal Structure Pursuit: Learning Soft Latent Task Assignment against Negative Transfer
  Zhiyong Yang · Qianqian Xu · Yangbangyan Jiang · Xiaochun Cao · Qingming Huang
- 2019 Poster: DM2C: Deep Mixed-Modal Clustering
  Yangbangyan Jiang · Qianqian Xu · Zhiyong Yang · Xiaochun Cao · Qingming Huang
- 2019 Spotlight: DM2C: Deep Mixed-Modal Clustering
  Yangbangyan Jiang · Qianqian Xu · Zhiyong Yang · Xiaochun Cao · Qingming Huang
- 2019 Poster: iSplit LBI: Individualized Partial Ranking with Ties via Split LBI
  Qianqian Xu · Xinwei Sun · Zhiyong Yang · Xiaochun Cao · Qingming Huang · Yuan Yao