Poster
in
Workshop: New Frontiers in Graph Learning

Variational Graph Auto-Encoders for Heterogeneous Information Network

Abhishek Dalvi · Ayan Acharya · Jing Gao · Vasant Honavar

Keywords: [ Attention Models ] [ Variational Inference ] [ Graph Neural Networks ] [ Heterogeneous Graphs ]


Abstract:

Heterogeneous networks, where nodes and their attributes denote real-world entities and links encode relationships between entities, are ubiquitous in many applications, including social media, recommender systems, biological networks, molecular structures, and brain networks. Such applications typically involve node classification or link prediction based on the attributes of the nodes and their connections in the graph. Graph neural networks naturally model the topological structure of homogeneous graphs by learning low-dimensional embeddings of nodes. However, heterogeneous graphs with various types of nodes and links pose significant challenges in learning efficient low-dimensional embeddings. Existing methods for learning such representations rely on computationally expensive sampling of meta-paths and heuristic adaptations of models developed for homogeneous networks. The resulting computational inefficiency and limited generalization performance have hindered the adoption of such methods in real-world problems. To mitigate these issues, we present three variants of graph variational autoencoder models for heterogeneous networks that avoid the computationally expensive sampling of meta-paths and maintain uncertainty estimates of node embeddings that improve generalization. We report results of link prediction experiments on three real-world benchmark heterogeneous network data sets, which show that the proposed methods significantly outperform state-of-the-art baselines.
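For context, the sketch below shows a minimal, standard variational graph auto-encoder of the kind the abstract builds on: a GCN encoder that outputs a Gaussian mean and log-variance per node (the source of the uncertainty estimates), a reparameterized sample, and an inner-product decoder for link prediction. This is not the authors' heterogeneous-network model; all layer sizes, class names, and the toy data are illustrative assumptions.

```python
# Minimal VGAE sketch (homogeneous building block only; NOT the paper's
# heterogeneous variants). Names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph convolution: A_hat @ X @ W with a normalized adjacency A_hat."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat, x):
        return a_hat @ self.linear(x)


class VGAE(nn.Module):
    def __init__(self, in_dim, hidden_dim=32, latent_dim=16):
        super().__init__()
        self.base = GCNLayer(in_dim, hidden_dim)
        self.gcn_mu = GCNLayer(hidden_dim, latent_dim)
        self.gcn_logvar = GCNLayer(hidden_dim, latent_dim)

    def encode(self, a_hat, x):
        h = F.relu(self.base(a_hat, x))
        return self.gcn_mu(a_hat, h), self.gcn_logvar(a_hat, h)

    def reparameterize(self, mu, logvar):
        # z ~ N(mu, sigma^2); the variance is the per-node uncertainty estimate.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def decode(self, z):
        # Inner-product decoder: P(edge i-j) = sigmoid(z_i . z_j).
        return torch.sigmoid(z @ z.t())

    def forward(self, a_hat, x):
        mu, logvar = self.encode(a_hat, x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar


def normalize_adjacency(adj):
    # A_hat = D^{-1/2} (A + I) D^{-1/2}
    adj = adj + torch.eye(adj.size(0))
    d_inv_sqrt = adj.sum(1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)


if __name__ == "__main__":
    # Toy symmetric graph with random node features.
    n, f = 50, 8
    adj = (torch.rand(n, n) < 0.1).float()
    adj = ((adj + adj.t()) > 0).float()
    x = torch.randn(n, f)
    a_hat = normalize_adjacency(adj)

    model = VGAE(f)
    recon, mu, logvar = model(a_hat, x)
    # ELBO objective: adjacency reconstruction loss plus KL regularizer.
    recon_loss = F.binary_cross_entropy(recon, adj)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    print("loss:", (recon_loss + kl).item())
```

A heterogeneous extension, as the abstract suggests, would replace the single shared GCN with type-aware transformations over the different node and relation types rather than relying on sampled meta-paths.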
