

Poster in Workshop: New Frontiers in Graph Learning

Invertible Neural Networks for Graph Prediction

Chen Xu · Xiuyuan Cheng · Yao Xie

Keywords: [ Normalizing Flow ] [ Graph Neural Network ] [ Invertible Neural Networks ]


Abstract: In this work, we address conditional generation using deep invertible neural networks: the problem of inferring the most probable input $X$ given an outcome $Y$. We call our method \textit{invertible graph neural network} (iGNN), since our primary focus is generating node features on graph data. A notable feature of the proposed method is that, during training, we revise the loss objective typically used in normalizing flows by adding a Wasserstein-2 regularization term to facilitate optimization. Algorithmically, we adopt an end-to-end training approach, since our objective is to address prediction (in the forward pass) and generation (in the backward pass) at once through a single model. Theoretically, we study the expressiveness of iGNN in learning the mapping by utilizing the Fokker-Planck equation of an Ornstein-Uhlenbeck process. Experimentally, we verify the performance of iGNN on both simulated and real-world datasets, and we demonstrate through extensive numerical experiments that iGNN shows clear improvement over competing conditional-generation benchmarks on high-dimensional and/or non-convex data.
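
The page itself contains no code; as a rough illustration of the kind of objective the abstract describes (a standard normalizing-flow negative log-likelihood augmented with a Wasserstein-2-style transport penalty), a minimal PyTorch sketch follows. Everything in it (the `AffineCouplingFlow` block, the `w2_weight` parameter, the unconditional toy setup) is a hypothetical illustration and not the authors' iGNN; in particular, the conditioning on $Y$ and the graph structure are omitted.

```python
# Hypothetical sketch (not the authors' code): flow NLL + W2-style penalty.
import torch
import torch.nn as nn

class AffineCouplingFlow(nn.Module):
    """A single invertible affine coupling block (illustrative only)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.d, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.d)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                    # keep scales bounded
        z2 = x2 * torch.exp(s) + t           # invertible affine transform
        log_det = s.sum(dim=-1)              # log |det Jacobian|
        return torch.cat([x1, z2], dim=-1), log_det

def flow_loss(model, x, w2_weight=0.1):
    """Negative log-likelihood under a standard-normal base distribution,
    plus a W2-style penalty ||f(x) - x||^2 on the transport displacement."""
    z, log_det = model(x)
    nll = 0.5 * (z ** 2).sum(dim=-1) - log_det   # up to an additive constant
    w2 = ((z - x) ** 2).sum(dim=-1)              # squared displacement
    return (nll + w2_weight * w2).mean()

# Usage: one gradient step on toy data.
torch.manual_seed(0)
model = AffineCouplingFlow(dim=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 4)
loss = flow_loss(model, x)
loss.backward()
opt.step()
```

The displacement penalty encourages the learned map to move mass as little as possible, which is one plausible reading of how a Wasserstein-2 regularizer could ease flow training; the paper itself should be consulted for the exact formulation.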
