

Poster

Modeling Transitivity and Cyclicity in Directed Graphs via Binary Code Box Embeddings

Dongxu Zhang · Michael Boratko · Cameron Musco · Andrew McCallum

Hall J (level 1) #306

Keywords: [ cyclic graphs ] [ directed graphs ] [ geometric representation learning ] [ Graph Representation Learning ] [ transitivity ]


Abstract:

Modeling directed graphs with differentiable representations is a fundamental requirement for performing machine learning on graph-structured data. Geometric embedding models (e.g., hyperbolic, cone, and box embeddings) excel at this task, exhibiting useful inductive biases for directed graphs. However, modeling directed graphs that contain both cycles and some element of transitivity, two properties common in real-world settings, is challenging. Box embeddings, which can be thought of as representing the graph as an intersection over some learned super-graphs, have a natural inductive bias toward modeling transitivity, but (as we prove) cannot model cycles. To address this, we propose binary code box embeddings, where a learned binary code selects a subset of graphs for intersection. We explore several variants, including global binary codes (amounting to a union over intersections) and per-vertex binary codes (allowing greater flexibility), as well as methods of regularization. Theoretical and empirical results show that the proposed models not only preserve a useful inductive bias of transitivity but also have sufficient representational capacity to model arbitrary graphs, including graphs with cycles.
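
To make the idea concrete, the sketch below illustrates one possible way a per-vertex binary code could gate a set of box-containment scores, so that an edge must hold only in the "selected" box spaces. The class name, the soft-volume containment score, and the gating formula are illustrative assumptions for exposition; they are not the paper's exact formulation or implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinaryCodeBoxEmbedding(nn.Module):
    """Illustrative sketch (not the authors' implementation): each vertex owns
    one box per latent space k = 1..K, plus a per-vertex binary code that
    selects which spaces must agree for an edge to hold."""

    def __init__(self, num_vertices: int, dim: int, num_spaces: int):
        super().__init__()
        # Box corners per vertex and per space: a min corner and a positive extent.
        self.box_min = nn.Parameter(torch.rand(num_vertices, num_spaces, dim))
        self.box_delta = nn.Parameter(torch.rand(num_vertices, num_spaces, dim))
        # Per-vertex code logits; a sigmoid gives soft membership in each space.
        self.code_logits = nn.Parameter(torch.zeros(num_vertices, num_spaces))

    def containment_prob(self, u: int, v: int) -> torch.Tensor:
        """Soft probability that box(v) is contained in box(u), per space."""
        u_min = self.box_min[u]
        u_max = u_min + F.softplus(self.box_delta[u])
        v_min = self.box_min[v]
        v_max = v_min + F.softplus(self.box_delta[v])
        # Soft volume of the intersection divided by the soft volume of v's box.
        inter_min = torch.maximum(u_min, v_min)
        inter_max = torch.minimum(u_max, v_max)
        inter_vol = F.softplus(inter_max - inter_min).prod(dim=-1)
        v_vol = F.softplus(v_max - v_min).prod(dim=-1)
        return inter_vol / (v_vol + 1e-9)  # shape: (num_spaces,)

    def edge_score(self, u: int, v: int) -> torch.Tensor:
        """Edge u -> v is required to hold only in spaces selected by v's code."""
        code = torch.sigmoid(self.code_logits[v])  # soft binary selection
        per_space = self.containment_prob(u, v)
        # A deselected space contributes 1 (no constraint); selected spaces
        # require containment, so the product acts as an intersection over
        # the chosen subset of spaces.
        return ((1.0 - code) + code * per_space).prod(dim=-1)


# Hypothetical usage: a differentiable score for edge 3 -> 7 that can be
# trained with a standard binary cross-entropy objective on observed edges.
model = BinaryCodeBoxEmbedding(num_vertices=100, dim=8, num_spaces=4)
print(model.edge_score(3, 7))
```

Because a deselected space imposes no constraint, two vertices can each be contained by the other in different spaces, which is one intuition for how such a construction can represent cycles while each individual space retains the transitive bias of box containment.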
