
Relational Representation Learning
Aditya Grover · Paroma Varma · Frederic Sala · Christopher Ré · Jennifer Neville · Stefano Ermon · Steven Holtzen

Sat Dec 08 05:00 AM -- 03:30 PM (PST) @ Room 517 A
Event URL: https://r2learning.github.io/

Relational reasoning, i.e., learning and inference with relational data, is key to understanding how objects interact with each other and give rise to complex phenomena in the everyday world. Well-known applications include knowledge base completion and social network analysis. Although many relational datasets are available, integrating them directly into modern machine learning algorithms and systems that rely on continuous, gradient-based optimization and make strong i.i.d. assumptions is challenging. Relational representation learning has the potential to overcome these obstacles: it enables the fusion of recent advancements like deep learning and relational reasoning to learn from high-dimensional data. The success of such methods can facilitate novel applications of relational reasoning in areas like scene understanding, visual question-answering, reasoning over chemical and biological domains, program synthesis and analysis, and decision-making in multi-agent systems.

How should we rethink classical representation learning theory for relational representations? Classical approaches based on dimensionality reduction techniques such as Isomap and spectral decompositions still serve as strong baselines, and are slowly paving the way for modern methods in relational representation learning based on random walks over graphs, message passing in neural networks, and group-invariant deep architectures, among many others. How can systems be designed and potentially deployed for large-scale representation learning? What are promising avenues, beyond traditional applications like knowledge base and social network analysis, that can benefit from relational representation learning?
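As a concrete illustration of the spectral-decomposition baselines mentioned above, the sketch below embeds a toy graph using eigenvectors of its Laplacian; the graph and dimensions are invented for illustration, not drawn from any particular paper:

```python
import numpy as np

# Toy undirected graph: two triangles (nodes 0-2 and 3-5) joined by one edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Unnormalized graph Laplacian L = D - A.
L = np.diag(A.sum(axis=1)) - A

# Eigenvectors with the smallest nonzero eigenvalues give a low-dimensional
# spectral embedding that preserves the graph's cluster structure.
eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
embedding = eigvecs[:, 1:3]           # skip the constant eigenvector

# Along the Fiedler vector, nodes in the same triangle land close together.
fiedler = eigvecs[:, 1]
same_cluster = abs(fiedler[0] - fiedler[1]) < abs(fiedler[0] - fiedler[3])
print(same_cluster)
```

Random-walk and message-passing methods can be viewed as learned, scalable alternatives to this kind of fixed spectral construction.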

This workshop aims to bring together researchers from both academia and industry interested in addressing various aspects of representation learning for relational reasoning. Topics include, but are not limited to:

* Algorithmic approaches. E.g., probabilistic generative models, message-passing neural networks, embedding methods, dimensionality reduction techniques, and group-invariant architectures for relational data
* Theoretical aspects. E.g., when and why do learned representations aid relational reasoning? How does the non-i.i.d. nature of relational data conflict with our current understanding of representation learning?
* Optimization and scalability challenges due to the inherent discreteness and curse of dimensionality of relational datasets
* Evaluation of learned relational representations
* Security and privacy challenges
* Domain-specific applications
* Any other topic of interest
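To make the first topic concrete, a single message-passing step of the kind used in graph neural networks can be sketched in a few lines; the toy graph, features, and weight matrix below are illustrative assumptions, not part of any workshop submission:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes on a path, each with a 3-dimensional feature vector.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))  # would be learned by gradient descent in practice

# One message-passing step: each node averages its neighbors' features
# (including its own, via a self-loop), then applies a shared linear map + ReLU.
A_hat = A + np.eye(4)                         # add self-loops
deg_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)
H = np.maximum(deg_inv * (A_hat @ X) @ W, 0.0)
print(H.shape)  # (4, 2): a new 2-dimensional representation per node
```

Stacking such layers lets information propagate along longer paths, which is where the optimization and scalability challenges listed above arise for large relational datasets.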

Sat 5:45 a.m. - 6:00 a.m.
Contributed Talk 1 (Talk)

Adversarial training has become the de facto standard for generative modeling. While adversarial approaches have shown remarkable success in learning a distribution that faithfully recovers a reference distribution in its entirety, they are not applicable when one wishes the generated distribution to recover some —but not all— aspects of it. For example, one might be interested in modeling purely relational or topological aspects (such as cluster or manifold structure) while ignoring or constraining absolute characteristics (e.g., global orientation in Euclidean spaces). Furthermore, such absolute aspects are not available if the data is provided in an intrinsically relational form, such as a weighted graph. In this work, we propose an approach to learn generative models across such incomparable spaces that relies on the Gromov-Wasserstein distance, a notion of discrepancy that compares distributions relationally rather than absolutely. We show how the resulting framework can be used to learn distributions across spaces of different dimensionality or even different data types.

Sat 6:00 a.m. - 6:30 a.m.
Invited Talk 1 (Talk)
Marina Meila
Sat 6:30 a.m. - 6:45 a.m.
Contributed Talk 2 (Talk)
Lingfei Wu
Sat 6:45 a.m. - 7:15 a.m.
Invited Talk 2 (Talk)
Timothy Lillicrap
Sat 7:15 a.m. - 7:30 a.m.
Spotlights (Talks)
Guangneng Hu, Ke Li, Aviral Kumar, Vu Tran, Samuel Fadel, Rita Kuznetsova, Bong-Nam Kang, Behrouz Haji Soleimani, Jinwon An, Nathan de Lara, Anjishnu Kumar, Tillman Weyde, Melanie Weber, Kristen Altenburger, Saeed Amizadeh, Xiaoran (Sean) Xu, Yatin Nandwani, Yang Guo, Maria Pacheco, Liam Fedus, Guillaume Jaume, Yuka Yoneda, Yunpu Ma, Yunsheng Bai, Berk Kapicioglu, Maximilian Nickel, Fragkiskos Malliaros, Beier Zhu, Aleksandar Bojchevski, Joshua Joseph, Gemma Roig, Esma Balkir, Xander Steenbrugge
Sat 8:00 a.m. - 8:30 a.m.
Invited Talk 3 (Talk)
Joan Bruna
Sat 8:30 a.m. - 8:45 a.m.
Contributed Talk 3 (Talk)
Yunsheng Bai
Sat 8:45 a.m. - 9:15 a.m.
Invited Talk 4 (Talk)
Maximilian Nickel
Sat 11:00 a.m. - 11:30 a.m.
Invited Talk 5 (Talk)
Lise Getoor
Sat 11:30 a.m. - 11:45 a.m.
Contributed Talk 4 (Talk)
Róbert Csordás
Sat 11:45 a.m. - 12:00 p.m.
Spotlights 2 (Talks)
Mausam, Ankit Anand, Parag Singla, Tarik Koc, Tim Klinger, Habibeh Naderi, Sungwon Lyu, Saeed Amizadeh, Kshitij Dwivedi, Songpeng Zu, Wei Feng, Balaraman Ravindran, Edouard Pineau, Abdulkadir Celikkanat, Deepak Venugopal
Sat 12:30 p.m. - 1:00 p.m.
Invited Talk 6 (Talk)
Pedro Domingos
Sat 1:00 p.m. - 1:45 p.m.
Panel (Discussion Panel)
Paroma Varma, Aditya Grover, Will Hamilton, Jessica Hamrick, Thomas Kipf, Marinka Zitnik
Sat 1:45 p.m. - 2:45 p.m.
Poster Session (Poster)
Sat 2:45 p.m. - 3:00 p.m.
Concluding Remarks (Talk)

Author Information

Aditya Grover (Stanford University)
Paroma Varma (Stanford University)
Fred Sala (Stanford)
Chris Ré (Stanford)
Jennifer Neville (Purdue University)
Stefano Ermon (Stanford)
Steven Holtzen (University of California, Los Angeles)
