

Workshop

Relational Representation Learning

Aditya Grover · Paroma Varma · Frederic Sala · Christopher Ré · Jennifer Neville · Stefano Ermon · Steven Holtzen

Room 517 A

Sat 8 Dec, 5 a.m. PST

Relational reasoning, i.e., learning and inference with relational data, is key to understanding how objects interact with each other and give rise to complex phenomena in the everyday world. Well-known applications include knowledge base completion and social network analysis. Although many relational datasets are available, integrating them directly into modern machine learning algorithms and systems that rely on continuous, gradient-based optimization and make strong i.i.d. assumptions is challenging. Relational representation learning has the potential to overcome these obstacles: it enables the fusion of recent advances such as deep learning with relational reasoning to learn from high-dimensional data. The success of such methods can facilitate novel applications of relational reasoning in areas like scene understanding, visual question answering, reasoning over chemical and biological domains, program synthesis and analysis, and decision-making in multi-agent systems.

How should we rethink classical representation learning theory for relational representations? Classical approaches based on dimensionality reduction techniques such as Isomap and spectral decompositions still serve as strong baselines and are slowly paving the way for modern methods in relational representation learning based on random walks over graphs, message passing in neural networks, and group-invariant deep architectures, among many others. How can systems be designed and potentially deployed for large-scale representation learning? What are promising avenues, beyond traditional applications like knowledge base and social network analysis, that can benefit from relational representation learning?
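
As a concrete illustration of the message-passing family mentioned above, here is a minimal sketch, assuming a toy undirected graph, random initial node features, and a single mean-aggregation layer with an untrained weight matrix; it is not taken from the workshop materials, and all names and values are illustrative.

```python
# Minimal sketch of one message-passing step that turns relational structure
# (an edge list) into continuous node representations.
import numpy as np

rng = np.random.default_rng(0)

# Toy relational data: 4 nodes connected in a cycle.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
num_nodes, feat_dim, out_dim = 4, 8, 4

# Initial node features (in practice, derived from entity attributes).
H = rng.normal(size=(num_nodes, feat_dim))

# Row-normalized adjacency with self-loops, so A @ H averages each node's
# neighborhood (including the node itself).
A = np.eye(num_nodes)
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
A = A / A.sum(axis=1, keepdims=True)

# One message-passing layer: aggregate neighbors, transform, apply ReLU.
W = rng.normal(size=(feat_dim, out_dim))
H_next = np.maximum(A @ H @ W, 0.0)

print(H_next.shape)  # (4, 4): one embedding per node
```

In a full model, W would be learned end-to-end by gradient descent on a downstream relational task (e.g., link prediction), and several such layers would be stacked so that information propagates over multi-hop neighborhoods.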

This workshop aims to bring together researchers from both academia and industry interested in addressing various aspects of representation learning for relational reasoning. Topics include, but are not limited to:

* Algorithmic approaches. E.g., probabilistic generative models, message-passing neural networks, embedding methods, dimensionality reduction techniques, and group-invariant architectures for relational data
* Theoretical aspects. E.g., when and why do learned representations aid relational reasoning? How does the non-i.i.d. nature of relational data conflict with our current understanding of representation learning?
* Optimization and scalability challenges due to the inherent discreteness and curse of dimensionality of relational datasets
* Evaluation of learned relational representations
* Security and privacy challenges
* Domain-specific applications
* Any other topic of interest

