Over the last decade, deep networks have propelled machine learning to tasks previously considered far out of reach, such as human-level performance in image classification and game playing. However, research has also shown that deep networks are often brittle under distributional shift: human-imperceptible changes to the input can lead to absurd predictions. In many application areas, including physics, robotics, the social sciences, and the life sciences, this motivates the need for robustness and interpretability, so that deep networks can be trusted in practical applications. Interpretable and robust models can be constructed by incorporating prior knowledge into the model or the learning process as an inductive bias, thereby regularizing the model, avoiding overfitting, and making the model easier to understand for scientists who are not machine-learning experts. In recent years, researchers from different fields have proposed various combinations of domain knowledge and machine learning and have successfully applied these techniques to a range of applications.
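As a minimal, hypothetical sketch of the idea (not drawn from any specific talk below): one common way to encode prior knowledge as an inductive bias is to add a penalty term to the training objective. Here the "prior" is simply that the fitted curve should be smooth (small second differences), a stand-in for richer physical constraints; the function name and the choice of penalty are illustrative assumptions.

```python
import numpy as np

def fit_smooth(y, lam):
    """Fit x to noisy observations y while penalizing curvature of x.

    Minimizes ||x - y||^2 + lam * ||D2 x||^2, where D2 is the
    second-difference operator; the closed-form solution solves
    (I + lam * D2^T D2) x = y. The penalty encodes the prior belief
    that the underlying signal is smooth.
    """
    n = len(y)
    # Second-difference matrix: row i is e_i - 2*e_{i+1} + e_{i+2}
    D2 = np.diff(np.eye(n), n=2, axis=0)
    A = np.eye(n) + lam * D2.T @ D2
    return np.linalg.solve(A, y)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
clean = np.sin(2.0 * np.pi * t)
y = clean + 0.3 * rng.standard_normal(50)  # noisy observations

x_hat = fit_smooth(y, lam=10.0)
# The smoothness prior suppresses high-frequency noise while leaving
# the slowly varying signal largely intact.
```

The same pattern, a data-fit term plus a knowledge-derived penalty, underlies many of the physics-informed and structure-aware approaches presented at this workshop.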
| Introduction (Live) | |
| Thomas Pierrot - Learning Compositional Neural Programs for Continuous Control (Contributed Talk) | |
| Jessica Hamrick - Structured Computation and Representation in Deep Reinforcement Learning (Invited Talk) | |
| Manu Kalia - Deep learning of normal form autoencoders for universal, parameter-dependent dynamics (Contributed Talk) | |
| Rose Yu - Physics-Guided AI for Learning Spatiotemporal Dynamics (Invited Talk) | |
| Ferran Alet - Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time (Contributed Talk) | |
| Poster Session 1 (Poster Session) | |
| Frank Noé - PauliNet: Deep Neural Network Solution of the Electronic Schrödinger Equation (Invited Talk) | |
| Kimberly Stachenfeld - Graph Networks with Spectral Message Passing (Contributed Talk) | |
| Franziska Meier - Inductive Biases for Models and Learning-to-Learn (Invited Talk) | |
| Rui Wang - Shapley Explanation Networks (Contributed Talk) | |
| Jeanette Bohg - On the Role of Hierarchies for Learning Manipulation Skills (Invited Talk) | |
| Panel Discussion | |
| 14 - Learning Dynamical Systems Requires Rethinking Generalization (Poster) | |
| 15 - Lie Algebra Convolutional Networks with Automatic Symmetry Extraction (Poster) | |
| 16 - An Image is Worth 16 × 16 Tokens: Visual Priors for Efficient Image Synthesis with Transformers (Poster) | |
| 18 - Simulating Surface Wave Dynamics with Convolutional Networks (Poster) | |
| 19 - Choice of Representation Matters for Adversarial Robustness (Poster) | |
| 20 - SOrT-ing VQA Models: Contrastive Gradient Learning for Improved Consistency (Poster) | |
| 21 - Solving Physics Puzzles by Reasoning about Paths (Poster) | |
| 22 - Modelling Advertising Awareness, an Interpretable and Differentiable Approach (Poster) | |
| 24 - Deep Context-Aware Novelty Detection (Poster) | |
| 13 - Gradient-based Optimization for Multi-resource Spatial Coverage (Poster) | |
| 23 - Constraining neural networks output by an interpolating loss function with region priors (Poster) | |
| 3 - Improving the trustworthiness of image classification models by utilizing bounding-box annotations (Poster) | |
| 25 - Complex Skill Acquisition through Simple Skill Imitation Learning (Poster) | |
| Poster Session 2 (Posters) | |
| 26 - Is the Surrogate Model Interpretable? (Poster) | |
| 1 - Real-time Classification from Short Event-Camera Streams using Input-filtering Neural ODEs (Poster) | |
| 2 - Relevance of Rotationally Equivariant Convolutions for Predicting Molecular Properties (Poster) | |
| 4 - Physics-informed Generative Adversarial Networks for Sequence Generation with Limited Data (Poster) | |
| 5 - On the Structure of Cyclic Linear Disentangled Representations (Poster) | |
| 6 - Interpretable Models for Granger Causality Using Self-explaining Neural Networks (Poster) | |
| 8 - Individuality in the hive - Learning to embed lifetime social behavior of honey bees (Poster) | |
| 9 - Thermodynamic Consistent Neural Networks for Learning Material Interfacial Mechanics (Poster) | |
| 10 - A Trainable Optimal Transport Embedding for Feature Aggregation (Poster) | |
| 11 - A novel approach for semiconductor etching process with inductive biases (Poster) | |
| 12 - Physics-aware, data-driven discovery of slow and stable coarse-grained dynamics for high-dimensional multiscale systems (Poster) | |
| 12 - IV-Posterior: Inverse Value Estimation for Interpretable Policy Certificates (Poster) | |
| 7 - A Symmetric and Object-Centric World Model for Stochastic Environments (Poster) | |
| 17 - Uncovering How Neural Network Representations Vary with Width and Depth (Poster) | |
| Liwei Chen - Deep Learning Surrogates for Computational Fluid Dynamics (Contributed Talk) | |
| Maziar Raissi - Hidden Physics Models (Invited Talk) | |
| Closing Remarks (Live) | |