Demonstrations must show novel technology and must run online during the conference. Unlike poster presentations or slide shows, interaction with the audience is a critical element. Therefore, demonstrators' creativity in proposing new ways for interaction and engagement to fully leverage this year's virtual conference format will be particularly relevant for selection. This session includes the following demonstrations:
Thu 8:30 a.m. - 8:35 a.m. | Intro (Talk) | Marco Ciccone
Thu 8:35 a.m. - 8:50 a.m. | PYLON: A PyTorch Framework for Learning with Constraints (Live Demo)
Deep learning excels at learning task information from large amounts of data; however, it struggles to learn from declarative high-level knowledge that can be expressed more succinctly and directly. In this work, we introduce PYLON, a neural-symbolic training framework that builds on PyTorch to augment imperatively trained models with declaratively specified knowledge. PYLON lets users programmatically specify constraints as Python functions and compiles them into a differentiable loss, thus training predictive models that fit the data while satisfying the specified constraints. PYLON includes both exact and approximate compilers to efficiently compute the loss, employing fuzzy logic, sampling methods, and circuits, ensuring scalability even to complex models and constraints. Crucially, a guiding principle in designing PYLON is the ease with which any existing deep learning codebase can be extended to learn from constraints using only a few lines: a function that expresses the constraint and code to incorporate it as a loss. Our demo comprises models in NLP, computer vision, logical games, and knowledge graphs that can be interactively trained using constraints as supervision. (A minimal sketch of the constraint-as-loss pattern follows the author list.)
Kareem Ahmed · Tao Li · Nu Mai Thy Ton · Quan Guo · Kai-Wei Chang · Parisa Kordjamshidi · Vivek Srikumar · Guy Van den Broeck · Sameer Singh
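As a hedged illustration of the pattern the abstract describes, the sketch below hand-rolls a miniature "exact compiler" for a small space of independent binary labels: the constraint is an ordinary Python function, and the loss is the negative log-probability that a prediction satisfies it. This is not PYLON's actual API; the names `exactly_one` and `constraint_loss` are hypothetical, and PYLON's real compilers (fuzzy logic, sampling, circuits) scale well beyond brute-force enumeration.

```python
import itertools
import torch

def exactly_one(y):
    # Declarative constraint written as a plain Python function over
    # candidate label assignments: exactly one binary label is active.
    return y.sum(dim=-1) == 1

def constraint_loss(logits, constraint):
    # Miniature "exact compiler" (illustrative only): enumerate every
    # 0/1 assignment of a small label space, keep those the constraint
    # accepts, and return -log of their total probability under the
    # model, which is differentiable with respect to the logits.
    n = logits.shape[-1]
    probs = torch.sigmoid(logits)                         # (B, n)
    grid = torch.tensor(
        list(itertools.product([0.0, 1.0], repeat=n))
    )                                                     # (2^n, n)
    sat = grid[constraint(grid)]                          # (k, n) satisfying rows
    p, y = probs.unsqueeze(1), sat.unsqueeze(0)           # broadcast to (B, k, n)
    prob_each = (y * p + (1 - y) * (1 - p)).prod(dim=-1)  # (B, k)
    return -prob_each.sum(dim=-1).log().mean()

# Usage: add the constraint loss to any existing task loss.
logits = torch.randn(8, 4, requires_grad=True)
loss = constraint_loss(logits, exactly_one)
loss.backward()
```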
Thu 8:50 a.m. - 9:05 a.m. | Real-Time and Accurate Self-Supervised Monocular Depth Estimation on Mobile Device (Live Demo)
This demonstration showcases our novel innovations in self-supervised monocular depth estimation. First, we enhance self-supervised training with semantic information, which reduces the error by 12% and achieves state-of-the-art performance. Second, we improve the backbone architecture using a scalable neural architecture search method that optimizes directly for inference latency on a target device, enabling operation at more than 30 FPS. We demonstrate these techniques on a smartphone powered by a Snapdragon® Mobile Platform. (A generic sketch of folding semantic supervision into a photometric loss follows the author list.)
Hong Cai · Yinhao Zhu · Janarbek Matai · Fatih Porikli · Fei Yin · Tushar Singhal · Bharath Ramaswamy · Frank Mayer · Chirag Patel · Parham Noorzad · Andrii Skliar · Tijmen Blankevoort · Joseph Soriaga · Ron Tindall · Pat Lawlor
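The abstract does not spell out the training objective, so the following sketch shows one generic way semantic information can enter self-supervised depth training: an auxiliary segmentation head, assumed to share the depth encoder, adds a supervised term to the usual photometric reconstruction loss. This is an illustration under stated assumptions, not the authors' implementation; `joint_loss` and the weight `lam` are hypothetical.

```python
import torch
import torch.nn.functional as F

def photometric_loss(target, reconstructed):
    # Core self-supervised signal: a source frame warped into the
    # target view via predicted depth and pose should match the target
    # frame. (Real systems mix SSIM with L1; plain L1 keeps this short.)
    return (target - reconstructed).abs().mean()

def joint_loss(target, reconstructed, sem_logits, sem_labels, lam=0.1):
    # One plausible reading of "enhance training with semantic
    # information": a segmentation head sharing the depth encoder adds
    # a cross-entropy term, so semantics shape the features the depth
    # decoder consumes. The weight `lam` is an assumption.
    return photometric_loss(target, reconstructed) + \
        lam * F.cross_entropy(sem_logits, sem_labels)

# Usage with stand-in tensors: images (B,3,H,W), semantic logits
# (B,C,H,W), integer label masks (B,H,W).
tgt, rec = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
sem_logits = torch.randn(2, 19, 64, 64, requires_grad=True)
sem_labels = torch.randint(0, 19, (2, 64, 64))
print(joint_loss(tgt, rec, sem_logits, sem_labels))
```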
Thu 9:05 a.m. - 9:20 a.m. | Unsupervised Indoor Wi-Fi Positioning (Live Demo)
Sensing using radio frequency (RF) signals such as Wi-Fi has garnered significant attention in recent years. Such signals can be used, for instance, for so-called passive indoor positioning of humans: the Wi-Fi signal acts as a bi-static radar to determine the location of a human subject who is not carrying any Wi-Fi device. While previous works have demonstrated that positioning is possible, their algorithms rely on precise position labels for training and only work in confined laboratory environments that must remain invariant. We recently proposed two novel algorithms for passive positioning. The first derives a self-supervision signal from a combined clustering and triplet loss. The second is modality-agnostic and is based on low-dimensional manifold learning facilitated by optimal transport. Neither algorithm requires the dense labels that state-of-the-art methods depend on. In this demo, we show results of these two algorithms in real-world environments, i.e., outside of carefully controlled labs. The presented results demonstrate that our methods surpass the state of the art by a wide margin. (A generic sketch of the clustering-plus-triplet-loss idea follows the author list.)
Farhad G. Zanjani · Ilia Karmanov · Hanno Ackermann · Daniel Dijkman · Max Welling · Ishaque Kadampot · Simone Merlin · Steve Shellhammer · Rui Liang · Brian Buesker · Harshit Joshi · Vamsi Vegunta · Raamkumar Balamurthi · Bibhu Mohanty · Joseph Soriaga · Ron Tindall · Pat Lawlor
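For the first algorithm, the abstract names the ingredients (clustering combined with a triplet loss) but not the glue between them. The sketch below shows one generic realization, not the authors' algorithm, assuming `emb` holds embeddings of Wi-Fi channel measurements from some encoder; the function name, cluster count, and margin are assumptions.

```python
import random
import torch
from sklearn.cluster import KMeans

def clustering_triplet_loss(emb, n_clusters=8, margin=1.0):
    # Cluster the current embeddings and use cluster ids as pseudo
    # position labels: same-cluster pairs are pulled together,
    # different-cluster pairs pushed apart. No true location labels
    # are needed, matching the "unsupervised" setting.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        emb.detach().cpu().numpy()
    )
    anchors, positives, negatives = [], [], []
    for i, li in enumerate(labels):
        same = [j for j, lj in enumerate(labels) if lj == li and j != i]
        diff = [j for j, lj in enumerate(labels) if lj != li]
        if same and diff:
            anchors.append(i)
            positives.append(random.choice(same))
            negatives.append(random.choice(diff))
    if not anchors:  # degenerate clustering, e.g. all singletons
        return emb.new_zeros(())
    triplet = torch.nn.TripletMarginLoss(margin=margin)
    return triplet(emb[anchors], emb[positives], emb[negatives])

# Usage with stand-in channel embeddings from some encoder.
emb = torch.randn(64, 16, requires_grad=True)
loss = clustering_triplet_loss(emb)
loss.backward()
```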
Thu 9:20 a.m. - 9:35 a.m. | Prospective Explanations: An Interactive Mechanism for Model Understanding (Live Demo)
We demonstrate a system for prospective explanations of black-box models for regression and classification tasks with structured data. Prospective explanations aim to show how models work by highlighting likely changes in model outcomes under changes in input. This is in contrast to most post-hoc explainability methods, which aim to justify a decision retrospectively. Our system is designed to provide fast estimates of changes in outcomes for any arbitrary exploratory query from users. Such queries are typically partial, i.e., they involve only a subset of features; outcome labels are therefore shown as likelihoods. Repeated queries can thus indicate which aspects of the feature space are more likely to influence the target variable. Fast interactive exploration is made possible by a surrogate Bayesian network model trained on model labels, with some reasonable assumptions on architectures. The main advantages of our approach are that (a) inference is very fast and supports real-time feedback, allowing for interactivity, (b) inference can be done with partial information on features, and (c) any indirect effects are also considered in estimating target class distributions. (A toy sketch of partial-evidence queries against a surrogate Bayesian network follows the author list.)
Rahul Nair · Pierpaolo Tommasi
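As a rough sketch of the surrogate idea, the toy below fits a Bayesian network on data labeled by a stand-in "black-box" rule and answers a partial-evidence query with a likelihood over outcomes. It uses pgmpy purely for illustration (the demo does not name its tooling), and the two-feature structure, variable names, and synthetic data are all assumptions.

```python
import numpy as np
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.inference import VariableElimination

rng = np.random.default_rng(0)
# Toy discretized features; "pred" plays the role of the black-box
# model's label on each row (here just a noisy threshold rule).
df = pd.DataFrame({
    "age": rng.integers(0, 3, 1000),
    "income": rng.integers(0, 3, 1000),
})
df["pred"] = ((df["age"] + df["income"] + rng.integers(0, 2, 1000)) > 2).astype(int)

# Hand-picked structure wiring features to the surrogate's target;
# the real system would learn or engineer this.
bn = BayesianNetwork([("age", "pred"), ("income", "pred")])
bn.fit(df)  # maximum-likelihood CPDs from the model-labeled data

# Partial query: only "age" is given, so the answer is a likelihood
# over outcomes rather than a hard label.
infer = VariableElimination(bn)
print(infer.query(["pred"], evidence={"age": 2}))
```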