

Session: Demonstrations 1


Tue 8 Dec. 6:00 - 6:20 PST

MONICA: MObile Neural voIce Command Assistant for mobile games

Youshin Lim · Yoonseok Hong · Shounan An · Jaegeon Jo · HANOOK LEE · Su Hyeon Jeong · Yoo Hyun Eum · Sunwoo Im · Insoo Oh

Recently, deep learning based on-device automatic speech recognition (ASR) has shown breakthrough progress. However, there is no concrete work in the literature on integrating on-device ASR into mobile games as a voice user interface. The difficulty of deploying ASR in mobile games is that most game users want a quickly responding voice command interface with no time delay, so there is a need for an on-device ASR system with minimal memory and CPU cost. To this end, we propose a transformer-based on-device ASR named MObile Neural voIce Command Assistant (MONICA) for mobile games. With MONICA, users can perform game actions using voice commands alone, such as "enter the monster dungeon", "start the auto-quest", or "open the inventory". To the best of our knowledge, this is the first work to address on-device ASR for mobile games at the service level. MONICA reduces the number of parameters in the neural network to 10% of the baseline transformer model and speeds up inference by more than 5 times, with minimal degradation in recognition accuracy. We perform a web-based interactive live demonstration of MONICA as a voice user interface for an online chess game. A demonstration video also shows MONICA integrated into A3: Still Alive, a major game from Netmarble serviced in South Korea. MONICA will launch as a voice command interface for all A3 users later this year. Finally, we release a mobile application so that you can download and test the efficiency of MONICA on your own mobile device.
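The command-dispatch side of such an interface can be sketched in a few lines: once the on-device ASR produces a transcript, it must be matched to a known game action. The command strings below come from the abstract, but the action names and the fuzzy-matching threshold are our own illustrative choices, not MONICA's actual implementation.

```python
import difflib

# Hypothetical command inventory; MONICA's real action set is not public.
COMMANDS = {
    "enter the monster dungeon": "ENTER_DUNGEON",
    "start the auto-quest": "START_AUTO_QUEST",
    "open the inventory": "OPEN_INVENTORY",
}

def dispatch(transcript):
    """Map an ASR transcript to the closest known game action, or None."""
    match = difflib.get_close_matches(transcript.lower(), COMMANDS, n=1, cutoff=0.6)
    return COMMANDS[match[0]] if match else None

print(dispatch("open the inventory"))  # exact hit -> OPEN_INVENTORY
print(dispatch("start auto quest"))    # fuzzy hit -> START_AUTO_QUEST
```

Fuzzy matching gives the interface some robustness to small ASR errors without requiring the model to emit a command token exactly.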

Tue 8 Dec. 6:20 - 6:40 PST

tspDB: Time Series Predict DB

Anish Agarwal · Abdullah Alomar · Devavrat Shah

An important goal in Systems for ML is to make ML broadly accessible. Arguably, the major bottleneck is not the lack of access to prediction algorithms, for which many excellent open-source ML libraries exist. Rather, it is the complex data engineering required to take data from a datastore or database (DB) into a particular work-environment format (e.g., a Spark DataFrame) so that a prediction algorithm can be trained, and to do so in a scalable manner. This is further exacerbated as ML algorithms are now trained on large volumes of data, yet we need predictions in real time. This is especially true in a variety of time-series applications such as finance and real-time control systems.

Towards easing this bottleneck, we showcase tspDB – a system that enables predictive query functionality in any existing time-series relational DB (open-source, available at tspDB.mit.edu). Specifically, tspDB enables two types of predictive queries for time-series data: (i) imputing a missing/noisy observation for a data point we do observe; (ii) forecasting a data point in the future. In tspDB the ML workflow is entirely abstracted away from the user; instead, a single interface is exposed to answer both predictive queries and standard SQL SELECT queries. Pleasingly, we find tspDB statistically outperforms industry-standard deep-learning-based time-series methods (e.g. DeepAR, LSTMs) on benchmark time-series datasets; further, tspDB's computational performance is close to the time it takes to just insert and read data from PostgreSQL, making it a real-time prediction system.
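The single-interface idea can be sketched with a toy stand-in: one entry point that answers both standard SELECTs and forecasts over a time-series table. The PREDICT syntax and the linear extrapolation below are purely illustrative and are not tspDB's actual query language or prediction algorithm.

```python
import sqlite3

# Toy time-series table: value = 100 + 2*t for t = 0..9.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (t INTEGER, value REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(t, 100.0 + 2.0 * t) for t in range(10)])

def query(sql):
    """One interface for both standard SQL and (toy) predictive queries."""
    if sql.upper().startswith("PREDICT"):
        # e.g. "PREDICT value FROM sales AT 12": forecast by extrapolating
        # the average slope of the observed series (illustrative only).
        horizon_t = int(sql.split()[-1])
        vals = [r[0] for r in
                conn.execute("SELECT value FROM sales ORDER BY t")]
        slope = (vals[-1] - vals[0]) / (len(vals) - 1)
        last_t = len(vals) - 1
        return vals[-1] + slope * (horizon_t - last_t)
    return conn.execute(sql).fetchall()

print(query("SELECT value FROM sales WHERE t = 3"))  # standard SQL path
print(query("PREDICT value FROM sales AT 12"))       # forecast: 124.0
```

The point of the sketch is the shape of the API: the caller never touches a model object or a training loop, only queries.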

The demo itself will be run entirely through a Google Colab notebook that users can access through a browser and will require no software installation. The notebook will walk through how to use tspDB to make predictive SQL queries on retail, energy and financial data, and how to measure its computational performance with respect to standard SQL queries. A pre-recording of the entire demo will also be provided.

Tue 8 Dec. 6:40 - 7:00 PST

Probing Embedding Spaces in Deep Neural Networks

Junior Rojas · Bilal Alsallakh · Edward Wang · Sara Zhang · Jonathan Reynolds · Narine Kokhlikyan · Vivek Miglani · Carlos Araya · Tony Chu · Orion Reblitz-Richardson

We demonstrate an interactive UI to explore neural embedding spaces by probing directions in these spaces, determined by component analysis methods such as PCA and ICA. It provides (1) fluid overview+detail exploration of these directions, with multi-modal viewers to inspect individual samples including images, audio clips, words, and sample attributes; (2) a dedicated view to analyze the development of embedding spaces over multiple layers; and (3) a dedicated view to compare embedding spaces across different models.
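Probing a direction found by component analysis amounts to projecting each sample's embedding onto it. A minimal pure-Python sketch, using power iteration to recover the top PCA direction (the demo itself relies on full PCA/ICA implementations rather than this toy routine):

```python
def top_pca_direction(embeddings, iters=200):
    """Return the top principal direction (unit vector) and each
    sample's projection onto it, via power iteration on the covariance."""
    d = len(embeddings[0])
    n = len(embeddings)
    mean = [sum(e[j] for e in embeddings) / n for j in range(d)]
    centered = [[e[j] - mean[j] for j in range(d)] for e in embeddings]
    v = [1.0] * d  # deterministic start vector
    for _ in range(iters):
        # One power-iteration step: v <- C v, with C = X^T X / n.
        proj = [sum(x[j] * v[j] for j in range(d)) for x in centered]
        v = [sum(proj[i] * centered[i][j] for i in range(n)) / n
             for j in range(d)]
        norm = sum(c * c for c in v) ** 0.5
        v = [c / norm for c in v]
    scores = [sum(x[j] * v[j] for j in range(d)) for x in centered]
    return v, scores

# Toy embeddings whose variance is dominated by the first axis:
v, scores = top_pca_direction(
    [[5.0, 0.1], [-5.0, -0.1], [3.0, 0.0], [-3.0, 0.05]])
```

Sorting samples by `scores` is exactly the "probe a direction" interaction: it orders the collection along one interpretable axis of the embedding space.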

Tue 8 Dec. 7:00 - 7:20 PST

IBM Federated Learning Community Edition: An Interactive Demonstration

Laura Wynter · Chaitanya Kumar · Pengqian Yu · Mikhail Yurochkin · Amogh Tarcar

Federated Learning (FL) is a means to train machine learning models without centralizing data. To deal with the ever-growing demands for training data whilst respecting data privacy and confidentiality, it has become important to move from centralized to federated machine learning. The IBM Federated Learning Community Edition is one means for achieving this goal; it is a platform and library, free to use for non-commercial purposes, with built-in features that facilitate enterprise-strength applications: \url{https://github.com/IBM/federated-learning-lib}. This interactive demo session highlights several featured algorithms available only in the IBM Federated Learning Community Edition, and provides tutorials, audience-interactive examples, and a guest speaker from the tech company Persistent Systems who has used the IBM Federated Learning Community Edition for Covid-19 outcome prediction for hospitals.
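The core federated idea, training locally and aggregating only model parameters, can be sketched with federated averaging (FedAvg) on a toy one-parameter linear model. This illustrates the general FL pattern, not the actual IBM Federated Learning API.

```python
def local_step(w, data, lr=0.05, epochs=50):
    """Fit y ~ w*x on one party's private data with gradient descent."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(global_w, parties, rounds=5):
    """Federated averaging: raw data never leaves a party."""
    for _ in range(rounds):
        updates, sizes = [], []
        for data in parties:              # each party trains locally...
            updates.append(local_step(global_w, data))
            sizes.append(len(data))
        total = sum(sizes)                # ...and only weights are averaged
        global_w = sum(w * n for w, n in zip(updates, sizes)) / total
    return global_w

# Two parties whose disjoint data share the underlying rule y = 3x.
parties = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
print(round(fed_avg(0.0, parties), 3))  # converges to 3.0
```

Only the scalar weight crosses party boundaries here; that separation of data from aggregation is what the platform provides at enterprise scale.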

Tue 8 Dec. 7:20 - 7:40 PST

MolDesigner: Interactive Design of Efficacious Drugs with Deep Learning

Kexin Huang · Tianfan Fu · Dawood Khan · Ali Abid · Ali Abdalla · Abubaker Abid · Lucas Glass · Marinka Zitnik · Cao Xiao · Jimeng Sun

The efficacy of a drug depends on its binding affinity to the therapeutic target and on its pharmacokinetics. Deep learning (DL) has demonstrated remarkable progress in predicting drug efficacy. We develop MolDesigner, a human-in-the-loop web user interface (UI), to help drug developers leverage DL predictions to design more effective drugs. A developer can draw a drug molecule in the interface. In the backend, more than 17 state-of-the-art DL models generate predictions on important indices that are crucial for a drug's efficacy. Based on these predictions, drug developers can edit the drug molecule and iterate until satisfied. MolDesigner makes predictions in real time, with a latency of less than a second.

Tue 8 Dec. 7:40 - 8:00 PST

MosAIc: Finding Artistic Connections across Culture with Conditional Image Retrieval

Mark Hamilton · Stephanie Fu · Mindren Lu · Johnny Bui · Margaret Wang · Felix Tran · Marina Rogers · Darius Bopp · Christopher Hoder · Lei Zhang · Bill Freeman

We introduce MosAIc, an interactive website that allows users to discover hidden connections between works of art across culture, media, artists, and time. MosAIc finds "visual analogies", or works of art with the same semantic structure but very different cultural and artistic context, within the combined works of the Metropolitan Museum of Art and the Rijksmuseum. Users can take any work from the collection and find analogous works in particular genres, cultures, or media of art. Our approach finds visual analogies that mirror larger scale cultural trends, such as the flows of artistic techniques across the globe due to trade routes. Our approach is based on generalizing deep image retrieval methods to flexibly adapt to logical filters and predicates. This allows image retrieval methods to find close matches in different regions of the image collection, an approach we call "Conditional Image Retrieval".
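The retrieval primitive behind Conditional Image Retrieval can be sketched directly: a nearest-neighbour search over feature vectors, restricted to items satisfying a logical predicate on metadata. The vectors and metadata below are invented for illustration; the real system searches deep image features over museum collections.

```python
def conditional_retrieve(query_vec, collection, predicate, k=1):
    """Nearest neighbours of query_vec among items passing the predicate."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    candidates = [item for item in collection if predicate(item)]
    return sorted(candidates, key=lambda it: sq_dist(it["vec"], query_vec))[:k]

# Made-up miniature collection with 2-D "features":
collection = [
    {"title": "Dutch still life", "culture": "Dutch",    "vec": [0.9, 0.1]},
    {"title": "Edo screen",       "culture": "Japanese", "vec": [0.8, 0.2]},
    {"title": "Delft tile",       "culture": "Dutch",    "vec": [0.1, 0.9]},
]

# Closest *Japanese* work to a query resembling the still life:
hits = conditional_retrieve([0.9, 0.1], collection,
                            lambda it: it["culture"] == "Japanese")
print(hits[0]["title"])  # Edo screen
```

Changing only the predicate re-targets the same search to a different genre, culture, or medium, which is what lets one query surface analogies across regions of the collection.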

Tue 8 Dec. 8:00 - 8:20 PST

RetaiL: Open your own grocery store to reduce waste

Sami Jullien · Sebastian Schelter · Maarten de Rijke

Food waste is a major societal, environmental, and financial problem, and grocery stores are among its main actors. Food waste reduction policies for grocery stores are complex, due to a large number of uncertain, heterogeneous factors such as not-fully-predictable demand.

Directly comparing food waste reduction policies through field experimentation runs contrary to the very goal of reducing food waste. We therefore propose RetaiL, a new simulation framework for optimising grocery store restocking to reduce waste. RetaiL lets its users create synthetic product data based on real data from a European retailer. It then matches simulated customer demand to a restocking policy for those items, and evaluates a utility function based on generated waste, item availability to customers, and sales. This allows RetaiL to serve as a new reinforcement learning task, in which the agent acts on restocking levels given the state of the store and receives this utility function as a reward.
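The task maps naturally onto a gym-style environment interface. The demand distribution, the one-step perishability rule, and the utility weights in this sketch are illustrative placeholders, not RetaiL's actual ones.

```python
import random

class GroceryEnv:
    """Toy restocking environment: pick a restocking level, stochastic
    demand arrives, and the reward trades off sales, availability, and waste."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.stock = 0

    def reset(self):
        self.stock = 0
        return self.stock

    def step(self, restock_level):
        self.stock += restock_level
        demand = self.rng.randint(5, 15)
        sales = min(self.stock, demand)
        unmet = demand - sales            # availability shortfall
        self.stock -= sales
        waste = self.stock                # unsold items perish each step
        self.stock = 0
        reward = sales - 0.5 * waste - 1.0 * unmet
        return self.stock, reward, False, {"demand": demand, "waste": waste}

env = GroceryEnv()
env.reset()
state, reward, done, info = env.step(10)
```

An RL agent interacting with `step` faces exactly the tension the abstract describes: overstocking creates waste, understocking hurts availability and sales.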

In this demo, we let you open your own grocery store and manage its orders to the warehouse. Can you help in the fight against food waste?

Tue 8 Dec. 8:20 - 8:40 PST

PrototypeML: Visual Design of Arbitrarily Complex Neural Networks

Daniel Harris

Neural network architectures are most often conceptually designed and described in visual terms, but are implemented by writing error-prone code. PrototypeML is a neural network development environment that bridges the dichotomy between the design and development processes: it provides a highly intuitive visual neural network design interface that supports (yet abstracts) the full dynamic graph capabilities of the PyTorch deep learning framework, reduces model design and development time, makes debugging easier, and automates many framework and code writing idiosyncrasies. Through a hybrid code and visual approach, PrototypeML resolves deep learning development deficiencies without limiting network expressiveness or reducing code quality, and provides real-world benefits for research, industry and teaching.

Join us for a live overview (and Q&A) of the PrototypeML platform during the conference, and explore the on-demand interactive platform demonstration: https://prototypeml.com/neurips2020

Tue 8 Dec. 8:40 - 9:00 PST

A Knowledge Graph Reasoning Prototype

Lihui Liu · Boxin Du · Heng Ji · Hanghang Tong

Reasoning is a fundamental capability for distilling valuable information from knowledge graphs. Existing work has primarily focused on point-wise reasoning, including search, link prediction, entity prediction, subgraph matching, and so on. We introduce comparative reasoning over knowledge graphs, which aims to infer the commonality and inconsistency with respect to multiple clues.

We develop a large-scale prototype system that integrates various point-wise reasoning functions as well as the newly proposed comparative reasoning capability over knowledge graphs. We present both the system architecture and its key functions.
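On small hand-built triples, comparative reasoning can be illustrated as finding what several clues agree on (commonality) and where they assign different objects to the same subject–relation pair (inconsistency). This toy sketch is ours; the prototype operates over large-scale knowledge graphs with far richer reasoning functions.

```python
def compare_clues(clues):
    """Each clue is a set of (subject, relation, object) triples.
    Returns triples shared by all clues, and pairs of conflicting triples."""
    commonality = set.intersection(*map(set, clues))
    inconsistency = set()
    for i, a in enumerate(clues):
        for b in clues[i + 1:]:
            for (s, r, o) in a:
                for (s2, r2, o2) in b:
                    if s == s2 and r == r2 and o != o2:
                        inconsistency.add(((s, r, o), (s2, r2, o2)))
    return commonality, inconsistency

clue1 = {("Mona Lisa", "painted_by", "Da Vinci"),
         ("Mona Lisa", "located_in", "Louvre")}
clue2 = {("Mona Lisa", "painted_by", "Da Vinci"),
         ("Mona Lisa", "located_in", "Prado")}
common, conflict = compare_clues([clue1, clue2])
# common contains the agreed triple; conflict flags Louvre vs. Prado.
```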

Tue 8 Dec. 9:00 - 9:20 PST

Shared Interest: Human Annotations vs. AI Saliency

Angie Boggust · Benjamin Hoover · Arvind Satyanarayan · Hendrik Strobelt

As deep learning is applied to high-stakes scenarios, it is increasingly important that a model is not only making accurate decisions, but doing so for the right reasons. Common explainability methods provide pixel attributions as an explanation for a model's decision on a single image; however, using input-level explanations to understand patterns in model behavior is challenging for large datasets, as it requires selecting and analyzing an interesting subset of inputs. Utilizing human-generated ground-truth object locations, we introduce metrics for ranking inputs based on the correspondence between an input's ground-truth location and the explainability method's explanation region. Our methodology is agnostic to model architecture, explanation method, and dataset, allowing it to be applied to many tasks. We demo our method on two high-profile scenarios: a widely used image classification model and a melanoma prediction model, showing that it surfaces patterns in model behavior by aligning model explanations with human annotations.
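The ranking idea can be sketched as an intersection-over-union score between the human-annotated region and the explanation region, computed per input and then sorted. The flat binary masks and the specific use of IoU here are our illustrative assumptions, not necessarily the authors' exact metrics.

```python
def iou(truth, saliency):
    """IoU between two binary masks given as flat 0/1 lists."""
    inter = sum(1 for t, s in zip(truth, saliency) if t and s)
    union = sum(1 for t, s in zip(truth, saliency) if t or s)
    return inter / union if union else 0.0

def rank_inputs(samples):
    """samples: (name, truth_mask, saliency_mask) tuples; best match first."""
    return sorted(samples, key=lambda x: iou(x[1], x[2]), reverse=True)

samples = [
    ("cat.png",  [1, 1, 0, 0], [1, 1, 0, 0]),  # perfect agreement
    ("dog.png",  [1, 1, 0, 0], [0, 0, 1, 1]),  # explanation misses object
    ("bird.png", [1, 1, 1, 0], [0, 1, 1, 0]),  # partial overlap
]
ranked = rank_inputs(samples)
print([name for name, _, _ in ranked])  # ['cat.png', 'bird.png', 'dog.png']
```

Sorting from the bottom of such a ranking surfaces exactly the interesting subset: inputs where the model's stated evidence diverges from what humans consider the object.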