Session: Tuesday Demonstrations
Reproducing Machine Learning Research on Binder
Jessica Forde · Tim Head · Chris Holdgraf · M Pacer · Félix-Antoine Fortin · Fernando Perez
Full author list: Jessica Zosa Forde, Matthias Bussonnier, Félix-Antoine Fortin, Brian Granger, Tim Head, Chris Holdgraf, Paul Ivanov, Kyle Kelley, Fernando Perez, M Pacer, Yuvi Panda, Gladys Nalvarte, Min Ragan-Kelley, Zach Sailer, Steven Silvester, Erik Sundell, Carol Willing
Researchers have encouraged the machine learning community to produce reproducible, complete pipelines for code. Binder is an open-source service that lets users share interactive, reproducible science. It uses configuration files that are standard in software engineering to create interactive versions of research hosted on sites like GitHub with minimal additional effort. By leveraging tools such as Kubernetes, it manages the technical complexity of creating containers that capture a repository and its dependencies, generating user sessions, and providing public URLs to share the built images with others. It combines two open-source projects within the Jupyter ecosystem: repo2docker and JupyterHub. repo2docker builds a Docker image of the git repository specified by the user, installs dependencies, and provides various front-ends for exploring the image. JupyterHub then spawns and serves instances of these built images, using Kubernetes to scale as needed. Our free public deployment, mybinder.org, features over 3,000 repositories on topics such as LIGO's gravitational waves, textbooks on Kalman filters, and open-source libraries such as PyMC3. As of September 2018, it serves an average of 8,000 users per day and has served as many as 22,000 in a single day. Our demonstration shares a Binder deployment that features machine learning research papers from GitHub.
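As a sketch of the workflow underneath (the repository URL is a placeholder), any repository carrying a standard dependency file such as requirements.txt can be built and launched locally with the repo2docker command-line tool:

    # Install the builder, then turn a repository into a runnable Jupyter image
    pip install jupyter-repo2docker
    jupyter-repo2docker https://github.com/<user>/<repository>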
Deep Neural Networks Running Onboard Anki’s Robot, Vector
Lorenzo Riano · Andrew Stein · Mark Palatucci
In August 2018, Anki unveiled Vector, a home robot focused on personality and character. Vector is a palm-sized bot that packs unprecedented functionality into a very computationally constrained package. Among other capabilities, he uses deep neural networks to recognize elements of interest in the world, such as people, hands, and other objects. At NIPS, we will discuss how we designed and tested the neural network architectures, the unique constraints we faced, and the solutions we developed. Since our network has to run on hundreds of thousands of robots worldwide, we had to develop unique metrics and testing methodologies to ensure that it provides the right data to the various components that depend on it. We will describe how we limited the network's footprint by employing quantization and pruning, and more generally how we run neural networks on a constrained CPU. We will also show how perception is integrated into the larger behavioral system to create a robot that is compelling and fun to interact with.
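Anki's on-robot toolchain is proprietary, but as a generic illustration of the kind of post-training quantization mentioned above, a sketch using TensorFlow Lite's converter (the model paths are placeholders, not Vector's actual models) looks roughly like this:

    import tensorflow as tf

    # Placeholder path; Vector's actual models and toolchain are not public.
    converter = tf.lite.TFLiteConverter.from_saved_model("person_detector_model")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables weight quantization
    with open("person_detector_quantized.tflite", "wb") as f:
        f.write(converter.convert())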
TextWorld: A Learning Environment for Text-based Games
Marc-Alexandre Côté · Wendy Tay · Xingdi Yuan
Text-based games (e.g. Zork, Colossal Cave) are complex, interactive simulations in which text describes the game state and players make progress by entering text commands. They are fertile ground for language-focused machine learning research. In addition to language understanding, successful play requires skills like long-term memory and planning, exploration (trial and error), and common sense.
This demonstration presents TextWorld, a Python-based learning environment for text-based games. TextWorld can be used to play existing games, much as the Arcade Learning Environment (ALE) does for Atari games. The real novelty, however, is that TextWorld can generate new text-based games of a desired complexity. Its generative mechanisms give precise control over the difficulty, scope, and language of the constructed games, and it can therefore be used to study generalization and transfer learning.
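As a brief sketch of the agent loop TextWorld supports (the game file path is a placeholder and the API shown is abridged), an existing game can be played programmatically:

    import textworld

    # Load an existing interactive fiction game (placeholder path).
    env = textworld.start("games/zork1.z5")
    game_state = env.reset()
    reward, done = 0, False
    while not done:
        command = input("> ")  # a learning agent would generate this text command
        game_state, reward, done = env.step(command)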
Ruuh: A Deep Learning Based Conversational Social Agent
Puneet Agrawal · Manoj Kumar Chinnakotla · Sonam Damani · Meghana Joshi · Kedhar Nath Narahari · Khyatti Gupta · Nitya Raviprakash · Umang Gupta · Ankush Chatterjee · Abhishek Mathur · Sneha Magapu
Dialogue systems and conversational agents are becoming increasingly popular in modern society, but building an agent capable of holding intelligent conversation with its users is a challenging problem for artificial intelligence. In this demo, we present a deep learning based conversational social agent called "Ruuh" (facebook.com/Ruuh), designed by a team at Microsoft India to converse on a wide range of topics. Ruuh needs to think beyond the utilitarian notion of merely generating "relevant" responses and meet a wider range of user social needs, such as expressing happiness when a user's favorite team wins or sharing a cute comment when shown pictures of the user's pet. The agent also needs to detect and respond to abusive language, sensitive topics, and trolling behavior from users. Our agent has interacted with over 2 million real-world users, generating over 150 million user conversations to date.
Game for Detecting Backdoor Attacks on Deep Neural Networks using Activation Clustering
Casey Dugan · Werner Geyer · Narendra Nath Joshi · Ingrid Lange · Dustin Ramsey Torres · Bryant Chen · Nathalie Baracaldo · Heiko Ludwig
Deep learning to improve quality control in pharmaceutical manufacturing
Michael Sass Hansen · Sebastian Brandes Kraaijenzank
This demo shows how deep learning can be applied in the pharmaceutical industry, specifically to reduce rejection rates of non-defective products in drug manufacturing. Advancements in convolutional neural networks for classification and variational autoencoders for anomaly detection have produced such impressive results over the past couple of years that the technology is now becoming mature enough to be useful in the real world. Many drug manufacturers rely on highly manual, expensive processes for their quality control operations, and until now they have not had a technological alternative advanced enough to optimize this part of their manufacturing pipeline. Deep learning is a true game changer in this industry: increasing efficiency in drug production could lead to large price reductions, making modern medicine available to more people in need, especially among low-income groups. This demo shows how these advantages can be obtained. We will bring a professional CVT machine (capable of inspecting up to 600 cartridges or vials per minute) fitted with a chain of neural networks that run in real time to analyze products and decide whether to release or reject each item that passes by. Attendees will be able to interact with the underlying models through an easy-to-use interface that allows models to be retrained on new datasets and deployed. The goal of the demo is to leave attendees with the impression that neural nets are indeed ready to be deployed in highly regulated industries, with the purpose of making a positive difference for all of us.
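As a minimal, generic sketch of the anomaly-detection idea (not the authors' system: autoencoder, good_images, and new_images are assumed to be a trained reconstruction model and image arrays), products can be scored by reconstruction error and rejected above a threshold calibrated on known-good samples:

    import numpy as np

    def anomaly_scores(autoencoder, images):
        # Per-image mean squared reconstruction error; high error suggests a defect.
        recon = autoencoder.predict(images)
        return np.mean((images - recon) ** 2, axis=(1, 2, 3))

    # Calibrate a threshold on known-good product, then flag new items above it.
    threshold = np.percentile(anomaly_scores(autoencoder, good_images), 99)
    reject = anomaly_scores(autoencoder, new_images) > threshold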
A Hands-free Natural User Interface (NUI) for AR/VR Head-Mounted Displays Exploiting Wearer’s Facial Gestures
Jaekwang Cha · Shiho Kim · Jinhyuk Kim
The demonstration presents interactions between a user wearing a head-mounted display (HMD) and an augmented reality (AR) environment, using our state-of-the-art hands-free user interface (UI) device, which captures the user's facial gestures as input signals. AR systems used in complex settings, such as surgery or work in dangerous environments, require a hands-free UI because users must keep using their hands during operation. Moreover, a hands-free UI improves the user experience (UX) not only in such specialized settings but also in everyday use of AR and virtual reality (VR). Despite the demand for interface devices suited to the HMD environment, no interface as well-established as the keyboard and mouse for the PC or the touch screen for the smartphone has yet emerged. The objective of our demo is to give attendees a hands-free AR UI experience and to introduce the benefits of a hands-free interface when using an AR HMD. In the demo, attendees can deliver commands to the system through a wink gesture instead of today's common HMD input interfaces, such as hand-held remote controllers or HMD buttons, which interfere with immersion. The wink acts like a mouse click in the demonstrated AR world. The user's facial gestures are automatically mapped to commands by deep neural networks. The proposed UI system is well suited to developing various natural user interfaces (NUIs) for AR/VR HMD environments because the sensing mechanism does not encumber the user and leaves the hands free.
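A minimal sketch of the gesture-to-command mapping described above (the gesture labels, classifier, and command names are hypothetical; the authors' sensing hardware and network are not described in code):

    # Hypothetical mapping from a gesture classifier's output to UI commands.
    GESTURE_TO_COMMAND = {"wink": "click", "neutral": None}

    def handle_frame(classifier, sensor_frame):
        gesture = classifier.predict(sensor_frame)  # assumed trained DNN classifier
        return GESTURE_TO_COMMAND.get(gesture)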
A model-agnostic web interface for interactive music composition by inpainting
Gaëtan Hadjeres · Théis Bazin · Ashis Pati
We present a web-based interface that allows users to compose symbolic music interactively using generative models for music. We strongly believe that such models only reveal their potential when used by artists and creators. While generative models for music have been around for a while, the design of interactive interfaces for music creators is only burgeoning. We contribute to this emerging area by providing a general web interface for many music generation models, so that researchers in the domain can easily test and promote their work. We hope that the present work will contribute to making A.I.-assisted composition accessible to a wider audience, from non-musicians to professional musicians. This work is a concrete application of music inpainting as a creative tool and could additionally be of interest to researchers for testing and evaluating their models. We show how this system (generative model + interface) can be used with different inpainting algorithms in an actual music production environment. The key elements of novelty are: (a) an easy-to-use and intuitive interface for users, (b) an easy-to-plug interface for researchers, allowing them to explore the potential of their music generation algorithms, (c) a web-based and model-agnostic framework, (d) integration of existing music inpainting algorithms, (e) a novel inpainting algorithm for folk music, (f) novel paradigms for A.I.-assisted music composition and live performance, and (g) integration into professional music production environments.
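A minimal sketch of the model-agnostic plug-in idea (the web framework, endpoint, and payload format here are our assumptions, not the authors' actual API): any generative model that exposes an inpainting function over a symbolic score and a selected region can sit behind the same interface:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def inpaint(score, region):
        # Placeholder: a real generative model would regenerate the selected region.
        return score

    @app.route("/inpaint", methods=["POST"])
    def inpaint_endpoint():
        payload = request.get_json()
        return jsonify({"score": inpaint(payload["score"], payload["region"])})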
A machine learning environment to determine novel malaria policies
Oliver Bent · Sekou Remy · Nelson Bore
The research and development of new tools and strategies in the fight against malaria already uses resources, data, and computation spread across innumerable institutions and individuals. Whether the objective is drug discovery or informing intervention policy, these efforts present common requirements, and such threads may be interwoven to achieve common goals towards malaria eradication. This unifying influence may be the technology of Artificial Intelligence (AI), helping to tie together different efforts and necessitating Novel Exploration Techniques for scientific discovery and an Infrastructure for Research at Scale.