Workshop
CiML 2019: Machine Learning Competitions for All
Adrienne Mendrik · Wei-Wei Tu · Isabelle Guyon · Evelyne Viegas · Ming LI

Fri Dec 13 08:00 AM -- 06:00 PM (PST) @ West 215 + 216
Event URL: http://ciml.chalearn.org/

Challenges in machine learning and data science are open online competitions that address problems by providing datasets or simulated environments. They measure the performance of machine learning algorithms with respect to a given problem. The playful nature of challenges naturally attracts students, making challenges a great teaching resource. Beyond their use as educational tools, challenges have a role to play in the democratization of AI and machine learning: they function as cost-effective problem-solving tools and encourage the development of reusable problem templates and open-sourced solutions. At present, however, the geographic and sociological distribution of challenge participants and organizers is heavily skewed. While recent successes in machine learning have raised high hopes, there is growing concern that the societal and economic benefits might increasingly be concentrated in the power and under the control of a few.

CiML (Challenges in Machine Learning) is a forum that brings together workshop organizers, platform providers, and participants to discuss best practices in challenge organization, as well as new methods and application opportunities for designing high-impact challenges. Following the success of previous years' workshops, we will reconvene and discuss new opportunities for broadening our community.

For this sixth edition of the CiML workshop at NeurIPS, our objective is twofold: (1) to enlarge the community and foster diversity among participants and organizers; and (2) to promote the organization of challenges for the benefit of more diverse communities.

The workshop provides room for discussion of these topics and aims to bring together potential partners to organize such challenges and stimulate "machine learning for good", i.e. the organization of challenges for the benefit of society. We have invited prominent speakers who have experience in this domain.

Fri 8:00 a.m. - 8:15 a.m.
Welcome and Opening Remarks (Opening)
Adrienne Mendrik, Wei-Wei Tu, Isabelle Guyon, Evelyne Viegas, Ming LI
Fri 8:15 a.m. - 9:00 a.m.

"AI for Good" efforts (e.g., applications work in sustainability, education, health, financial inclusion, etc.) have demonstrated the capacity to simultaneously advance intelligent system research and the greater good. Unfortunately, the majority of research that could find motivation in real-world "good" problems still center on problems with industrial or toy problem performance baselines.

Competitions can serve as an important shaping reward for steering academia towards research that is simultaneously impactful on our state of knowledge and the state of the world. This talk covers three aspects of AI for Good competitions. First, we survey current efforts within the AI for Good application space as a means of identifying current and future opportunities. Next we discuss how more qualitative notions of "Good" can be used as benchmarks in addition to more quantitative competition objective functions. Finally, we will provide notes on building coalitions of domain experts to develop and guide socially-impactful competitions in machine learning.

Amir Banifatemi
Fri 9:00 a.m. - 9:45 a.m.

In a typical machine learning competition or shared task, success is measured in terms of systems' ability to reproduce gold-standard labels. The potential impact of the systems being developed on stakeholder populations, if considered at all, is studied separately from system "performance". Given the tight train-eval cycle of both shared tasks and system development in general, we argue that making disparate impact on vulnerable populations visible in dataset and metric design will be key to making the potential for such impact present and salient to developers. We see this as an effective way to promote the development of machine learning technology that is helpful for people, especially those who have been subject to marginalization. This talk will explore how to develop such shared tasks, considering task choice, stakeholder community input, and annotation and metric design desiderata.
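One concrete way to surface disparate impact on a leaderboard, sketched below with assumed group labels and a toy metric (this is an illustration of disaggregated evaluation, not the speakers' proposal), is to report performance per stakeholder group and score systems by their worst-served group:

```python
import numpy as np


def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy disaggregated by a (hypothetical) stakeholder-group label."""
    return {
        g: float(np.mean(y_true[groups == g] == y_pred[groups == g]))
        for g in np.unique(groups)
    }


def worst_group_score(y_true, y_pred, groups):
    """Leaderboard metric that makes disparate impact visible:
    a system is only as good as its performance on the worst-served group."""
    return min(per_group_accuracy(y_true, y_pred, groups).values())


# Toy example: an overall accuracy of 0.5 hides a stark per-group difference.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(per_group_accuracy(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.0}
print(worst_group_score(y_true, y_pred, groups))   # 0.0
```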

Joint work with Hal Daumé III, University of Maryland, Bernease Herman, University of Washington, and Brandeis Marshall, Spelman College.

Emily M. Bender
Fri 9:45 a.m. - 10:30 a.m.
Coffee Break (Break)
Fri 10:30 a.m. - 11:15 a.m.

The current AI landscape in Africa mainly focuses on capacity building. The ongoing efforts to strengthen AI capacity in Africa are organized as summer schools, workshops, meetups, competitions, and one long-term program at the Master's level. The main initiatives driving the AI capacity-building agenda in Africa include a) Deep Learning Indaba, b) Data Science Africa, c) Data Science Nigeria, d) Nairobi Women in Machine Learning and Data Science, e) Zindi and f) The African Master's in Machine Intelligence (AMMI) at AIMS. The talk will summarize our experience with the low participation of African AI developers in machine learning competitions and our recommendations to address the current challenges.

Dina Machuve
Fri 11:15 a.m. - 11:30 a.m.

We present a novel format of machine learning competition in which a user submits code that generates images from training samples; the code then runs on Kaggle, produces dog images, and the user receives a score for the generated content based on (1) image quality, (2) image diversity, and (3) a memorization penalty. This style of competition targets the use of Generative Adversarial Networks (GANs) [4], but is open to all generative models. Our implementation addresses overfitting by incorporating two different pre-trained neural networks, as well as two separate "ground truth" image datasets, for the public and private leaderboards. We also use an enclosed compute environment to prevent submission of non-generated images. In this paper, we describe both the algorithmic and system design of our competition and share lessons learned from running it [6] in July 2019, with more than 900 participating teams and over 37,000 code submissions received.
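As a rough sketch of how such a score can be assembled (this is illustrative only, under assumed feature extractors and thresholds, and is not the competition's actual metric or code), an FID-style quality/diversity measure computed on pre-trained-network features can be inflated by a memorization penalty whenever generated images lie too close to the training set:

```python
import numpy as np
from scipy import linalg


def fid(real_feats, gen_feats):
    """Frechet distance between Gaussians fitted to two sets of features."""
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))


def memorization_distance(train_feats, gen_feats):
    """Mean, over generated images, of the minimum cosine distance to any
    training image; small values suggest images were copied."""
    t = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    g = gen_feats / np.linalg.norm(gen_feats, axis=1, keepdims=True)
    cos_dist = 1.0 - g @ t.T          # shape: (n_generated, n_train)
    return float(cos_dist.min(axis=1).mean())


def penalized_score(train_feats, gen_feats, threshold=0.1):
    """Lower is better; the FID is inflated when memorization is detected.
    The threshold value is an assumption chosen for illustration."""
    d = memorization_distance(train_feats, gen_feats)
    penalty = max(min(d / threshold, 1.0), 1e-6)   # 1.0 means no penalty
    return fid(train_feats, gen_feats) / penalty
```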

Wendy Kan, Phil Culliton
Fri 11:30 a.m. - 11:45 a.m.

We present the results of the first edition, as well as some perspectives for a potential next edition, of the "Learning To Run a Power Network" (L2RPN) competition, which tests the potential of reinforcement learning to solve a real-world problem of great practical importance: controlling power transportation in power grids while keeping people and equipment safe.

Benjamin Donnot
Fri 11:45 a.m. - 12:00 p.m.

Despite recent breakthroughs, the ability of deep learning and reinforcement learning to outperform traditional approaches to control physically embodied robotic agents remains largely unproven. To help bridge this gap, we have developed the “AI Driving Olympics” (AI-DO), a competition with the objective of evaluating the state-of-the-art in machine learning and artificial intelligence for mobile robotics. Based on the simple and well-specified autonomous driving and navigation environment called “Duckietown,” AI-DO includes a series of tasks of increasing complexity—from simple lane-following to fleet management. For each task, we provide tools for competitors to use in the form of simulators, data logs, code templates, baseline implementations, and low-cost access to robotic hardware. We evaluate submissions in simulation online, on standardized hardware environments, and finally at the competition events. We have held successful AI-DO competitions at NeurIPS 2018 and ICRA 2019, and will be holding AI-DO 3 at NeurIPS 2020. Together, these competitions highlight the need for better benchmarks, which are lacking in robotics, as well as improved mechanisms to bridge the gap between simulation and reality.

Matthew Walter
Fri 12:00 p.m. - 12:15 p.m.
Conclusion on TrackML, a Particle Physics Tracking Machine Learning Challenge Combining Accuracy and Inference Speed (Talk)
David Rousseau, Jean-Roch Vlimant
Fri 12:15 p.m. - 2:00 p.m.

Accepted Posters

Kandinsky Patterns: An open toolbox for creating explainable machine learning challenges Heimo Muller · Andreas Holzinger

MOCA: An Unsupervised Algorithm for Optimal Aggregation of Challenge Submissions Robert Vogel · Mehmet Eren Ahsen · Gustavo A. Stolovitzky

FDL: Mission Support Challenge Luís F. Simões · Ben Day · Vinutha M. Shreenath · Callum Wilson

From data challenges to collaborative gig science. Coopetitive research process and platform Andrey Ustyuzhanin · Mikhail Belous · Leyla Khatbullina · Giles Strong

Smart(er) Machine Learning for Practitioners Prabhu Pradhan

Improving Reproducibility of Benchmarks Xavier Bouthillier

Guaranteeing Reproducibility in Deep Learning Competitions Brandon Houghton

Organizing crowd-sourced AI challenges in enterprise environments: opportunities and challenges Mahtab Mirmomeni · Isabell Kiral · Subhrajit Roy · Todd Mummert · Alan Braz · Jason Tsay · Jianbin Tang · Umar Asif · Thomas Schaffter · Eren Mehmet · Bruno De Assis Marques · Stefan Maetschke · Rania Khalaf · Michal Rosen-Zvi · John Cohn · Gustavo Stolovitzky · Stefan Harrer

WikiCities: a Feature Engineering Educational Resource Pablo Duboue

Reinforcement Learning Meets Information Seeking: Dynamic Search Challenge Zhiwen Tang · Grace Hui Yang

AI Journey 2019: School Tests Solving Competition Alexey Natekin · Peter Romov · Valentin Malykh

A BIRDSAI View for Conservation Elizabeth Bondi · Milind Tambe · Raghav Jain · Palash Aggrawal · Saket Anand · Robert Hannaford · Ashish Kapoor · Jim Piavis · Shital Shah · Lucas Joppa · Bistra Dilkina

Gustavo Stolovitzky, Prabhu Pradhan, Pablo Duboue, Zhiwen Tang, Aleksei Natekin, Elizabeth Bondi, Xavier Bouthillier, Stephanie Milani, Heimo Müller, Andreas T. Holzinger, Stefan Harrer, Ben Day, Andrey Ustyuzhanin, William Guss, Mahtab Mirmomeni
Fri 2:00 p.m. - 2:45 p.m.

The typical setup in machine learning competitions is to provide one or more datasets and a performance metric, leaving it entirely up to participants which approach to use, how to engineer better features, whether and how to pretrain models on related data, how to tune hyperparameters, how to combine multiple models in an ensemble, etc. The fact that work on each of these components often leads to substantial improvements has several consequences: (1) amongst several skilled teams, the one with the most manpower and engineering drive often wins; (2) it is often unclear why one entry performs better than another one; and (3) scientific insights remain limited.

Based on my experience both participating in several challenges and organizing some, I will propose a new competition design that instead emphasizes scientific insight by dividing the various ways in which teams could improve performance into (largely orthogonal) modular components, each of which defines its own competition. For example, one could run a competition focusing only on effective hyperparameter tuning of a given pipeline (across private datasets). With the same code base and datasets, one could likewise run a competition focusing only on finding better neural architectures, or only better preprocessing methods, or only a better training pipeline, or only better pre-training methods, etc. One could also run several of these competitions in parallel, hot-swapping better components found in one competition into the others. I will argue that the result would likely be substantially more valuable in terms of scientific insight than traditional competitions, and may even lead to better final performance.
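A minimal sketch of what one such component competition could look like, assuming a fixed organizer-owned pipeline where participants may only contribute the hyperparameter-tuning strategy (the interface and names below are illustrative, not an existing platform's API):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer, load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score


def organizer_pipeline(params, X, y):
    """Fixed pipeline owned by the organizers; only hyperparameters vary."""
    model = RandomForestClassifier(**params, random_state=0)
    return cross_val_score(model, X, y, cv=3).mean()


def random_search_tuner(evaluate, budget=20):
    """A participant's submission: only the tuning strategy is theirs."""
    rng = np.random.default_rng(0)
    best_params, best_score = None, -np.inf
    for _ in range(budget):
        params = {"n_estimators": int(rng.integers(10, 200)),
                  "max_depth": int(rng.integers(2, 16))}
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params


if __name__ == "__main__":
    # The organizer scores each tuner on datasets the participant never sees.
    for loader in (load_breast_cancer, load_digits):
        X, y = loader(return_X_y=True)
        best = random_search_tuner(lambda p: organizer_pipeline(p, X, y))
        print(loader.__name__, best, round(organizer_pipeline(best, X, y), 3))
```

Because the pipeline, data splits, and evaluation budget are held fixed, any leaderboard difference is attributable to the tuning strategy alone, which is exactly the kind of scientific insight the talk argues for.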

Frank Hutter
Fri 2:45 p.m. - 3:00 p.m.

Over the past few years, we have explored the benefits of involving students both in organizing and in participating in challenges as a pedagogical tool, as part of an international collaboration. Engaging in the design and resolution of a competition can be seen as a hands-on means of learning proper design and analysis of experiments and gaining a deeper understanding of other aspects of machine learning. Graduate students at University Paris-Sud (Paris, France) create a challenge end-to-end as a class project, from defining the research problem, collecting or formatting data, and creating a starting kit, to implementing and testing the website. The application domains and types of data are extremely diverse: medicine, ecology, marketing, computer vision, recommendation, text processing, etc. The challenges thus created are then used as class projects for undergraduate students who have to solve them, both at University Paris-Sud and at Rensselaer Polytechnic Institute (RPI, New York, USA), providing rich learning experiences at scale. New this year, students are creating challenges motivated by “AI for good” and will produce re-usable templates to inspire others to create challenges for the benefit of humanity.

Adrien Pavao
Fri 3:00 p.m. - 3:15 p.m.

Data competitions often rely on the physical distribution of data to challenge participants, a significant limitation given that much data is proprietary, sensitive, and often non-shareable. To address this, the DREAM Challenges have advanced a challenge framework called model-to-data (MTD), which requires participants to submit re-runnable algorithms instead of model predictions. The DREAM organization has successfully completed multiple MTD-based challenges and is expanding this approach to unlock highly sensitive and non-distributable human data for use in biomedical data challenges.
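To make the model-to-data idea concrete, here is a minimal sketch of the kind of contract a submission might implement; the names and interface are hypothetical, not the DREAM platform's actual API. The organizer runs the participant's code inside a sealed environment and releases only the resulting score:

```python
from abc import ABC, abstractmethod

import numpy as np


class MTDSubmission(ABC):
    """Hypothetical contract: participants ship code but never see the data."""

    @abstractmethod
    def fit(self, X_train: np.ndarray, y_train: np.ndarray) -> None:
        """Train on data mounted only inside the secure environment."""

    @abstractmethod
    def predict(self, X_test: np.ndarray) -> np.ndarray:
        """Return predictions; only the resulting score leaves the enclave."""


class MeanBaseline(MTDSubmission):
    """Trivial example submission: always predicts the training mean."""

    def fit(self, X_train, y_train):
        self.mean_ = float(np.mean(y_train))

    def predict(self, X_test):
        return np.full(len(X_test), self.mean_)


def organizer_harness(submission, X_tr, y_tr, X_te, y_te):
    """Organizer-side harness: runs the code, returns a single scalar."""
    submission.fit(X_tr, y_tr)
    preds = submission.predict(X_te)
    return float(np.mean((preds - y_te) ** 2))  # e.g. mean squared error


# Organizer-side usage on synthetic stand-in data:
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X[:, 0] + 0.1 * rng.normal(size=100)
print(organizer_harness(MeanBaseline(), X[:80], y[:80], X[80:], y[80:]))
```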

Justin Guinney
Fri 3:15 p.m. - 3:30 p.m.
The Deep Learning Epilepsy Detection Challenge: Design, Implementation, and Test of a New Crowd-Sourced AI Challenge Ecosystem (Talk)
Isabell Kiral
Fri 3:30 p.m. - 4:15 p.m.
Coffee Break (Break)
Fri 4:15 p.m. - 6:00 p.m.

“Open Space” is a technique for running meetings in which the participants create and manage the agenda themselves. Participants can propose ideas that address the Open Space topic; these are grouped into sessions that all other participants can join to brainstorm. After the Open Space, we will collect all the ideas and post them on the CiML website.

Adrienne Mendrik, Isabelle Guyon, Wei-Wei Tu, Evelyne Viegas, Ming LI

Author Information

Adrienne Mendrik (Netherlands eScience Center)
Wei-Wei Tu (4Paradigm Inc.)
Isabelle Guyon (UPSud, INRIA, University Paris-Saclay and ChaLearn)
Evelyne Viegas (Microsoft Research)
Ming LI (Nanjing University)
