


Workshops
Bastian Rieck · Frederic Chazal · Smita Krishnaswamy · Roland Kwitt · Karthikeyan Natesan Ramamurthy · Yuhei Umeda · Guy Wolf

The last decade saw an enormous boost in the field of computational topology: methods and concepts from algebraic and differential topology, formerly confined to the realm of pure mathematics, have demonstrated their utility in numerous areas such as computational biology, personalised medicine, materials science, and time-dependent data analysis, to name a few.

The newly emerging domain comprising topology-based techniques is often referred to as topological data analysis (TDA). Beyond their applications in the aforementioned areas, TDA methods have also proven effective in supporting, enhancing, and augmenting both classical machine learning and deep learning models.

We believe that it is time to bring together theorists and practitioners in a creative environment to discuss the goals beyond the currently-known bounds of TDA. We want to start a conversation between experts, non-experts, and users of TDA methods to debate the next steps the field should take. We also want to disseminate methods to a broader audience and demonstrate how easy the integration of topological concepts into existing methods can be.
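To make that integration concrete, here is a minimal sketch of the simplest TDA computation: the death times of 0-dimensional features (connected components) in a Vietoris–Rips filtration, computed with a Kruskal-style union-find. The function name `zero_dim_persistence` is ours, not from any TDA library; NumPy is assumed only for distances.

```python
import numpy as np

def zero_dim_persistence(points):
    """Death times of 0-dimensional features (connected components) in a
    Vietoris-Rips filtration: each time two components merge at scale d,
    one feature dies at d. Illustrative sketch, not a library function."""
    n = len(points)
    # candidate edges of the filtration, sorted by length
    edges = sorted(
        (np.linalg.norm(points[i] - points[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)  # a component dies at this scale
    return deaths  # n - 1 deaths; one component persists forever

# two tight clusters: two short-lived features, one long-lived merge
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
deaths = zero_dim_persistence(pts)
```

Libraries such as GUDHI or Ripser compute full persistence diagrams in higher dimensions far more efficiently; this sketch only conveys the idea of features appearing and dying along a filtration.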

Important links:

- Gather.Town (for poster sessions)
- Rocket.Chat (for asking questions)
- Slack (for asking questions)

Borja Balle · James Bell · Aurélien Bellet · Kamalika Chaudhuri · Adria Gascon · Antti Honkela · Antti Koskela · Casey Meehan · Olga Ohrimenko · Mi Jung Park · Mariana Raykova · Mary Anne Smart · Yu-Xiang Wang · Adrian Weller

This one-day workshop focuses on privacy-preserving techniques for machine learning and disclosure in large-scale data analysis, both in the distributed and centralized settings, and on scenarios that highlight the importance and need for these techniques (e.g., via privacy attacks). There is growing interest from the Machine Learning (ML) community in leveraging cryptographic techniques such as Multi-Party Computation (MPC) and Homomorphic Encryption (HE) for privacy-preserving training and inference, as well as Differential Privacy (DP) for disclosure. Simultaneously, the systems security and cryptography community has proposed various secure frameworks for ML. We encourage both theory and application-oriented submissions exploring a range of approaches listed below. Additionally, given the tension between the adoption of machine learning technologies and ethical, technical and regulatory issues about privacy, as highlighted during the COVID-19 pandemic, we invite submissions for the special track on this topic.
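As a taste of the DP side of the workshop, here is a minimal sketch of the classic Laplace mechanism, which releases a numeric query result with epsilon-differential privacy by adding noise calibrated to the query's sensitivity. The function name and sampling-by-inversion are ours for illustration.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Laplace mechanism sketch: add Laplace(0, sensitivity/epsilon)
    noise to a query result to obtain epsilon-differential privacy.
    Noise is drawn via the inverse CDF of the Laplace distribution."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5                # uniform in [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# e.g., privately release a count; counting queries have sensitivity 1
noisy_count = laplace_mechanism(true_value=100, sensitivity=1.0, epsilon=0.5)
```

The released value is unbiased, and smaller epsilon (stronger privacy) yields noisier answers; production systems would also track the privacy budget across repeated queries.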

Jane Wang · Joaquin Vanschoren · Erin Grant · Jonathan Richard Schwarz · Francesco Visin · Jeff Clune · Roberto Calandra

How to join the virtual workshop: The 2020 Workshop on Meta-Learning will be a series of streamed pre-recorded talks + live question-and-answer (Q&A) periods, and poster sessions on Gather.Town. You can participate by:
* Accessing the livestream on our NeurIPS.cc virtual workshop page - likely this page!
* Asking questions to the speakers and panelists on Sli.do, on the MetaLearn 2020 website.
* Joining the Zoom to message questions to the moderator during the panel discussion, also from the NeurIPS.cc virtual workshop page.
* Joining the poster sessions on Gather.Town (you can find the list of papers and their virtual placement for each session on the MetaLearn 2020 website):
  * Session 1;
  * Session 2;
  * Session 3.
* Chatting with us and other participants on the MetaLearn 2020 Rocket.Chat!
* Entering panel discussion questions in this sli.do!


Focus of the workshop: Recent years have seen rapid progress in meta-learning methods, which transfer knowledge across tasks and domains to learn new tasks more efficiently, optimize the learning process itself, and even generate new learning methods from scratch. Meta-learning can be seen as the logical conclusion of the arc that machine learning …

David Dao · Evan Sherwin · Priya Donti · Lauren Kuntz · Lynn Kaack · Yumna Yusuf · David Rolnick · Catherine Nakalembe · Claire Monteleoni · Yoshua Bengio

Climate change is one of the greatest problems society has ever faced, with increasingly severe consequences for humanity as natural disasters multiply, sea levels rise, and ecosystems falter. Since climate change is a complex issue, action takes many forms, from designing smart electric grids to tracking greenhouse gas emissions through satellite imagery. While no silver bullet, machine learning can be an invaluable tool in fighting climate change via a wide array of applications and techniques. These applications require algorithmic innovations in machine learning and close collaboration with diverse fields and practitioners. This workshop is intended as a forum for those in the machine learning community who wish to help tackle climate change. Building on our past workshops on this topic, this workshop aims to especially emphasize the pipeline to impact, through conversations about machine learning with decision-makers and other global leaders in implementing climate change strategies. The all-virtual format of NeurIPS 2020 provides a special opportunity to foster cross-pollination between researchers in machine learning and experts in complementary fields.

Courtney Paquette · Mark Schmidt · Sebastian Stich · Quanquan Gu · Martin Takac

Optimization lies at the heart of many machine learning algorithms and enjoys great interest in our community. Indeed, this intimate relation of optimization with ML is the key motivation for the OPT series of workshops.

Looking back over the past decade, a strong trend is apparent: The intersection of OPT and ML has grown to the point that now cutting-edge advances in optimization often arise from the ML community. The distinctive feature of optimization within ML is its departure from textbook approaches, in particular, its focus on a different set of goals driven by "big-data, nonconvexity, and high-dimensions," where both theory and implementation are crucial.

We wish to use OPT 2020 as a platform to foster discussion, discovery, and dissemination of the state-of-the-art in optimization as relevant to machine learning. And well beyond that: as a platform to identify new directions and challenges that will drive future research, and continue to build the OPT+ML joint research community.

Invited Speakers
Volkan Cevher (EPFL)
Michael Friedlander (UBC)
Donald Goldfarb (Columbia)
Andreas Krause (ETH, Zurich)
Suvrit Sra (MIT)
Rachel Ward (UT Austin)
Ashia Wilson (MSR)
Tong Zhang (HKUST)

Instructions
Please join us in gather.town for all breaks and poster sessions (Click "Open Link" …

Kumar Garg · Neil Heffernan · Kayla Meyers

This workshop will explore how advances in machine learning could be applied to improve educational outcomes.

Such an exploration is timely given the growth of online learning platforms, which have the potential to serve as testbeds and data sources; a growing pool of CS talent hungry to apply their skills toward social impact; and the chaotic global shift to online learning during COVID-19 and the many gaps it has exposed.

The opportunities for machine learning in education are substantial, from using NLP to power automated feedback on the large amount of student work that currently goes unreviewed, to advances in voice recognition that diagnose errors by early readers.

Similar to the rise of computational biology, recognizing and realizing these opportunities will require a community of researchers and practitioners that is bilingual: technically adept at the cutting-edge advances in machine learning, and conversant in the most pressing challenges and opportunities in education.

With senior representatives from industry, academia, government, and education, this workshop is a step in that community-building process, with a focus on three things:
1. identifying what learning platforms are of a size and instrumentation that the ML community can leverage,
2. building a community of experts …

Joey Bose · Emile Mathieu · Charline Le Lan · Ines Chami · Frederic Sala · Christopher De Sa · Maximilian Nickel · Christopher Ré · Will Hamilton

Recent years have seen a surge in research at the intersection of differential geometry and deep learning, including techniques for stochastic optimization on curved spaces (e.g., hyperbolic or spherical manifolds), learning embeddings for non-Euclidean data, and generative modeling on Riemannian manifolds. Insights from differential geometry have led to new state-of-the-art approaches to modeling complex real-world data, such as graphs with hierarchical structure, 3D medical data, and meshes.
Thus, it is of critical importance to understand, from a geometric lens, the natural invariances, equivariances, and symmetries that reside within data.

To support the burgeoning interest in differential geometry within deep learning, the primary goal of this workshop is to facilitate community building and to work toward identifying the key challenges that distinguish this area from standard deep learning, along with techniques to overcome them. With many new researchers beginning projects in this area, we hope to bring them together to consolidate this fast-growing area into a healthy and vibrant subfield. In particular, we aim to strongly promote novel and exciting applications of differential geometry for deep learning, with an emphasis on bridging theory to practice, which is reflected in our choices of invited speakers, which …

Elizabeth Wood · Debora Marks · Ray Jones · Adji Bousso Dieng · Alan Aspuru-Guzik · Anshul Kundaje · Barbara Engelhardt · Chang Liu · Edward Boyden · Kresten Lindorff-Larsen · Mor Nitzan · Smita Krishnaswamy · Wouter Boomsma · Yixin Wang · David Van Valen · Orr Ashenberg

This workshop is designed to bring together trainees and experts in machine learning with those at the very forefront of biological research today. Our full-day workshop will advance the joint project of the CS and biology communities with the goal of "Learning Meaningful Representations of Life" (LMRL), emphasizing interpretable representation learning of structure and principle. As last year, the workshop will be oriented around four layers of biological abstraction: molecule, cell, synthetic biology, and phenotypes.

Mapping structural molecular detail to organismal phenotype and function; predicting emergent effects of human genetic variation; and designing novel interventions including prevention, diagnostics, therapeutics, and the development of new synthetic biotechnologies for causal investigations are just some of the challenges that hinge on appropriate formal structures to make them accessible to the broadest possible community of computer scientists, statisticians, and their tools.

Xiao-Yang Liu · Qibin Zhao · Jacob Biamonte · Cesar F Caiafa · Paul Pu Liang · Nadav Cohen · Stefan Leichenauer

Quantum tensor networks in machine learning (QTNML) are envisioned to have great potential to advance AI technologies. Quantum machine learning promises quantum advantages (potentially exponential speedups in training, quadratic speedup in convergence, etc.) over classical machine learning, while tensor networks provide powerful simulations of quantum machine learning algorithms on classical computers. As a rapidly growing interdisciplinary area, QTNML may serve as an amplifier for computational intelligence, a transformer for machine learning innovations, and a propeller for AI industrialization.

Tensor networks, contracted networks of factor tensors, have arisen independently in several areas of science and engineering. Such networks appear in the description of physical processes, and an accompanying collection of numerical techniques has elevated quantum tensor networks into a variational model of machine learning. Underlying these algorithms is the compression of the high-dimensional data needed to represent quantum states of matter. These compression techniques have recently proven ripe for application to many traditional problems in deep learning. Quantum tensor networks have shown significant power in compactly representing deep neural networks, and in enabling their efficient training and theoretical understanding. More potential QTNML technologies are rapidly emerging, such as approximating probability functions and probabilistic graphical models. However, …
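To illustrate the tensor-network side, the following NumPy sketch implements TT-SVD, the sequential truncated-SVD construction of a tensor train (known in physics as a matrix product state). The function names are ours, not from a tensor-network library, and the truncation policy is the simplest possible (a fixed maximum rank).

```python
import numpy as np

def tensor_train(tensor, max_rank):
    """TT-SVD sketch: factor a d-way array into a chain of 3-way cores
    (a tensor train / matrix product state) by sequential truncated SVDs."""
    shape = tensor.shape
    cores, r_prev = [], 1
    mat = tensor
    for k in range(len(shape) - 1):
        mat = mat.reshape(r_prev * shape[k], -1)
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, S.size)                      # truncate the rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        mat = S[:r, None] * Vt[:r]                     # carry remainder right
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the train back into a dense tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])
```

With `max_rank` at least as large as the true TT-ranks the decomposition is exact; smaller values give the lossy compression that makes tensor trains attractive for representing weights and quantum states compactly.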

Nathalie Baracaldo · Yonatan Bisk · Avrim Blum · Michael Curry · John Dickerson · Micah Goldblum · Tom Goldstein · Bo Li · Avi Schwarzschild

Classical machine learning research has focused largely on models, optimizers, and computational challenges. As technical progress and hardware advancements ease these challenges, practitioners are now finding that the limitations and faults of their models are the result of their datasets. This is particularly true of deep networks, which often rely on huge datasets that are too large and unwieldy for domain experts to curate by hand. This workshop addresses issues in the following areas: data harvesting, dealing with the challenges and opportunities involved in creating and labeling massive datasets; data security, dealing with protecting datasets against risks of poisoning and backdoor attacks; policy, security, and privacy, dealing with the social, ethical, and regulatory issues involved in collecting large datasets, especially with regard to privacy; and data bias, related to the potential of biased datasets to result in biased models that harm members of certain groups. Dates and details can be found at securedata.lol

Stephanie Hyland · Allen Schmaltz · Charles Onu · Ehi Nosakhare · Emily Alsentzer · Irene Y Chen · Matthew McDermott · Subhrajit Roy · Benjamin Akera · Dani Kiyasseh · Fabian Falck · Griffin Adams · Ioana Bica · Oliver J Bear Don't Walk IV · Suproteem Sarkar · Stephen Pfohl · Andrew Beam · Brett Beaulieu-Jones · Danielle Belgrave · Tristan Naumann

The application of machine learning to healthcare is often characterised by the development of cutting-edge technology aiming to improve patient outcomes. By developing sophisticated models on high-quality datasets we hope to better diagnose, forecast, and otherwise characterise the health of individuals. At the same time, when we build tools which aim to assist highly-specialised caregivers, we limit the benefit of machine learning to only those who can access such care. The fragility of healthcare access both globally and locally prompts us to ask, “How can machine learning be used to help enable healthcare for all?” - the theme of the 2020 ML4H workshop.

Participants at the workshop will be exposed to new questions in machine learning for healthcare, and be prompted to reflect on how their work sits within larger healthcare systems. Given the growing community of researchers in machine learning for health, the workshop will provide an opportunity to discuss common challenges, share expertise, and potentially spark new research directions. By drawing in experts from adjacent disciplines such as public health, fairness, epidemiology, and clinical practice, we aim to further strengthen the interdisciplinarity of machine learning for health.

See our workshop for more information: https://ml4health.github.io/

Behnam Hedayatnia · Rahul Goel · Shereen Oraby · Abigail See · Chandra Khatri · Y-Lan Boureau · Alborz Geramifard · Marilyn Walker · Dilek Hakkani-Tur

Conversational interaction systems such as Amazon Alexa, Google Assistant, Apple Siri, and Microsoft Cortana have become very popular in recent years. Such systems have allowed users to interact with a wide variety of content on the web through a conversational interface. Research challenges such as the Dialogue System Technology Challenges, Dialogue Dodecathlon, Amazon Alexa Prize, and the Vision and Language Navigation task have continued to inspire research in conversational AI. These challenges have brought together researchers from different communities such as speech recognition, spoken language understanding, reinforcement learning, language generation, and multi-modal question answering.
Unlike other popular NLP tasks, dialogue frequently has humans in the loop, whether for evaluation, active learning, or online reward estimation. Through this workshop we aim to bring together researchers from academia and industry to discuss the challenges and opportunities in such human-in-the-loop setups. We hope that this sparks interesting discussions about conversational agents, interactive systems, and how we can use humans most effectively when building such setups. We will highlight areas such as human evaluation setups, reliability in human evaluation, human-in-the-loop training, interactive learning, and user modeling. We also highly encourage non-English dialogue systems in …

Luca Bertinetto · João Henriques · Samuel Albanie · Michela Paganini · Gul Varol

Machine learning research has benefited considerably from the adoption of standardised public benchmarks. In this workshop, we do not argue against the importance of these benchmarks, but rather against the current incentive system and its heavy reliance upon performance as a proxy for scientific progress. The status quo incentivises researchers to “beat the state of the art”, potentially at the expense of deep scientific understanding and rigorous experimental design. Since typically only positive results are rewarded, the negative results inevitably encountered during research are often omitted, allowing many other groups to unknowingly and wastefully repeat the same negative findings. Pre-registration is a publishing and reviewing model that aims to address these issues by changing the incentive system. A pre-registered paper is a regular paper that is submitted for peer-review without any experimental results, describing instead an experimental protocol to be followed after the paper is accepted. This implies that it is important for the authors to make compelling arguments from theory or past published evidence. As for reviewers, they must assess these arguments together with the quality of the experimental design, rather than comparing numeric results. In this workshop, we propose to conduct a full pilot study in pre-registration …

Krishna Murthy Jatavallabhula · Kelsey Allen · Victoria Dean · Johanna Hansen · Shuran Song · Florian Shkurti · Liam Paull · Derek Nowrouzezahrai · Josh Tenenbaum

“Differentiable programs” are parameterized programs that allow themselves to be rewritten by gradient-based optimization. They are ubiquitous in modern-day machine learning. Recently, explicitly encoding our knowledge of the rules of the world in the form of differentiable programs has become more popular. In particular, differentiable realizations of well-studied processes such as physics, rendering, projective geometry, and optimization, to name a few, have enabled the design of several novel learning techniques. For example, many approaches have been proposed for unsupervised learning of depth estimation from unlabeled videos. Differentiable 3D reconstruction pipelines have demonstrated the potential for task-driven representation learning. A number of differentiable rendering approaches have been shown to enable single-view 3D reconstruction and other inverse graphics tasks (without requiring any form of 3D supervision). Differentiable physics simulators are being built to perform physical parameter estimation from video or for model-predictive control. While these advances have largely occurred in isolation, recent efforts have attempted to bridge the gap between the aforementioned areas. Narrowing the gaps between these otherwise isolated disciplines holds tremendous potential to yield new research directions and solve long-standing problems, particularly in understanding and reasoning about the 3D world.
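The physical-parameter-estimation idea can be sketched in a few lines: below, a toy "simulator" for free fall, y(t) = 0.5·g·t², is differentiated by hand with respect to g, and gradient descent on a squared-error loss recovers gravity from observed positions. This stands in for a differentiable physics engine; the function name and hyperparameters are illustrative.

```python
def estimate_gravity(times, observed_y, lr=0.3, steps=100):
    """Recover g in the free-fall model y(t) = 0.5 * g * t**2 by gradient
    descent on mean squared error. The gradient is derived by hand here;
    a real differentiable simulator would provide it via autodiff."""
    g = 5.0  # initial guess
    n = len(times)
    for _ in range(steps):
        # dL/dg where L = (1/n) * sum((0.5*g*t^2 - y)^2)
        grad = sum(2.0 * (0.5 * g * t * t - y) * 0.5 * t * t
                   for t, y in zip(times, observed_y)) / n
        g -= lr * grad
    return g
```

Because the loss is quadratic in g, descent converges quickly here; the same pattern scales to simulators with many parameters once gradients come from automatic differentiation.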

Hence, we propose the “first workshop on differentiable computer vision, graphics, …

Abdelrahman Mohamed · Hung-yi Lee · Shinji Watanabe · Shang-Wen Li · Tara Sainath · Karen Livescu

There is a trend in the machine learning community to adopt self-supervised approaches to pre-train deep networks. Self-supervised learning utilizes proxy supervised learning tasks, for example, distinguishing parts of the input signal from distractors, or generating masked input segments conditioned on the unmasked ones, to obtain training data from unlabeled corpora. These approaches make it possible to use the tremendous amount of unlabeled data on the web to train large networks and solve complicated tasks. ELMo, BERT, and GPT in NLP are famous examples in this direction. Recently, self-supervised approaches for speech and audio processing have also been gaining attention. These approaches combine methods for utilizing no or partial labels, unpaired text and audio data, contextual text and video supervision, and signals from user interactions. Although self-supervised learning is an active research direction in speech and audio processing, current work is limited to a few problems such as automatic speech recognition, speaker identification, and speech translation, partially due to the diversity of modeling approaches across speech and audio processing problems. There is still much unexplored territory in this research direction.
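The masked-prediction proxy task mentioned above can be sketched concretely: from an unlabeled token sequence, hide a random fraction of positions and use the hidden tokens themselves as the training targets (BERT-style masking; the same recipe applies to audio frames). The function name, mask token, and rate are illustrative choices, not from any library.

```python
import random

def make_masked_examples(tokens, mask_rate=0.15, mask_token="<mask>", rng=random):
    """Turn an unlabeled sequence into a self-supervised training pair:
    masked input plus per-position targets (None = no loss at that position)."""
    inputs, targets = [], []
    for tok in tokens:
        if rng.random() < mask_rate:
            inputs.append(mask_token)
            targets.append(tok)        # the supervision comes for free
        else:
            inputs.append(tok)
            targets.append(None)       # unmasked positions contribute no loss
    return inputs, targets

sentence = "the quick brown fox jumps over the lazy dog".split()
masked_input, labels = make_masked_examples(sentence)
```

A model trained to fill in the masked positions learns contextual representations without any human labels, which is exactly what makes web-scale unlabeled corpora usable.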

This workshop will bring concentrated discussions on self-supervision for the field of speech and audio processing via several …

Biwei Huang · Sara Magliacane · Kun Zhang · Danielle Belgrave · Elias Bareinboim · Daniel Malinsky · Thomas Richardson · Christopher Meek · Peter Spirtes · Bernhard Schölkopf

Causality is a fundamental notion in science and engineering, and one of the fundamental problems in the field is how to find the causal structure or the underlying causal model. For instance, one focus of this workshop is on causal discovery, i.e., how can we discover causal structure over a set of variables from observational data with automated procedures? Another area of interest is how a causal perspective may help understand and solve advanced machine learning problems.

Recent years have seen impressive progress in theoretical and algorithmic developments of causal discovery from various types of data (e.g., from i.i.d. data, under distribution shifts or in nonstationary settings, under latent confounding or selection bias, or with missing data), as well as in practical applications (such as in neuroscience, climate, biology, and epidemiology). However, many practical issues, including confounding, the large scale of the data, the presence of measurement error, and complex causal mechanisms, are still to be properly addressed, to achieve reliable causal discovery in practice.

Moreover, causality-inspired machine learning (in the context of transfer learning, reinforcement learning, deep learning, etc.) leverages ideas from causality to improve generalization, robustness, interpretability, and sample efficiency and is attracting more and more …

Anima Anandkumar · Kyle Cranmer · Shirley Ho · Mr. Prabhat · Lenka Zdeborová · Atilim Gunes Baydin · Juan Carrasquilla · Adji Bousso Dieng · Karthik Kashinath · Gilles Louppe · Brian Nord · Michela Paganini · Savannah Thais

Machine learning methods have had great success in learning complex representations that enable them to make predictions about unobserved data. Physical sciences span problems and challenges at all scales in the universe: from finding exoplanets in trillions of sky pixels, to finding machine learning inspired solutions to the quantum many-body problem, to detecting anomalies in event streams from the Large Hadron Collider. Tackling a number of associated data-intensive tasks, including but not limited to segmentation, 3D computer vision, sequence modeling, causal reasoning, and efficient probabilistic inference, is critical for furthering scientific discovery. In addition to using machine learning models for scientific discovery, the ability to interpret what a model has learned is receiving an increasing amount of attention.

In this targeted workshop, we would like to bring together computer scientists, mathematicians and physical scientists who are interested in applying machine learning to various outstanding physical problems, in particular in inverse problems and approximating physical processes; understanding what the learned model really represents; and connecting tools and insights from physical sciences to the study of machine learning models. In particular, the workshop invites researchers to contribute papers that demonstrate cutting-edge progress in the application of machine learning techniques to real-world problems …

Tara Chklovski · Adrienne Mendrik · Amir Banifatemi · Gustavo Stolovitzky

For the eighth edition of the CiML (Challenges in Machine Learning) workshop at NeurIPS, our goals are to: 1) Increase diversity in the participant community in order to increase the quality of model predictions; 2) Identify and share best practices in building AI capability in vulnerable communities; 3) Celebrate pioneers from these communities who are modeling lifelong learning, curiosity and courage in learning how to use ML to address critical problems in their communities.

The workshop will provide concrete recommendations to the ML community on designing and implementing competitions that are more accessible to a broader public, and more effective in building long-term AI/ML capability.

The workshop will feature keynote speakers from ML, behavioral science and gender and development, interspersed with small group discussions around best practices in implementing ML competitions. We will invite submissions of 2-page extended abstracts on topics relating to machine learning competitions, with a special focus on methods of creating diverse datasets, strategies for addressing behavioral barriers to participation in ML competitions from underrepresented communities, and strategies for measuring the long-term impact of participation in an ML competition.

Suzanne Kite · Mattie Tesfaldet · J Khadijah Abdurahman · William Agnew · Elliot Creager · Agata Foryciarz · Raphael Gontijo Lopes · Pratyusha Kalluri · Marie-Therese Png · Manuel Sabin · Maria Skoularidou · Ramon Vilarino · Rose Wang · Sayash Kapoor · Micah Carroll

It has become increasingly clear in recent years that AI research, far from producing neutral tools, has been concentrating power in the hands of governments and companies and away from marginalized communities. Unfortunately, NeurIPS has lacked a venue explicitly dedicated to understanding and addressing the root of these problems. As Black feminist scholar Angela Davis famously said, "Radical simply means grasping things at the root." Resistance AI exposes the root problem of AI to be how technology is used to rearrange power in the world. AI researchers engaged in Resistance AI both resist AI that centralizes power into the hands of the few and dream up and build human/AI systems that put power in the hands of the people. This workshop will enable AI researchers in general, researchers engaged in Resistance AI, and marginalized communities in particular to reflect on AI-fueled inequity and co-create tactics for how to address this issue in our own work.

Logistics:
We will use the main/webinar Zoom + livestream for most events, with interactive events taking place on a separate auxiliary/breakout Zoom or gather.town. Please see our workshop site for details: https://sites.google.com/view/resistance-ai-neurips-20/schedule
See also our welcome doc here for further detail, including community guidelines …

Masha Itkina · Alex Bewley · Roberto Calandra · Igor Gilitschenski · Julien PEREZ · Ransalu Senanayake · Markus Wulfmeier · Vincent Vanhoucke

In this workshop, we aim to discuss the challenges and opportunities for machine learning research in the context of physical systems. This discussion involves the presentation of recent methods and the experience gained during their deployment on real-world platforms. Such deployment requires a significant degree of generalization: the real world is vastly more complex and diverse than fixed curated datasets and simulations. Deployed machine learning models must scale to this complexity, be able to adapt to novel situations, and recover from mistakes. Moreover, the workshop aims to further strengthen the ties between the robotics and machine learning communities by discussing how their respective recent directions result in new challenges, requirements, and opportunities for future research.

Following the success of previous robot learning workshops at NeurIPS, the goal of this workshop is to bring together a diverse set of scientists at various stages of their careers and foster interdisciplinary communication and discussion.
In contrast to previous robot learning workshops, which focused on applications of machine learning in robotics, this workshop extends the discussion to how real-world applications in robotics can trigger impactful new directions for the development of machine learning. For a more engaging …

Reinhard Heckel · Paul Hand · Richard Baraniuk · Lenka Zdeborová · Soheil Feizi

Learning-based methods, and in particular deep neural networks, have emerged as highly successful and universal tools for image and signal recovery and restoration. They achieve state-of-the-art results on tasks ranging from image denoising and image compression to image reconstruction from few and noisy measurements. They are starting to be used in important imaging technologies, for example in GE's newest computed tomography scanners and in the newest generation of the iPhone.

The field has a range of theoretical and practical questions that remain unanswered. In particular, learning and neural network-based approaches often lack the guarantees of traditional physics-based methods. Further, while superior on average, learning-based methods can make drastic reconstruction errors, such as hallucinating a tumor in an MRI reconstruction or turning a pixelated picture of Obama into a white male.

This virtual workshop aims at bringing together theoreticians and practitioners in order to chart out recent advances and discuss new directions in deep neural network-based approaches for solving inverse problems in the imaging sciences and beyond. NeurIPS, with its visibility and attendance by experts in machine learning, offers the ideal frame for this exchange of ideas. We will use this virtual format to make this topic accessible to a broader audience …

Rowan McAllister · Xinshuo Weng · Daniel Omeiza · Nick Rhinehart · Fisher Yu · German Ros · Vladlen Koltun

Welcome to the NeurIPS 2020 Workshop on Machine Learning for Autonomous Driving!

Autonomous vehicles (AVs) offer a rich source of high-impact research problems for the machine learning (ML) community, including perception, state estimation, probabilistic modeling, time series forecasting, gesture recognition, robustness guarantees, real-time constraints, user-machine communication, multi-agent planning, and intelligent infrastructure. Further, the interaction between ML subfields towards a common goal of autonomous driving can catalyze interesting inter-field discussions that spark new avenues of research, which this workshop aims to promote. As an application of ML, autonomous driving has the potential to greatly improve society by reducing road accidents, giving independence to those unable to drive, and even inspiring younger generations with tangible examples of ML-based technology clearly visible on local streets.

All are welcome to submit and/or attend! This will be the 5th NeurIPS workshop in this series. Previous workshops in 2016, 2017, 2018 and 2019 enjoyed wide participation from both academia and industry.

Hugo Jair Escalante · Katja Hofmann

First session of the competition program at NeurIPS 2020.

Machine learning competitions have grown in popularity and impact over the last decade, emerging as an effective means to advance the state of the art by posing well-structured, relevant, and challenging problems to the community at large. Motivated by a reward or merely the satisfaction of seeing their machine learning algorithm reach the top of a leaderboard, practitioners innovate, improve, and tune their approach before evaluating on a held-out dataset or environment. The competition track of NeurIPS has matured in 2020, its fourth year, with a considerable increase in both the number of challenges and the diversity of domains and topics. A total of 16 competitions are featured this year as part of the track, with 8 competitions associated with each of the two days. The list of competitions that are part of the program is available here:

https://neurips.cc/Conferences/2020/CompetitionTrack

Daria Baidakova · Fabio Casati · Alexey Drutsa · Dmitry Ustalov

Despite its obvious advantages, automation driven by machine learning and artificial intelligence carries pitfalls for the lives of millions of people: the disappearance of many well-established mass professions, and a reliance on labeled data produced by humans managed under an outdated model of full-time office work and pre-planned task types. Crowdsourcing can be considered an effective way to overcome these issues, since it gives task executors freedom in terms of place, time, and the type of task they want to work on. However, many potential participants in crowdsourcing processes hesitate to use this technology due to a series of doubts that have not been dispelled over the past decade.

This workshop brings together people studying research questions on

(a) quality and effectiveness in remote crowd work;
(b) fairness and quality of life at work, tackling issues such as fair task assignment, fair work conditions, and on providing opportunities for growth; and
(c) economic mechanisms that incentivize quality and effectiveness for requesters while maintaining a high level of quality and fairness for crowd performers (also known as workers).

Because quality, fairness and opportunities for crowd workers are central to our workshop, we will invite a diverse group of …

William Agnew · Rim Assouel · Michael Chang · Antonia Creswell · Eliza Kosoy · Aravind Rajeswaran · Sjoerd van Steenkiste

Recent advances in deep reinforcement learning and robotics have enabled agents to achieve superhuman performance on a variety of challenging games and learn complex manipulation tasks. While these results are very promising, several open problems remain. In order to function in real-world environments, learned policies must be both robust to input perturbations and able to rapidly generalize or adapt to novel situations. Moreover, to collaborate and live with humans in these environments, the goals and actions of embodied agents must be interpretable and compatible with human representations of knowledge. Hence, it is natural to study how humans perceive, learn, and plan so successfully, in order to build agents that are equally successful at solving real-world tasks.
There is much evidence to suggest that objects are a core level of abstraction at which humans perceive and understand the world [8]. Objects have the potential to provide a compact, causal, robust, and generalizable representation of the world. Recently, there have been many advancements in scene representation, allowing scenes to be represented by their constituent objects, rather than at the level of pixels. While these works have shown promising results, there is still a lack of agreement on how to best represent objects, …

Senthil Kumar · Cynthia Rudin · John Paisley · Isabelle Moulinier · C. Bayan Bruss · Eren K. · Susan Tibbs · Oluwatobi Olabiyi · Simona Gandrabur · Svitlana Vyetrenko · Kevin Compher

The financial services industry has unique needs for fairness when adopting artificial intelligence and machine learning (AI/ML). First and foremost, there are strong ethical reasons to ensure that models used for activities such as credit decisioning and lending are fair and unbiased, or that machine reliance does not cause humans to miss critical pieces of data. Then there are the regulatory requirements to actually prove that the models are unbiased and that they do not discriminate against certain groups.

Emerging techniques such as algorithmic credit scoring introduce new challenges. Traditionally, financial institutions have relied on a consumer’s past credit performance and transaction data to make lending decisions. But with the emergence of algorithmic credit scoring, lenders also use alternative data, such as data gleaned from social media, and this immediately raises questions around systemic biases inherent in models used to understand customer behavior.

We also need to pay careful attention to ways in which AI can not only be de-biased, but also play an active role in making financial services more accessible to those historically shut out due to prejudice and other social injustices.

The aim of this workshop is to bring together researchers from different disciplines …

Chhavi Yadav · Prabhu Pradhan · Jesse Dodge · Mayoore Jaiswal · Peter Henderson · Abhishek Gupta · Ryan Lowe · Jessica Forde · Joelle Pineau

The exponential growth of AI research has led to a flood of papers on arXiv, making it difficult to review the existing literature. Despite the huge demand, the proportion of survey and analysis papers published is very low, for reasons such as the lack of a venue and of incentives. Our workshop, ML-RSA, provides a platform and an incentive for writing such papers. It meets the need to take a step back, look at a sub-field as a whole, and evaluate actual progress. We will accept 3 types of papers: broad survey papers, meta-analyses, and retrospectives. Survey papers will mention and cluster different types of approaches, provide pros and cons, highlight good source code implementations and applications, and emphasize impactful literature. We expect this type of paper to provide a detailed investigation of the techniques and link together themes across multiple works. The main aim of these will be to organize techniques and lower the barrier to entry for newcomers. Meta-analyses, on the other hand, are forward-looking, aimed at providing critical insights on the current state of affairs of a sub-field and proposing new directions based on them. These are expected to be more than just an ablation study -- though an empirical analysis is encouraged as …

Pieter Abbeel · Chelsea Finn · Joelle Pineau · David Silver · Satinder Singh · Coline Devin · Misha Laskin · Kimin Lee · Janarthanan Rajendran · Vivek Veeriah

In recent years, the use of deep neural networks as function approximators has enabled researchers to extend reinforcement learning techniques to solve increasingly complex control tasks. The emerging field of deep reinforcement learning has led to remarkable empirical results in rich and varied domains like robotics, strategy games, and multiagent interactions. This workshop will bring together researchers working at the intersection of deep learning and reinforcement learning, and it will help interested researchers outside of the field gain a high-level view about the current state of the art and potential directions for future contributions.

Veronika Thost · Kartik Talamadupula · Vivek Srikumar · Chenwei Zhang · Josh Tenenbaum

Machine learning (ML) has seen a tremendous amount of recent success and has been applied in a variety of applications. However, it comes with several drawbacks, such as the need for large amounts of training data and the lack of explainability and verifiability of the results. In many domains, there is structured knowledge (e.g., from electronic health records, laws, clinical guidelines, or common sense knowledge) which can be leveraged for reasoning in an informed way (i.e., including the information encoded in the knowledge representation itself) in order to obtain high quality answers. Symbolic approaches for knowledge representation and reasoning (KRR) are less prominent today - mainly due to their lack of scalability - but their strength lies in the verifiable and interpretable reasoning that can be accomplished. The KR2ML workshop aims at the intersection of these two subfields of AI. It will shine a light on the synergies that (could/should) exist between KRR and ML, and will initiate a discussion about the key challenges in the field.

Byoung-Tak Zhang · Gary Marcus · Angelo Cangelosi · Pia Knoeferle · Klaus Obermayer · David Vernon · Chen Yu

Deep neural network models have shown remarkable performance in tasks such as visual object recognition, speech recognition, and autonomous robot control. We have seen continuous improvements throughout the years, which have led to these models surpassing human performance in a variety of tasks such as image classification, video games, and board games. However, the performance of deep learning models heavily relies on a massive amount of data, which requires huge time and effort to collect and label.

Recently, to overcome these weaknesses and limitations, attention has shifted towards machine learning paradigms such as semi-supervised learning, incremental learning, and meta-learning, which aim to be more data-efficient. However, these learning models still require a huge amount of data to achieve high performance on real-world problems. There have been only a few achievements or breakthroughs, especially in terms of the ability to grasp abstract concepts and to generalize problems.

In contrast, human babies gradually make sense of the environment through their experiences, a process known as learning by doing, without a large amount of labeled data. They actively engage with their surroundings and explore the world through their own interactions. They gradually acquire the abstract concept of objects and develop the ability …

Stephan Zheng · Alexander Trott · Annie Liang · Jamie Morgenstern · David Parkes · Nika Haghtalab

www.mlforeconomicpolicy.com
mlforeconomicpolicy.neurips2020@gmail.com

The goal of this workshop is to inspire and engage a broad interdisciplinary audience, including computer scientists, economists, and social scientists, around topics at the exciting intersection of economics, public policy, and machine learning. We feel that machine learning offers enormous potential to transform our understanding of economics, economic decision making, and public policy, and yet its adoption by economists and social scientists remains nascent.

We want to use the workshop to expose some of the critical socio-economic issues that stand to benefit from applying machine learning, expose underexplored economic datasets and simulations, and identify machine learning research directions that would have significant positive socio-economic impact. In effect, we aim to accelerate the use of machine learning to rapidly develop, test, and deploy fair and equitable economic policies that are grounded in representative data.

For example, we would like to explore questions around whether machine learning can be used to help with the development of effective economic policy, to understand economic behavior through granular economic data sets, to automate economic transactions for individuals, and to build rich and faithful simulations of economic systems with strategic agents. We would like to develop economic policies and mechanisms that …

Awa Dieng · Jessica Schrouff · Matt Kusner · Golnoosh Farnadi · Fernando Diaz

Black-box machine learning models have gained widespread deployment in decision-making settings across many parts of society, from sentencing decisions to medical diagnostics to loan lending. However, many models have been found to be biased against certain demographic groups. Initial work on algorithmic fairness focused on formalizing statistical measures of fairness that could be used to train new classifiers. While these models were an important first step towards addressing fairness concerns, there were immediate challenges with them. Causality has recently emerged as a powerful tool to address these shortcomings. Causality can be seen as a model-first approach: starting with the language of structural causal models or potential outcomes, the idea is to frame, then solve, questions of algorithmic fairness in this language. Such causal definitions of fairness can have far-reaching impact, especially in high-risk domains. Interpretability, on the other hand, can be viewed as a user-first approach: can the ways in which algorithms work be made more transparent, making it easier for them to align with our societal values on fairness? In this way, interpretability can sometimes be more actionable than causality work.

Given these initial successes, this workshop aims to more deeply investigate how open questions in algorithmic fairness can …

Jonas Teuwen · Qi Dou · Ben Glocker · Ipek Oguz · Aasa Feragen · Hervé Lombaert · Ender Konukoglu · Marleen de Bruijne

'Medical Imaging meets NeurIPS' is a satellite workshop established in 2017. The workshop aims to bring researchers together from the medical image computing and machine learning communities. The objective is to discuss the major challenges in the field and opportunities for joining forces. This year the workshop will feature online oral and poster sessions with an emphasis on audience interactions. In addition, there will be a series of high-profile invited speakers from industry, academia, engineering and medical sciences giving an overview of recent advances, challenges, latest technology and efforts for sharing clinical data.

Medical imaging is facing a major crisis: an ever-increasing complexity and volume of data combined with immense economic pressure. The interpretation of medical images pushes human abilities to the limit, with the risk that critical patterns of disease go undetected. Machine learning has emerged as a key technology for developing novel tools in computer-aided diagnosis, therapy and intervention. Still, progress is slow compared to other fields of visual recognition, mainly due to the domain complexity and constraints in clinical applications, which require the most robust, accurate, and reliable solutions. The workshop aims to raise awareness of the unmet needs in machine learning for …

Marin Vlastelica · Jialin Song · Aaron Ferber · Brandon Amos · Georg Martius · Bistra Dilkina · Yisong Yue

We propose to organize a workshop on machine learning and combinatorial algorithms. The combination of methods from machine learning and classical AI is an emerging trend. Many researchers have argued that “future AI” methods somehow need to incorporate discrete structures and symbolic/algorithmic reasoning. Additionally, learning-augmented optimization algorithms can impact the broad range of difficult but impactful optimization settings. Coupled learning and combinatorial algorithms have the ability to impact real-world settings such as hardware & software architectural design, self-driving cars, ridesharing, organ matching, supply chain management, theorem proving, and program synthesis among many others. We aim to present diverse perspectives on the integration of machine learning and combinatorial algorithms.

This workshop aims to bring together academic and industrial researchers in order to describe recent advances and build lasting communication channels for the discussion of future research directions pertaining to the integration of machine learning and combinatorial algorithms. The workshop will connect researchers with various relevant backgrounds: those working on hybrid methods, those with particular expertise in combinatorial algorithms, those working on problems whose solution likely requires new approaches, as well as everyone interested in learning something about this emerging field of research. We aim to highlight open problems in bridging the gap …

Tejumade Afonja · Konstantin Klemmer · Niveditha Kalavakonda · Oluwafemi Azeez · Aya Salama · Paula Rodriguez Diaz

A few months ago, the world was shaken by the outbreak of the novel Coronavirus, exposing the lack of preparedness for such a case in many nations around the globe. As we watched the daily number of cases of the virus rise exponentially, and governments scramble to design appropriate policies, communities collectively asked “Could we have been better prepared for this?” Similar questions have been brought up by the climate emergency the world is now facing.
At a time of global reckoning, this year’s ML4D program will focus on building and improving resilience in developing regions through machine learning. Past iterations of the workshop have explored how machine learning can be used to tackle global development challenges, the potential benefits of such technologies, as well as the associated risks and shortcomings. This year we seek to ask our community to go beyond solely tackling existing problems by building machine learning tools with foresight, anticipating application challenges, and providing sustainable, resilient systems for long-term use.
This one-day workshop will bring together a diverse set of participants from across the globe. Attendees will learn about how machine learning tools can help enhance preparedness for disease outbreaks, address the climate crisis, and improve …

Raymond Chua · Feryal Behbahani · Julie J Lee · Sara Zannone · Rui Ponte Costa · Blake Richards · Ida Momennejad · Doina Precup

Reinforcement learning (RL) algorithms learn through rewards and a process of trial and error. This approach is strongly inspired by the study of animal behaviour and has led to outstanding achievements. However, artificial agents still struggle with a number of difficulties, such as learning in changing environments and over longer timescales, state abstraction, and generalizing and transferring knowledge. Biological agents, on the other hand, excel at these tasks. The first edition of our workshop last year brought together leading and emerging researchers from Neuroscience, Psychology and Machine Learning to share how neural and cognitive mechanisms can provide insights for RL research and how machine learning advances can further our understanding of brain and behaviour. This year, we want to build on the success of our previous workshop by expanding on the challenges that emerged and extending to novel perspectives. The problem of state and action representation and abstraction emerged quite strongly last year, so this year’s program aims to add new perspectives like hierarchical reinforcement learning, structure learning and their biological underpinnings. Additionally, we will address learning over long timescales, such as lifelong learning or continual learning, by including views from synaptic plasticity and developmental neuroscience. We are hoping to inspire and further …

Jessica Forde · Francisco Ruiz · Melanie Fernandez Pradier · Aaron Schein · Finale Doshi-Velez · Isabel Valera · David Blei · Hanna Wallach

We’ve all been there. A creative spark leads to a beautiful idea. We love the idea, we nurture it, and name it. The idea is elegant: all who hear it fawn over it. The idea is justified: all of the literature we have read supports it. But, lo and behold: once we sit down to implement the idea, it doesn’t work. We check our code for software bugs. We rederive our derivations. We try again and still, it doesn’t work. We Can’t Believe It’s Not Better [1].

In this workshop, we will encourage probabilistic machine learning researchers who Can’t Believe It’s Not Better to share their beautiful idea, tell us why it should work, and hypothesize why it does not in practice. We also welcome work that highlights pathologies or unexpected behaviors in well-established practices. This workshop will stress the quality and thoroughness of the scientific procedure, promoting transparency, deeper understanding, and more principled science.

Focusing on the probabilistic machine learning community will facilitate this endeavor, not only by gathering experts who speak the same language, but also by exploiting the modularity of the probabilistic framework. Probabilistic machine learning separates modeling assumptions, inference, and model checking into distinct phases [2]; this …

Alex Beatson · Priya Donti · Amira Abdel-Rahman · Stephan Hoyer · Rose Yu · J. Zico Kolter · Ryan Adams

For full details see: https://ml4eng.github.io/

For questions, issues, and on-the-day help, email: ml4eng2020@gmail.com

gather.town link for poster sessions and breaks: https://neurips.gather.town/app/D2n0HkRXoVlgUSWV/ML4Eng-NeurIPS20

Modern engineering workflows are built on computational tools for specifying models and designs, for numerical analysis of system behavior, and for optimization, model-fitting and rational design. How can machine learning be used to empower the engineer and accelerate this workflow? We wish to bring together machine learning researchers and engineering academics to address the problem of developing ML tools which benefit engineering modeling, simulation and design, through reduction of required computational or human effort, through permitting new rich design spaces, through enabling production of superior designs, or through enabling new modes of interaction and new workflows.

Luba Elliott · Sander Dieleman · Adam Roberts · Tom White · Daphne Ippolito · Holly Grimm · Mattie Tesfaldet · Samaneh Azadi

Generative machine learning and machine creativity have continued to grow and attract a wider audience to machine learning. Generative models enable new types of media creation across images, music, and text - including recent advances such as StyleGAN2, Jukebox and GPT-3. This one-day workshop broadly explores issues in the applications of machine learning to creativity and design. We will look at algorithms for generation and creation of new media, engaging researchers building the next generation of generative models (GANs, RL, etc). We investigate the social and cultural impact of these new models, engaging researchers from HCI/UX communities and those using machine learning to develop new creative tools. In addition to covering the technical advances, we also address the ethical concerns ranging from the use of biased datasets to replicating artistic work. Finally, we’ll hear from some of the artists and musicians who are adopting machine learning including deep learning and reinforcement learning as part of their own artistic process. We aim to balance the technical issues and challenges of applying the latest generative models to creativity and design with philosophical and cultural issues that surround this area of research.

Thore Graepel · Dario Amodei · Vincent Conitzer · Allan Dafoe · Gillian Hadfield · Eric Horvitz · Sarit Kraus · Kate Larson · Yoram Bachrach

https://www.CooperativeAI.com/

Problems of cooperation—in which agents seek ways to jointly improve their welfare—are ubiquitous and important. They can be found at all scales ranging from our daily routines—such as highway driving, communication via shared language, division of labor, and work collaborations—to our global challenges—such as disarmament, climate change, global commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate, in our social intelligence and skills. Since machines powered by artificial intelligence and machine learning are playing an ever greater role in our lives, it will be important to equip them with the skills necessary to cooperate and to foster cooperation.

We see an opportunity for the field of AI, and particularly machine learning, to explicitly focus effort on this class of problems which we term Cooperative AI. The goal of this research would be to study the many aspects of the problem of cooperation, and innovate in AI to contribute to solving these problems. Central questions include how to build machine agents with the capabilities needed for cooperation, and how advances in machine learning can help foster cooperation in populations of agents (of machines and/or humans), such as through improved mechanism design …

Carolyn Ashurst · Rosie Campbell · Deborah Raji · Solon Barocas · Stuart Russell

Following growing concerns with both harmful research impact and research conduct in computer science, including concerns with research published at NeurIPS, this year’s conference introduced two new mechanisms for ethical oversight: a requirement that authors include a “broader impact statement” in their paper submissions and additional evaluation criteria asking paper reviewers to identify any potential ethical issues with the submissions.

These efforts reflect a recognition that existing research norms have failed to address the impacts of AI research, and take place against the backdrop of a larger reckoning with the role of AI in perpetuating injustice. The changes have been met with both praise and criticism: some within and outside the community see them as a crucial first step towards integrating ethical reflection and review into the research process, fostering necessary changes to protect populations at risk of harm. Others worry that AI researchers are not well placed to recognize and reason about the potential impacts of their work, as effective ethical deliberation may require different expertise and the involvement of other stakeholders.

This debate reveals that even as the AI research community is beginning to grapple with the legitimacy of certain research questions and critically reflect on its research …

José Miguel Hernández-Lobato · Matt Kusner · Brooks Paige · Marwin Segler · Jennifer Wei

Discovering new molecules and materials is a central pillar of human well-being, providing new medicines, securing the world’s food supply via agrochemicals, or delivering new battery or solar panel materials to mitigate climate change. However, the discovery of new molecules for an application can often take up to a decade, with costs spiraling. Machine learning can help to accelerate the discovery process. The goal of this workshop is to bring together researchers interested in improving applications of machine learning for chemical and physical problems and industry experts with practical experience in pharmaceutical and agricultural development. In a highly interactive format, we will outline the current frontiers and present emerging research directions. We aim to use this workshop as an opportunity to establish a common language between all communities, to actively discuss new research problems, and also to collect datasets by which novel machine learning models can be benchmarked. The program is a collection of invited talks, alongside contributed posters. A panel discussion will provide different perspectives and experiences of influential researchers from both fields and also engage open participant conversation. An expected outcome of this workshop is the interdisciplinary exchange of ideas and initiation of collaboration.

Mateusz Malinowski · Grzegorz Swirszcz · Viorica Patraucean · Marco Gori · Yanping Huang · Sindy Löwe · Anna Choromanska

Is backpropagation the ultimate tool on the path to achieving synthetic intelligence as its success and widespread adoption would suggest?

Many have questioned the biological plausibility of backpropagation as a learning mechanism since its discovery; the weight transport and timing problems are the most frequently disputed. These properties of backpropagation training also have practical consequences. For instance, backpropagation training is a global and coupled procedure that limits the amount of possible parallelism and yields high latency.

These limitations have motivated us to discuss possible alternative directions. In this workshop, we want to promote such discussions by bringing together researchers from various but related disciplines, and to discuss possible solutions from engineering, machine learning and neuroscientific perspectives.

Rumi Chunara · Abraham Flaxman · Daniel Lizotte · Chirag Patel · Laura Rosella

Public health and population health refer to the study of daily life factors and prevention efforts, and their effects on the health of populations. We expect that work featured in this workshop will differ from Machine Learning in Healthcare, as it will focus on data and algorithms related to the non-medical conditions that shape our health, including structural, lifestyle, policy, social, behavioral and environmental factors. Indeed, much of the data traditionally used in machine learning and health problems is really about our interactions with the health care system, and this workshop aims to balance this with machine learning work using data on the non-medical conditions that shape our health. There are many machine learning opportunities specific to these data and to how they are used to assess and understand health and disease, which differ from healthcare-specific data and tasks (e.g., the data is often unstructured, must be captured across the life course, in different environments, etc.). This is pertinent for both infectious diseases such as COVID-19 and non-communicable diseases such as diabetes, stroke, etc. Indeed, this workshop topic is especially timely given the COVID outbreak, protests regarding racism, and associated interest in exploring the relevance of machine learning to these questions …

Prithviraj Ammanabrolu · Matthew Hausknecht · Xingdi Yuan · Marc-Alexandre Côté · Adam Trischler · Kory Mathewson · John Urbanek · Jason Weston · Mark Riedl

This workshop will focus on exploring the utility of interactive narratives to fill a role as the learning environments of choice for language-based tasks including but not limited to storytelling. A previous iteration of this workshop took place very successfully with over a hundred attendees, also at NeurIPS, in 2018 and since then the community of people working in this area has rapidly increased. This workshop aims to be a centralized place where all researchers involved across a breadth of fields can interact and learn from each other. Furthermore, it will act as a showcase to the wider NLP/RL/Game communities on interactive narrative's place as a learning environment. The program will feature a collection of invited talks in addition to contributed talks and posters from each of these sections of the interactive narrative community and the wider NLP and RL communities.

Michael Lutter · Alexander Terenin · Shirley Ho · Lei Wang

Over the last decade, deep networks have propelled machine learning to accomplish tasks previously considered far out of reach, such as human-level performance in image classification and game-playing. However, research has also shown that deep networks are often brittle to distributional shifts in data: human-imperceptible changes can lead to absurd predictions. In many application areas, including physics, robotics, social sciences and life sciences, this motivates the need for robustness and interpretability, so that deep networks can be trusted in practical applications. Interpretable and robust models can be constructed by incorporating prior knowledge within the model or learning process as an inductive bias, thereby regularizing the model, avoiding overfitting, and making the model easier to understand for scientists who are not machine learning experts. In the last few years, researchers from different fields have already proposed various combinations of domain knowledge and machine learning, and successfully applied these techniques to various applications.

Surya Karthik Mukkavilli · Johanna Hansen · Natasha Dudek · Tom Beucler · Kelly Kochanski · Mayur Mudigonda · Karthik Kashinath · Amy McGovern · Paul D Miller · Chad Frischmann · Pierre Gentine · Gregory Dudek · Aaron Courville · Daniel Kammen · Vipin Kumar

Our workshop proposal, AI for Earth Sciences, seeks to bring cutting-edge geoscientific and planetary challenges to the fore for the machine learning and deep learning communities. We seek machine learning interest from the major areas encompassed by the Earth sciences, including atmospheric physics, hydrologic sciences, cryosphere science, oceanography, geology, planetary sciences, space weather, volcanism, seismology, geo-health (i.e., water, land, and air pollution, environmental epidemics), the biosphere, and biogeosciences. We also seek interest in AI applied to energy, covering renewable energy meteorology, thermodynamics, and heat-transfer problems. We call for papers demonstrating novel machine learning techniques in remote sensing for meteorology and the geosciences, generative Earth system modeling, transfer learning from geophysics and numerical simulations, and uncertainty in Earth science learning representations. We also seek theoretical developments in interpretable machine learning for meteorological and geoscientific models, hybrid models combining Earth-science knowledge with machine learning, representation learning from graphs and manifolds in spatiotemporal models, and dimensionality reduction in the Earth sciences. In addition, we seek Earth science applications from vision, robotics, multi-agent systems, and reinforcement learning. New labelled benchmark datasets and generative visualizations of the Earth are also of particular interest. A new area of interest is integrated assessment models and human-centered AI …

Marie Ossenkopf · Angelos Filos · Abhinav Gupta · Michael Noukhovitch · Angeliki Lazaridou · Jakob Foerster · Kalesha Bullard · Rahma Chaabouni · Eugene Kharitonov · Roberto Dessì

Communication is one of the most impressive human abilities, but historically it has been studied in machine learning mainly on confined natural-language datasets. Thanks to deep RL, emergent communication can now be studied in complex multi-agent scenarios.

Three previous successful workshops (2017-2019) have gathered the community to discuss how, when, and to what end communication emerges, producing research later published at top ML venues (e.g., ICLR, ICML, AAAI). However, many approaches to studying emergent communication rely on extensive amounts of shared training time. Our question is: Can we do that faster?

Humans interact with strangers on a daily basis. They possess a basic shared protocol, but a large part of it is nevertheless defined by context. Humans are capable of adapting their shared protocol to ever-new situations, and a general AI would need this capability too.

We want to explore the possibilities for artificial agents to evolve ad hoc communication spontaneously by interacting with strangers. Since humans excel at this task, we want to start by having the participants of the workshop take the role of their agents and develop their own bots for an interactive game. This will illuminate the necessities of zero-shot communication learning in a practical …

Joseph Futoma · Walter Dempsey · Katherine Heller · Yian Ma · Nicholas Foti · Marianne Njifon · Kelly Zhang · Jieru Shi

Mobile health (mHealth) technologies have transformed the mode and quality of clinical research. Wearable sensors and mobile phones provide real-time data streams that support automated clinical decision making, allowing researchers and clinicians to provide ecological, in-the-moment support to individuals in need. Mobile health technologies are used across many health fields. Their inclusion in clinical care has aimed to improve HIV medication adherence, increase activity, supplement counseling and pharmacotherapy in treatment for substance use, reinforce abstinence in addictions, and support recovery from alcohol dependence. The development of mobile health technologies, however, has progressed at a faster pace than the science and methodology to evaluate their validity and efficacy.


Current mHealth technologies are limited in their ability to understand how adverse health behaviors develop, how to predict them, and how to encourage healthy behaviors. In order for mHealth to progress and expand its impact, the field needs to facilitate collaboration among machine learning researchers, statisticians, mobile sensing researchers, human-computer interaction researchers, and clinicians. Techniques from multiple fields can be brought to bear on the substantive problems facing this interdisciplinary field: experimental design, causal inference, multi-modal complex data analytics, representation learning, reinforcement learning, deep learning, transfer learning, data visualization, and clinical integration. …

Arturo Deza · Joshua Peterson · N Apurva Ratan Murty · Tom Griffiths

Twitter: https://twitter.com/svrhm2020

The goal of the 2nd Shared Visual Representations in Human and Machine Intelligence (SVRHM) workshop is to disseminate relevant, parallel findings in the fields of computational neuroscience, psychology, and cognitive science that may inform modern machine learning. In the past few years, machine learning methods---especially deep neural networks---have widely permeated the vision science, cognitive science, and neuroscience communities. As a result, scientific modeling in these fields has greatly benefited, producing a swath of potentially critical new insights into the human mind. Since human performance remains the gold standard for many tasks, these cross-disciplinary insights and analytical tools may point towards solutions to many of the current problems that machine learning researchers face (e.g., adversarial attacks, compression, continual learning, and self-supervised learning). Thus, we propose to invite leading cognitive scientists with strong computational backgrounds to disseminate their findings to the machine learning community, with the hope of closing the loop by nourishing new ideas and creating cross-disciplinary collaborations. In particular, this year's version of the workshop will have a heavy focus on the relative roles of larger datasets and stronger inductive biases as we work on tasks that go beyond object recognition.

Ritwik Gupta · Robin Murphy · Eric Heim · Zhangyang Wang · Bryce Goodman · Nirav Patel · Piotr Bilinski · Edoardo Nemni

Natural disasters are one of the oldest threats to both individuals and the societies they co-exist in. As a result, humanity has ceaselessly sought ways to provide assistance to people in need after disasters have struck. Further, natural disasters are but a single, extreme example of the many possible humanitarian crises: disease outbreaks, famine, and oppression of disadvantaged groups can pose even greater dangers to people and have less obvious solutions. In this proposed workshop, we seek to bring together the Artificial Intelligence (AI) and Humanitarian Assistance and Disaster Response (HADR) communities in order to bring AI to bear on real-world humanitarian crises. Through this workshop, we intend to establish meaningful dialogue between the communities.

By the end of the workshop, the NeurIPS research community should come to understand the practical challenges of aiding those who are experiencing crises, while the HADR community should come to understand the landscape of the state of the art and practice in AI. Through this, we seek to begin establishing a pipeline for transitioning the research created by the NeurIPS community to real-world humanitarian issues.

Niki Kilbertus · Angela Zhou · Ashia Wilson · John Miller · Lily Hu · Lydia T. Liu · Nathan Kallus · Shira Mitchell

Machine learning is rapidly becoming an integral component of sociotechnical systems. Predictions are increasingly used to grant beneficial resources or withhold opportunities, and the consequences of such decisions induce complex social dynamics by changing agent outcomes and prompting individuals to proactively respond to decision rules. This introduces challenges for standard machine learning methodology. Static measurements and training sets poorly capture the complexity of dynamic interactions between algorithms and humans. Strategic adaptation to decision rules can render statistical regularities obsolete. Correlations momentarily observed in data may not be robust enough to support interventions for long-term welfare. Recognizing the limits of traditional, static approaches to decision-making, researchers in fields ranging from public policy to computer science to economics have recently begun to view consequential decision-making through a dynamic lens. This workshop will confront the use of machine learning to make consequential decisions in dynamic environments. Work in this area sits at the nexus of several different fields, and the workshop will provide an opportunity to better understand and synthesize social and technical perspectives on these issues and to catalyze conversations between researchers and practitioners working across these diverse areas.

Hugo Jair Escalante · Katja Hofmann

Second session for the competition program at NeurIPS2020.

Machine learning competitions have grown in popularity and impact over the last decade, emerging as an effective means to advance the state of the art by posing well-structured, relevant, and challenging problems to the community at large. Motivated by a reward or merely the satisfaction of seeing their machine learning algorithm reach the top of a leaderboard, practitioners innovate, improve, and tune their approach before evaluating on a held-out dataset or environment. The competition track of NeurIPS has matured in 2020, its fourth year, with a considerable increase in both the number of challenges and the diversity of domains and topics. A total of 16 competitions are featured this year as part of the track, with 8 competitions associated with each of the two days. The list of competitions that are part of the program is available here:

https://neurips.cc/Conferences/2020/CompetitionTrack

Raphael Townshend · Stephan Eismann · Ron Dror · Ellen Zhong · Namrata Anand · John Ingraham · Wouter Boomsma · Sergey Ovchinnikov · Roshan Rao · Per Greisen · Rachel Kolodny · Bonnie Berger

Spurred on by recent advances in neural modeling and wet-lab methods, structural biology, the study of the three-dimensional (3D) atomic structure of proteins and other macromolecules, has emerged as an area of great promise for machine learning. The shape of macromolecules is intrinsically linked to their biological function (e.g., much like the shape of a bike is critical to its transportation purposes), and thus machine learning algorithms that can better predict and reason about these shapes promise to unlock new scientific discoveries in human health as well as increase our ability to design novel medicines.

Moreover, fundamental challenges in structural biology motivate the development of new learning systems that can more effectively capture physical inductive biases, respect natural symmetries, and generalize across atomic systems of varying sizes and granularities. Through the Machine Learning in Structural Biology workshop, we aim to include a diverse range of participants and spark a conversation on the required representations and learning algorithms for atomic systems, as well as dive deeply into how to integrate these with novel wet-lab capabilities.

Divyansh Kaushik · Bhargavi Paranjape · Forough Arabshahi · Yanai Elazar · Yixin Nie · Max Bartolo · Polina Kirichenko · Pontus Lars Erik Saito Stenetorp · Mohit Bansal · Zachary Lipton · Douwe Kiela

Human involvement in AI system design, development, and evaluation is critical to ensure that the insights being derived are practical, and the systems built are meaningful, reliable, and relatable to those who need them. Humans play an integral role in all stages of machine learning development, be it during data generation, interactively teaching machines, or interpreting, evaluating and debugging models. With growing interest in such “human in the loop” learning, we aim to highlight new and emerging research opportunities for the ML community that arise from the evolving needs to design evaluation and training strategies for humans and models in the loop. The specific focus of this workshop is on emerging and under-explored areas of human- and model-in-the-loop learning, such as employing humans to seek richer forms of feedback for data than labels alone, learning from dynamic adversarial data collection with humans employed to find weaknesses in models, learning from human teachers instructing computers through conversation and/or demonstration, investigating the role of humans in model interpretability, and assessing social impact of ML systems. This workshop aims to bring together interdisciplinary researchers from academia and industry to discuss major challenges, outline recent advances, and facilitate future research in these areas.

Xiaolin Andy Li · Dejing Dou · Ameet Talwalkar · Hongyu Li · Jianzong Wang · Yanzhi Wang

In the past decade, we have witnessed rapid progress in machine learning in general and deep learning in particular, mostly driven by tremendous amounts of data. As these intelligent algorithms, systems, and applications are deployed in real-world scenarios, we are now facing new challenges, such as scalability, security, privacy, trust, cost, regulation, and environmental and societal impacts. In the meantime, data privacy and ownership have become more and more critical in many domains, such as finance, health, government, and social networks. Federated learning (FL) has emerged to address data privacy issues. Making FL practically scalable, useful, and efficient, with effective security and privacy mechanisms and policies, calls for joint efforts from the community, academia, and industry. More challenges, interplays, and tradeoffs in scalability, privacy, and security need to be investigated in a more holistic and comprehensive manner by the community. We are expecting broader, deeper, and greater evolution of these concepts and technologies, and their confluence towards holistic trustworthy AI ecosystems.

This workshop provides an open forum for researchers, practitioners, and system builders to exchange ideas, discuss, and shape roadmaps towards scalable and privacy-preserving federated learning in particular, and scalable and trustworthy AI ecosystems in general.

Augustus Odena · Charles Sutton · Nadia Polikarpova · Josh Tenenbaum · Armando Solar-Lezama · Isil Dillig

There are many tasks that could be automated by writing computer programs, but most people don't know how to program computers; automatically writing programs from user specifications is the subject of program synthesis. Building tools for computer-assisted programming could thus improve the lives of many people (and it's also a cool research problem!). There has been substantial recent interest in the ML community in this problem, as evidenced by the increased volume of program synthesis submissions to ICML, ICLR, and NeurIPS.

Despite this recent work, a lot of exciting questions are still open, such as how to combine symbolic reasoning over programs with deep learning, how to represent programs and user specifications, and how to apply program synthesis within computer vision, robotics, and other control problems. There is also work to be done on fusing work done in the ML community with research on Programming Languages (PL) through collaboration between the ML and PL communities, and there remains the challenge of establishing benchmarks that allow for easy comparison and measurement of progress. The aim of the CAP workshop is to address these points. This workshop will …

Daniel Mankowitz · Gabriel Dulac-Arnold · Shie Mannor · Omer Gottesman · Anusha Nagabandi · Doina Precup · Timothy A Mann

Reinforcement Learning (RL) has had numerous successes in recent years in solving complex problem domains. However, this progress has been largely limited to domains where a simulator is available or the real environment is quick and easy to access. This is one of a number of challenges that are bottlenecks to deploying RL agents on real-world systems. Two recent papers identify nine important challenges that, if solved, will take a big step towards enabling RL agents to be deployed to real-world systems (Dulac-Arnold et al., 2019, 2020). The goals of this workshop are four-fold: (1) provide a forum for researchers from academia and industry, as well as industry practitioners from diverse backgrounds, to discuss the challenges faced in real-world systems; (2) discuss and prioritize the nine research challenges, including determining which challenges we should focus on next and whether any new challenges should be added to the list or existing ones removed from it; (3) discuss problem formulations for the various challenges and critique these formulations or develop new ones, which is especially important for more abstract challenges such as explainability. We should also be asking ourselves whether the current Markov Decision Process (MDP) formulation is sufficient for solving these …

Pengtao Xie · Shanghang Zhang · Pulkit Agrawal · Ishan Misra · Cynthia Rudin · Abdelrahman Mohamed · Wenzhen Yuan · Barret Zoph · Laurens van der Maaten · Xingyi Yang · Eric Xing

Self-supervised learning (SSL) is an unsupervised approach for representation learning without relying on human-provided labels. It creates auxiliary tasks on unlabeled input data and learns representations by solving these tasks. SSL has demonstrated great success on images (e.g., MoCo, PIRL, SimCLR) and texts (e.g., BERT) and has shown promising results in other data modalities, including graphs, time-series, audio, etc. On a wide variety of tasks, SSL without human-provided labels achieves performance close to that of fully supervised approaches.
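As a concrete illustration of the contrastive family of SSL objectives mentioned above (e.g., SimCLR), the sketch below computes a simplified InfoNCE-style loss in NumPy. The function name, toy dimensions, and details are illustrative, not the exact implementation of any particular paper.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Simplified InfoNCE / NT-Xent-style contrastive loss (illustrative).

    z1, z2: (n, d) embeddings of two augmented views of the same n
    inputs; row i of z1 and row i of z2 form a positive pair, while
    all other rows serve as negatives.
    """
    # L2-normalize so that dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature                   # (n, n) similarities
    # Cross-entropy where the "correct class" for row i is column i
    logits = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Minimizing this loss pulls the two views of each input together while pushing apart views of different inputs, which is one way the "auxiliary task" of instance discrimination shapes the learned representation.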

The existing SSL research mostly focuses on improving empirical performance without a theoretical foundation. While the proposed SSL approaches are empirically effective, it is not clear theoretically why they perform well. For example, why do certain auxiliary tasks in SSL perform better than others? How many unlabeled data examples does SSL need to learn a good representation? How is the performance of SSL affected by neural architectures?

In this workshop, we aim to bridge this gap between theory and practice. We bring together SSL-interested researchers from various domains to discuss the theoretical foundations of empirically well-performing SSL approaches and how the theoretical insights can further improve SSL’s empirical performance. Different from previous SSL-related workshops which focus on empirical effectiveness of SSL …

Anna Goldie · Azalia Mirhoseini · Jonathan Raiman · Martin Maas · Xinlei XU

NeurIPS 2020 Workshop on Machine Learning for Systems

Website: http://mlforsystems.org/

Submission Link: https://cmt3.research.microsoft.com/MLFS2020/Submission/Index

Important Dates:

Submission Deadline: October 9th, 2020 (AoE)
Acceptance Notifications: October 23rd, 2020
Camera-Ready Submission: November 29th, 2020
Workshop: December 12th, 2020

Call for Papers:

Machine Learning for Systems is an interdisciplinary workshop that brings together researchers in computer systems and machine learning. This workshop is meant to serve as a platform to promote discussions between researchers in these target areas.

We invite submission of up to 4-page extended abstracts in the broad area of using machine learning in the design of computer systems. We are especially interested in submissions that move beyond using machine learning to replace numerical heuristics. This year, we hope to see novel system designs, streamlined cross-platform optimization, and new benchmarks for ML for Systems.

Accepted papers will be made available on the workshop website, but there will be no formal proceedings. Authors may therefore publish their work in other journals or conferences. The workshop will include invited talks from industry and academia as well as oral and poster presentations by workshop participants.

Areas of interest:

* Supervised, unsupervised, and reinforcement learning research with applications to:
- Systems Software
- Runtime Systems
- …

Aviral Kumar · Rishabh Agarwal · George Tucker · Lihong Li · Doina Precup

The common paradigm in reinforcement learning (RL) assumes that an agent frequently interacts with the environment and learns using its own collected experience. This mode of operation is prohibitive for many complex real-world problems, where repeatedly collecting diverse data is expensive (e.g., robotics or educational agents) and/or dangerous (e.g., healthcare). Offline RL instead focuses on training agents with logged data, with no further environment interaction. Offline RL promises to bring forward a data-driven RL paradigm and carries the potential to scale up end-to-end learning approaches to real-world decision-making tasks such as robotics, recommendation systems, dialogue generation, autonomous driving, healthcare systems, and safety-critical applications. Recently, successful deep RL algorithms have been adapted to the offline setting and have demonstrated a potential for success in a number of domains; however, significant algorithmic and practical challenges remain to be addressed. The goals of this workshop are to bring attention to offline RL from both within and outside the RL community; to discuss algorithmic challenges that need to be addressed; to discuss potential real-world applications as well as limitations and challenges; and to come up with concrete problem statements and evaluation protocols, inspired by real-world applications, for the research community to work on. …
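To make the offline setting concrete, here is a minimal, self-contained sketch (a toy illustration, not any method from the workshop): tabular Q-learning run purely on a fixed log of transitions from a hypothetical two-state MDP, with no further environment interaction. All states, actions, rewards, and hyperparameters are illustrative.

```python
import numpy as np

# A fixed log of (state, action, reward, next_state) transitions from a
# toy two-state, two-action MDP; replaying this batch stands in for a
# logged dataset collected by some behavior policy.
logged = [
    (0, 0, 0.0, 1), (0, 1, 0.0, 0),
    (1, 0, 1.0, 1), (1, 1, 0.0, 0),
] * 50

gamma, alpha = 0.9, 0.1   # discount factor and learning rate
Q = np.zeros((2, 2))      # Q[state, action]

for _ in range(200):      # sweep the logged data repeatedly, fully offline
    for s, a, r, s_next in logged:
        target = r + gamma * Q[s_next].max()
        Q[s, a] += alpha * (target - Q[s, a])

# The greedy policy learned from the log prefers the rewarding action
# (action 0 in state 1), without ever touching the environment.
greedy = Q.argmax(axis=1)
```

Because this log happens to cover every state-action pair, naive batch Q-learning suffices here; the algorithmic challenges the workshop targets arise precisely when the logged data does not, and the agent must avoid overestimating actions it has rarely seen.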

Pratik Chaudhari · Alexander Alemi · Varun Jog · Dhagash Mehta · Frank Nielsen · Stefano Soatto · Greg Ver Steeg

Attempts at understanding deep learning have come from different disciplines, namely physics, statistics, information theory, and machine learning. These lines of investigation have very different modeling assumptions and techniques, and it is unclear how their results may be reconciled. This workshop builds upon the observation that Information Geometry has strong overlaps with these directions and may serve as a means to develop a holistic understanding of deep learning. The workshop program is designed to answer two specific questions. The first question is: how do the geometry of the hypothesis class and information-theoretic properties of optimization inform generalization? Good datasets have been a key driver of the empirical success of deep networks; our theoretical understanding of data is, however, poor. The second question the workshop will focus on is: how can we model data, and how can we use this understanding of data to improve optimization and generalization in the low-data regime?

Gather.Town link: https://neurips.gather.town/app/vPYEDmTHeUbkACgf/dl-info-neurips2020