(Track2) Deep Conversational AI Q&A
NeurIPS 2020 Social ML in Korea
We invite everyone who is part of, or interested in, the ML research scene in Korea. Participants can introduce their own ML research, especially if it is part of NeurIPS 2020. They can also introduce NeurIPS 2020 papers they find interesting and discuss them with other participants. Other possible discussion topics include (but are not limited to): Korean NLP, computer vision and datasets, ML research for COVID-19, ML for the post-COVID-19 era, and career options in academia/industry in Korea. We welcome everyone from anywhere in the world, as long as you can stay awake if our event falls in the middle of the night for you. Note that we held this same Social at ICLR 2020 with active participation.
(Track3) Designing Learning Dynamics Q&A
In recent years, machine learning research has been dominated by optimisation-based learning methods (take gradient descent, for example, which is ubiquitous in deep learning). However, while tools that operate under this paradigm have proven very powerful, they are often not well suited to complex challenges such as highly non-stationary targets or explicit multi-agent systems. In an attempt to overcome such limitations, some researchers are instead turning towards open-ended methods and considering how to design the underlying learning dynamics. This tutorial discusses how different tools can be applied to construct and combine adaptive objectives for populations of learners. We begin by providing background on the problem setting, basic tools, and philosophy. In the second part, we dive into the basics of evolutionary computation. In particular, we frame the development of evolutionary methods as a shift in focus away from gradient-free optimisers towards more generic and powerful tools for designing learning dynamics. Finally, we provide a more detailed overview of techniques and research around training and evaluating populations of agents.
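As a toy illustration of the evolutionary-computation flavour of methods the tutorial surveys, here is a minimal rank-based evolution strategy. It is a sketch only: the function names, hyperparameters, and toy objective are invented for this example and are not taken from the tutorial.

```python
import numpy as np

def simple_es(objective, dim, pop_size=50, sigma=0.1, lr=0.05, iters=200):
    """Minimal rank-based evolution strategy: sample a population of
    perturbations around the current mean and move the mean toward the
    better-scoring candidates."""
    rng = np.random.default_rng(0)
    mean = np.zeros(dim)
    for _ in range(iters):
        noise = rng.standard_normal((pop_size, dim))
        candidates = mean + sigma * noise
        scores = np.array([objective(c) for c in candidates])
        # Rank-normalise scores so the update is invariant to reward scale.
        ranks = scores.argsort().argsort() / (pop_size - 1) - 0.5
        mean = mean + lr / (pop_size * sigma) * noise.T @ ranks
    return mean

# Maximise a toy objective: negative squared distance from a target point.
target = np.array([1.0, -2.0, 0.5])
best = simple_es(lambda x: -np.sum((x - target) ** 2), dim=3)
```

Note that no gradient of the objective is ever computed: the population's rankings alone drive the update, which is what makes such methods applicable when gradients are unavailable or uninformative.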
(Track3) Deep Implicit Layers: Neural ODEs, Equilibrium Models, and Differentiable Optimization Q&A
Virtually all deep learning is built upon the notion of explicit computation: layers of a network are written in terms of the explicit step-by-step computations used to map inputs to outputs. But a rising trend in deep learning takes a different approach: implicit layers, where one instead specifies the conditions that a layer’s output must satisfy. Such architectures date back to early work on recurrent networks but have recently gained a great deal of attention as the approach behind Neural ODEs, Deep Equilibrium Models (DEQs), FFJORD, optimization layers, SVAEs, implicit meta-learning, and many other approaches. These methods can have substantial conceptual, computational, and modeling benefits: they often make it much easier to specify simple-yet-powerful architectures, can vastly reduce the memory consumption of deep networks, and allow more natural modeling of, e.g., continuous-time phenomena.
This tutorial will provide a unified perspective on implicit layers, illustrating how the implicit modeling framework encompasses all the models discussed above, and providing a practical view of how to integrate such approaches into modern deep learning systems. We will cover the history and motivation of implicit layers, discuss how to solve the resulting "forward" inference problem, and then highlight how to compute gradients through such layers in the backward pass, via implicit differentiation. Throughout, we will highlight several applications of these methods in Neural ODEs, DEQs, and other settings. The tutorial will be accompanied by an interactive monograph on implicit layers: a set of interactive Colab notebooks with code in both the JAX and PyTorch libraries.
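As a concrete sketch of the core idea (my own minimal NumPy example, not code from the tutorial's notebooks), the snippet below defines a tiny DEQ-style layer whose output is the fixed point z* = tanh(W z* + x), and computes the Jacobian dz*/dx by implicit differentiation at the solution rather than by backpropagating through the forward iterations:

```python
import numpy as np

def deq_forward(W, x, tol=1e-10, max_iter=500):
    """Forward pass: solve the fixed point z* = tanh(W z* + x) by iteration."""
    z = np.zeros_like(x)
    for _ in range(max_iter):
        z_new = np.tanh(W @ z + x)
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z

def deq_jacobian(W, z_star):
    """Implicit differentiation: dz*/dx = (I - D W)^{-1} D, where
    D = diag(1 - z*^2) is the tanh derivative at the fixed point.
    Only the solution z* is needed, not the iteration history."""
    D = np.diag(1.0 - z_star ** 2)
    n = len(z_star)
    return np.linalg.solve(np.eye(n) - D @ W, D)

rng = np.random.default_rng(0)
n = 4
# Small weights keep the map a contraction, so the iteration converges.
W = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)
x = rng.standard_normal(n)
z_star = deq_forward(W, x)
J = deq_jacobian(W, z_star)
```

Because the backward pass depends only on the fixed point itself, memory cost is independent of how many solver iterations the forward pass took — one of the benefits mentioned above.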
The Town Hall meeting is open to all registered attendees of NeurIPS’20 and is an opportunity to connect, ask questions and provide feedback to the NeurIPS organizers and board. We will hold two Town Hall meetings to try to accommodate as much of the global community as we can. We encourage you to submit questions in advance via townhall@neurips.cc. We will also field questions and take feedback during the meetings.
Causal Learning
Causal reasoning is important in many areas, including the sciences, decision making and public policy. The gold standard method for determining causal relationships uses randomized controlled perturbation experiments. In many settings, however, such experiments are expensive, time consuming or impossible. Hence, it is worthwhile to obtain causal information from observational data, that is, from data obtained by observing the system of interest without subjecting it to interventions. In this talk, I will discuss approaches for causal learning from observational data, paying particular attention to the combination of causal structure learning and variable selection, with the aim of estimating causal effects. Throughout, examples will be used to illustrate the concepts.
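To make the contrast between naive association and a causal effect concrete, here is a small simulated example (my own illustration, not material from the talk): a confounder Z drives both X and Y, so regressing Y on X alone is biased, while adjusting for Z — covariate adjustment in the spirit of the back-door criterion — recovers the true effect of X on Y.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# Confounded system: Z -> X, Z -> Y, and the causal effect X -> Y is 1.0.
Z = rng.standard_normal(n)
X = 0.8 * Z + rng.standard_normal(n)
Y = 1.0 * X + 1.5 * Z + rng.standard_normal(n)

# Naive regression of Y on X alone absorbs the confounding path through Z.
naive = np.cov(X, Y)[0, 1] / np.var(X)

# Adjusting for the confounder recovers the causal coefficient of X.
A = np.column_stack([X, Z])
adjusted = np.linalg.lstsq(A, Y, rcond=None)[0][0]
```

In practice, of course, the adjustment set is not handed to us — which is exactly why combining causal structure learning with variable selection, as discussed in the talk, matters.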
Orals & Spotlights Track 27: Unsupervised/Probabilistic
Orals & Spotlights Track 26: Graph/Relational/Theory
Orals & Spotlights Track 25: Probabilistic Models/Statistics
Orals & Spotlights Track 29: Neuroscience
Orals & Spotlights Track 30: Optimization/Theory
Orals & Spotlights Track 31: Reinforcement Learning
Orals & Spotlights Track 28: Deep Learning
Indigenous in AI
This is Indigenous in AI's inaugural event!
Indigenous In AI’s vision is to build an international community of Native, Aboriginal, and First Nations people who will collectively transform their home communities with advanced technology. By elevating the voices of Indigenous ML researchers, we will inspire future impactful work and break stereotypes. Additionally, this group will strive to educate the broader NeurIPS community on contemporary Indigenous issues relevant to information technology and practices.
(Track1) Federated Learning and Analytics: Industry Meets Academia Q&A
Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. Similarly, federated analytics (FA) allows data scientists to generate analytical insight from the combined information in distributed datasets without requiring data centralization. Federated approaches embody the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches.
Motivated by the explosive growth in federated learning and analytics research, this tutorial will provide a gentle introduction to the area. The focus will be on cross-device federated learning, including deep dives on federated optimization and differential privacy, but federated analytics and cross-silo federated learning will also be discussed. In addition to optimization and privacy, we will also introduce personalization, robustness, fairness, and systems challenges in the federated setting, with an emphasis on open problems.
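As a minimal sketch of the basic cross-device training loop (not the tutorial's reference implementation — the model, client data, and hyperparameters below are invented for illustration), federated averaging looks roughly like this: each round, every participating client trains locally from the current global model, and the server averages the results weighted by client dataset size, so raw data never leaves the clients.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One client's local update: a few epochs of gradient descent
    on a least-squares loss, starting from the current global model."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(clients, w0, rounds=50):
    """Federated averaging: average locally trained models,
    weighted by each client's dataset size."""
    w = w0
    for _ in range(rounds):
        sizes = np.array([len(y) for _, y in clients])
        updates = np.stack([local_sgd(w, X, y) for X, y in clients])
        w = (sizes[:, None] * updates).sum(axis=0) / sizes.sum()
    return w

# Simulated clients, each holding a private shard of linear-regression data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
clients = []
for _ in range(5):
    X = rng.standard_normal((40, 3))
    clients.append((X, X @ true_w + 0.01 * rng.standard_normal(40)))
w = fed_avg(clients, np.zeros(3))
```

Real deployments add the pieces the tutorial covers on top of this skeleton: client sampling, secure aggregation, differential privacy, and compression for communication efficiency.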
Website: (https://sites.google.com/view/fl-tutorial/home)
We sit down virtually to discuss challenges and opportunities that arise from having to start a career in ML virtually, with tips and tricks for how to approach the application as well as the virtual onboarding into a new team.
Together, we'll discuss best practices for hiring and starting a career, with special attention to underrepresented groups in the AI field and the challenges and opportunities of remote work.
Women in AI Ignite
Join us for 5-minute Ignite talks by women in AI and brainstorm on actionable next steps to take to our local communities! Everyone is welcome; our speakers are women.
Space ML
We will host a series of breakouts on the following emerging areas: physics-constrained models, reverse image search engines / knowledge discovery, self-supervised learning, on-board / edge computing, digital twins, and open-source science.
Following these discussions, we will organise an interactive social experience for attendees to meet each other and foster co-opetition (cooperative competition) while completing fun space-related activities.
(Track3) Policy Optimization in Reinforcement Learning Q&A
This tutorial will cover policy gradient methods in reinforcement learning, with a focus on understanding foundational ideas from an optimization perspective. We will discuss the policy objective in terms of two properties critical to the convergence rates of stochastic gradient approaches: variance and curvature. We will explain how the policy objective can be a particularly difficult optimization problem, as it can have large flat regions, and stochastic samples of the gradient can have very high variance. We will first explain how to use standard tools from optimization to reduce the variance of the gradient estimate, as well as techniques to mitigate curvature issues. We will then discuss optimization improvements that leverage more knowledge about the objective, including the Markov property and how to modify the state distribution for better coverage. We will discuss how standard actor-critic methods with (off-policy) data re-use provide RL-specific variance-reduction approaches. We will conclude with an overview of what is known theoretically about the policy objective, discussing the role of entropy regularization and exploration in mitigating curvature issues.
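To make the variance point concrete, here is a toy score-function (REINFORCE) gradient estimator for a one-dimensional Gaussian policy, with a simple batch-mean baseline subtracted. The problem, policy, and hyperparameters are invented for illustration and are not part of the tutorial:

```python
import numpy as np

def reinforce_gradient(theta, rewards, actions, baseline=0.0):
    """Score-function (REINFORCE) gradient estimate for a Gaussian policy
    a ~ N(theta, 1), for which grad log pi(a) = (a - theta). Subtracting a
    baseline leaves the estimate unbiased but can greatly reduce variance."""
    return np.mean((rewards - baseline) * (actions - theta))

rng = np.random.default_rng(0)
theta, target = 0.0, 3.0
for _ in range(2000):
    actions = theta + rng.standard_normal(16)   # batch of sampled actions
    rewards = -(actions - target) ** 2          # reward peaks at the target
    b = rewards.mean()                          # simple value baseline
    theta += 0.05 * reinforce_gradient(theta, rewards, actions, baseline=b)
```

Running the same loop with `baseline=0.0` uses the same unbiased estimator but with much larger fluctuations in the updates — the basic phenomenon the variance-reduction techniques in the tutorial address in far more sophisticated ways.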
Timetable:
Nicolas: 40-minute presentation + 10-minute Q&A
Martha: 40-minute presentation + 10-minute Q&A
Sham: 40-minute presentation + 10-minute Q&A
Bios and timetable on the website: (https://sites.google.com/ualberta.ca/rlandoptimization-neurips2020/home)
The Genomic Bottleneck: A Lesson from Biology
Many animals are born with impressive innate capabilities. At birth, a spider can build a web, a colt can stand, and a whale can swim. From an evolutionary perspective, it is easy to see how innate abilities could be selected for: individuals that survive beyond their most vulnerable early hours, days, or weeks are more likely to reach reproductive age, and to reach it sooner. I argue that most animal behavior is not the result of clever learning algorithms but is encoded in the genome. Specifically, animals are born with highly structured brain connectivity, which enables them to learn very rapidly. Because the wiring diagram is far too complex to be specified explicitly in the genome, it must be compressed through a “genomic bottleneck,” which serves as a regularizer. The genomic bottleneck suggests a path toward architectures capable of rapid learning.