(Track2) Equivariant Networks Q&A
(Track1) There and Back Again: A Tale of Slopes and Expectations Q&A
Feedback Control Perspectives on Learning
The impact of feedback control is extensive. It is deployed in a wide array of engineering domains, including aerospace, robotics, automotive, communications, manufacturing, and energy applications, with super-human performance having been achieved for decades. Many settings in learning involve feedback interconnections, e.g., reinforcement learning has an agent in feedback with its environment, and multi-agent learning has agents in feedback with each other. By explicitly recognizing the presence of a feedback interconnection, one can exploit feedback control perspectives for the analysis and synthesis of such systems, as well as investigate trade-offs and fundamental limitations on achievable performance inherent in all feedback control systems. This talk highlights selected feedback control concepts—in particular robustness, passivity, tracking, and stabilization—as they relate to specific questions in evolutionary game theory, no-regret learning, and multi-agent learning.
Orals & Spotlights Track 07: Vision Applications
Orals & Spotlights Track 11: Learning Theory
Orals & Spotlights Track 06: Dynamical Sys/Density/Sparsity
Orals & Spotlights Track 05: Clustering/Ranking
Orals & Spotlights Track 10: Social/Privacy
Orals & Spotlights Track 09: Reinforcement Learning
Orals & Spotlights Track 08: Deep Learning
Queer in AI Workshop @ NeurIPS 2020
Queer in AI's third NeurIPS workshop comes amidst a global pandemic, uprisings against police brutality and economic injustice in countries around the world, and the specter of increasingly rapid climate breakdown. These challenges militate against business as usual in AI/ML. Our workshop asks its participants to think and work hard to bring a radical spirit to their technical and social work, and to fight for the rights and freedoms of queer people, and all people, around the world.
To access the NeurIPS events check-in survey, go to https://forms.gle/LNnhCHTXN49hBB8W7
Queer In AI’s mission is to make the AI/ML community one that welcomes, supports, and values queer scientists from around the world. We accomplish this aim by building a visible community of queer and ally AI/ML scientists through meetups, poster sessions, mentoring, and other initiatives. We also recognize the growing impact AI/ML has on societies around the globe, and the potential for these powerful learning and classification technologies to out and target queer people, in addition to other issues. A central part of Queer in AI's mission is raising awareness of these societal challenges in the general AI/ML community and encouraging and highlighting research on and solutions to these problems.
Muslims in ML
Muslims In ML (MusIML) is an affinity workshop for the NeurIPS community.
We focus on both the potential for advancement and the potential for harm to Muslims and to those in Muslim-majority countries who religiously identify as, culturally associate with, or are classified by proximity as, “Muslim”.
The workshop will run on Tuesday, December 8, 2020 from 10:30AM - 1:30PM EST.
We will feature a combination of pre-recorded and live talks, followed by a panel discussion with authors on the intersection of policy, technology, and Muslim communities.
A Decemberfest on Trustworthy AI Research - An overview and panel discussion with virtual drinks and Bretzels
What do we mean by trust in AI? Why does it matter? What influence can technology have in building trust? These and further questions will be addressed during this two-part gathering for interested attendees. The event will start with a block of elevator pitches on the topic of Trustworthy AI, given by researchers of the German Network of National Centres of Excellence for AI Research in collaboration with international partners. This will lead into the second part, a panel and audience discussion focused on central questions regarding Trustworthy AI. In addition to this semi-formal program, a social gathering space with topical corners as well as general hang-out areas will offer the opportunity for an informal get-together.
COVID-19 Symposium Day 1
The COVID-19 global pandemic has disrupted nearly all aspects of modern life. This year NeurIPS will host a symposium on COVID-19 to frame challenges and opportunities for the machine learning community and to foster a frank discussion on the role of machine learning. A central focus of this symposium will be clearly outlining key areas where machine learning is and is not likely to make a substantive impact. The one-day event will feature talks from leading epidemiologists, biotech leaders, policy makers, and global health experts. Attendees of this symposium will gain a deeper understanding of the current state of the COVID-19 pandemic, the challenges and limitations of current machine learning capabilities, how machine learning is accelerating COVID-19 vaccine development, and possible ways machine learning may aid in the present pandemic and in future ones.
(Track3) Offline Reinforcement Learning: From Algorithm Design to Practical Applications Q&A
Data Privacy: Academia, Industry, Policy, and Society
The past decade has witnessed the widespread adoption of machine learning and statistical methods on large-scale datasets, many of which correspond to personal data of individuals. While this has enabled unprecedented insights into human behaviour, at the same time, it raises new moral and ethical concerns about what might be revealed as a byproduct of these analyses. What private information will this allow us to infer about individuals, and is this worth the price of admission? Are there strategies which we can adopt to avoid these disclosures, and can they be executed without significant loss in utility? At which point should lawmakers step in, and how do we connect technical notions of privacy with those which are enforced by law? Beyond individual privacy, should there be regulations on things that "shouldn't be learned"? The goal of this social is to bring together a broad range of experts and non-experts interested in all aspects of data privacy, discuss associated issues and challenges, and propose and debate potential solutions. We will lead participants through an exploration of these concepts, featuring fun and interactive activities, as well as a guided discussion on these topics.
(Track1) Sketching and Streaming Algorithms Q&A
Equity and Ethics in AI from the Perspective of Black Women in AI/STEM
AI is all around us, but it is not created by all of us, and this disparity contributes to bias in AI. AI tends to be created by individuals who are not representative of many groups in society. Women of Color, especially Black women, are underrepresented in AI-related fields of study and careers. This underrepresentation is also crucial to the ongoing discussions about how to address bias in AI, and equity and ethics in AI are closely connected to those efforts. One recommendation for addressing this bias is to actively provide opportunities that increase the representation of underrepresented groups. This social will identify methods and techniques that Black women who study AI and/or have pursued careers as AI practitioners and developers have explored or suggested for advancing equity and ethics in AI. Black women who study, have studied, or work in roles where AI is developed will lead discussions on equity and ethics from their perspectives. Discussion starters for this session include: How many Black women do audience members know in an AI/STEM program of study or career? Have you spoken to any Black woman about her journey to an AI or STEM program of study or career? These questions will frame the other topics discussed during the session. Team members connected to this social will take notes during the discussions and distribute them to participants who provide their names and email addresses in a Google form. After the session, follow-up meetings will be encouraged to continue some of the suggested next steps from the discussions.
Open collaboration in ML research
How can we improve the AI research field to be more open, inviting, inclusive, and fair? What can we achieve through open collaboration?
(Track2) Beyond Accuracy: Grounding Evaluation Metrics for Human-Machine Learning Systems Q&A
Lapsed Physicists Wine-and-Cheese
"Lapsed" (a.k.a. former) physicists are plentiful in the machine learning community. Inspired by wine-and-cheese seminars at many institutions, this BYOWC (Bring Your Own Wine and Cheese) event is an informal opportunity to connect with members of the community. Hear how others made the transition between fields. Discuss how your physics training prepared you to switch fields, or what synergies between physics and machine learning excite you the most. Share your favorite physics jokes your computer science colleagues don't get, and just meet other cool people. Open to everyone, not only physicists; you'll just have to tolerate our humor. Wine and cheese encouraged, but not required.
We will present cryptography-inspired models and results to address three challenges that emerge when worst-case adversaries enter the machine learning landscape. These challenges include verification of machine learning models given limited access to good data, training at scale on private training data, and robustness against adversarial examples controlled by worst-case adversaries.