Workshop
Retrospectives: A Venue for Self-Reflection in ML Research
Ryan Lowe · Yoshua Bengio · Joelle Pineau · Michela Paganini · Jessica Forde · Shagun Sodhani · Abhishek Gupta · Joel Lehman · Peter Henderson · Kanika Madan · Koustuv Sinha · Xavier Bouthillier

Fri Dec 13 08:00 AM -- 06:00 PM (PST) @ West 114 + 115
Event URL: https://ml-retrospectives.github.io/neurips2019/

The NeurIPS Workshop on Retrospectives in Machine Learning will kick-start the exploration of a new kind of scientific publication called a retrospective. The purpose of a retrospective is to answer the question:

“What should readers of this paper know now, that is not in the original publication?”

Retrospectives provide a venue for authors to reflect on their previous publications, to talk about how their intuitions have changed, to identify shortcomings in their analysis or results, and to discuss resulting extensions that may not be sufficient for a full follow-up paper. A retrospective is written about a single paper, by that paper's author, and takes the form of an informal paper. The overarching goal of retrospectives is to improve the science, openness, and accessibility of the machine learning field, by widening what is publishable and helping to identify opportunities for improvement. Retrospectives will also give researchers and practitioners who are unable to attend top conferences access to the author's updated understanding of their work, which would otherwise only be accessible to their immediate circle.

Fri 9:00 a.m. - 9:10 a.m. [iCal]
Opening Remarks (Opening remarks)
Fri 9:10 a.m. - 9:30 a.m. [iCal]
Invited talk: Leon Bottou (Talk)
Fri 9:30 a.m. - 9:50 a.m. [iCal]
Invited talk: Melanie Mitchell (Talk)

In our 1995 paper “The Copycat Project: A Model of Mental Fluidity and Analogy-Making”, Douglas Hofstadter and I described Copycat, a computer program that makes analogies in an idealized domain of letter strings. The goal of the project was to model the general-purpose ability of humans to fluidly perceive abstract similarities between situations. Copycat's active symbol architecture, inspired by human perception, was a unique combination of symbolic and subsymbolic components. Now, 25 years later, AI is refocusing on abstraction and analogy as core aspects of robust intelligence, and the ideas underlying Copycat have new relevance. In this talk I will reflect on these ideas, on the limitations of Copycat and its idealized domain, and on possible novel contributions of this decades-old work to current open problems in AI.

Fri 9:50 a.m. - 10:10 a.m. [iCal]

Supervised learning algorithms are increasingly operationalized in real-world decision-making systems. Unfortunately, the nature and desiderata of real-world tasks rarely fit neatly into the supervised learning contract. Real data deviates from the training distribution, training targets are often weak surrogates for real-world desiderata, error is seldom the right utility function, and while the framework ignores interventions, predictions typically drive decisions. While the deep questions concerning the ethics of AI necessarily address the processes that generate our data and the impacts that automated decisions will have, neither ML tools nor proposed ML-based solutions tackle these problems head on. This talk explores the consequences and limitations of employing ML-based technology in the real world, the limitations of recent solutions (so-called fair and interpretable algorithms) for mitigating societal harms, and contemplates the meta-question: when should (today's) ML systems be off the table altogether?

Fri 10:10 a.m. - 10:25 a.m. [iCal]
Coffee break + poster set-up (Break)
Fri 10:25 a.m. - 10:35 a.m. [iCal]
Contributed talk: Juergen Schmidhuber, "Unsupervised minimax" (Talk)
Fri 10:35 a.m. - 10:45 a.m. [iCal]
Contributed talk: Prabhu Pradhan, "Smarter prototyping for neural learning" (Talk)
Fri 10:45 a.m. - 10:55 a.m. [iCal]
Contributed talk: Andre Pacheco, "Recent advances in deep learning applied for skin cancer detection" (Talk)
Fri 10:55 a.m. - 11:15 a.m. [iCal]
Invited talk: Veronika Cheplygina, "How I Fail in Writing Papers" (Talk)
Fri 11:15 a.m. - 12:15 p.m. [iCal]

Some of the questions to be discussed: (1) How can we encourage researchers to share their real thoughts and feelings about their work? (2) How can we improve the dissemination of 'soft knowledge' in the field?

Fri 12:15 p.m. - 1:45 p.m. [iCal]
Lunch break (Break)
Fri 1:45 p.m. - 2:05 p.m. [iCal]
Invited talk: Emily Denton (Talk)
Fri 2:05 p.m. - 2:25 p.m. [iCal]
Invited talk: Percy Liang (Talk)
Fri 2:25 p.m. - 3:00 p.m. [iCal]

Lightning talks:
- An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution (Rosanne Liu)
- Learning the structure of deep sparse graphical models (Zoubin Ghahramani)
- Lessons Learned from The Lottery Ticket Hypothesis (Jonathan Frankle)
- FiLM: Visual Reasoning with a General Conditioning Layer (Ethan Perez)
- DLPaper2Code: Auto-Generation of Code from Deep Learning Research Papers (Anush Sankaran)
- Conditional computation in neural networks for faster models (Emmanuel Bengio)

Fri 3:00 p.m. - 4:00 p.m. [iCal]
Posters + Coffee Break (Posters)
Fri 4:00 p.m. - 4:20 p.m. [iCal]
Invited talk: David Duvenaud, "Reflecting on Neural ODEs" (Talk)
Fri 4:20 p.m. - 4:40 p.m. [iCal]
Invited talk: Michael Littman, "Reflecting on 'Markov games that people play'" (Talk)
Fri 4:40 p.m. - 5:40 p.m. [iCal]
Retrospectives brainstorming session: how do we produce impact? (Structured group brainstorming)

Author Information

Ryan Lowe (McGill University / OpenAI)
Yoshua Bengio (Mila)

Yoshua Bengio is a Full Professor in the Department of Computer Science and Operations Research at the Université de Montréal, scientific director and founder of Mila and of IVADO, recipient of the 2018 Turing Award, Canada Research Chair in Statistical Learning Algorithms, and a Canada CIFAR AI Chair. A pioneer of deep learning, in 2018 he received the most citations per day of any computer scientist worldwide. He is an Officer of the Order of Canada and a member of the Royal Society of Canada, and has been awarded the Killam Prize, the Marie-Victorin Prize, and the 2017 Radio-Canada Scientist of the Year award. He is a member of the NeurIPS advisory board, a co-founder of the ICLR conference, and program director of the CIFAR program on Learning in Machines and Brains. His goal is to help uncover the principles that give rise to intelligence through learning, and to foster the development of AI for the benefit of all.

Joelle Pineau (McGill University)

Joelle Pineau is an Associate Professor and William Dawson Scholar at McGill University where she co-directs the Reasoning and Learning Lab. She also leads the Facebook AI Research lab in Montreal, Canada. She holds a BASc in Engineering from the University of Waterloo, and an MSc and PhD in Robotics from Carnegie Mellon University. Dr. Pineau's research focuses on developing new models and algorithms for planning and learning in complex partially-observable domains. She also works on applying these algorithms to complex problems in robotics, health care, games and conversational agents. She serves on the editorial board of the Journal of Artificial Intelligence Research and the Journal of Machine Learning Research and is currently President of the International Machine Learning Society. She is a recipient of NSERC's E.W.R. Steacie Memorial Fellowship (2018), a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a Senior Fellow of the Canadian Institute for Advanced Research (CIFAR) and in 2016 was named a member of the College of New Scholars, Artists and Scientists by the Royal Society of Canada.

Michela Paganini (Facebook AI Research)
Jessica Forde (Brown University)
Shagun Sodhani (MILA, University of Montreal)
Abhishek Gupta (Microsoft)
Joel Lehman (Uber AI)
Peter Henderson (McGill University)
Kanika Madan (University of Toronto)
Koustuv Sinha (McGill University / Mila / FAIR)

PhD student at McGill University / Mila, advised by Dr Joelle Pineau & William L Hamilton. Research Assistant at Facebook AI Research (FAIR), Montreal. I primarily work on logical language understanding, systematic generalization, logical graphs and dialog systems.

Xavier Bouthillier (Université de Montréal)
