In this tutorial, we will provide modern perspectives on abstraction and reasoning in AI systems. Traditionally, symbolic and probabilistic methods have dominated the domains of concept formation, abstraction, and automated reasoning. More recently, deep learning-based approaches have led to breakthroughs in some domains, such as hard search problems in games and combinatorial tasks. However, the resulting systems remain limited in scope and capability, especially in producing interpretable results and verifiable abstractions.

Here, we will address a set of questions: Why is the capacity for conceptual abstraction essential for intelligence, in both humans and machines? How can we get machines to learn flexible and extendable concepts that transfer between domains? What do we mean by "strong reasoning capabilities," and how do we measure these capabilities in AI systems? How do deep learning-based methods change the landscape of computer-assisted reasoning? What are the failure modes of such methods, and what are possible solutions?
Schedule

7:00pm - 7:40pm UTC — Speaker: Francois Chollet — Title: Why abstraction is the key, and what we're still missing
7:40pm - 7:50pm UTC — Questions
7:50pm - 8:30pm UTC — Speaker: Melanie Mitchell — Title: Mechanisms of abstraction and analogy in natural and artificial intelligence
8:30pm - 8:40pm UTC — Questions
8:40pm - 9:20pm UTC — Speaker: Christian Szegedy — Title: Deep learning for mathematical reasoning
9:20pm - 9:30pm UTC — Questions