

Talk in Tutorial: Pay Attention to What You Need: Do Structural Priors Still Matter in the Age of Billion Parameter Models?

Why do we Need Structure and Where does it Come From?

Irina Higgins


Abstract:

Ever since the 1950s, AI scientists have been experimenting with the prospect of using computer technology to emulate human intelligence. Universal function approximation theorems promised success in this pursuit, provided that the tasks human intelligence solves could be formulated as continuous function approximation problems, and that enough scale was available to train MLPs of arbitrary width or depth. Yet we now find ourselves in the age of billion-parameter models and are still far from being able to replicate all aspects of human intelligence. Moreover, our models are not MLPs, but convolutional, recurrent, or otherwise structured neural networks. In this talk we will discuss why that is, and consider the general principles that can guide us towards building a new generation of neural networks with the kinds of structure needed to solve the full spectrum of tasks that human intelligence can solve.
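Since the abstract contrasts unstructured MLPs with structured networks, here is a minimal sketch (not from the talk; all names, shapes, and the 1-D setting are illustrative) of the kind of structural prior a convolution encodes: weight sharing makes its output translation-equivariant, whereas a fully connected layer imposes no such constraint.

```python
# Illustrative sketch: a shared convolutional kernel bakes in a
# translation-equivariance prior; an unconstrained dense layer does not.
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Valid 1-D convolution: the same kernel w is applied at every position."""
    n = len(x) - len(w) + 1
    return np.array([x[i:i + len(w)] @ w for i in range(n)])

def dense(x, W):
    """Fully connected (MLP) layer: each output weights every input freely."""
    return W @ x

x = rng.normal(size=16)
x_shifted = np.roll(x, 1)          # translate the input by one step

w = rng.normal(size=3)             # shared convolutional kernel
W = rng.normal(size=(14, 16))      # unconstrained dense weights

# Shifting the input shifts the convolution's output (away from the boundary)...
print(np.allclose(conv1d(x_shifted, w)[1:], conv1d(x, w)[:-1]))  # True

# ...but the dense layer's output changes arbitrarily: no structural prior.
print(np.allclose(dense(x_shifted, W)[1:], dense(x, W)[:-1]))    # False
```

The contrast is the point: the convolution's prior restricts the hypothesis space to shift-respecting functions, which is one concrete example of the structure the talk asks about.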