

Tutorial

Pay Attention to What You Need: Do Structural Priors Still Matter in the Age of Billion Parameter Models?

Irina Higgins · Antonia Creswell · Sébastien Racanière

Moderators: Mark van der Wilk · Søren Hauberg


Abstract:

The last few years have seen the emergence of billion-parameter models trained on 'infinite' data that achieve impressive performance on many tasks, suggesting that big data and big models may be all we need. But how far can this approach take us, particularly in domains where data is more limited? In many situations, adding structured architectural priors to models may be key to achieving faster learning, better generalisation and learning from less data. Structure can be added at the level of perception and at the level of reasoning, the latter being the goal of GOFAI research. In this tutorial we will use the ideas of symmetries and symbolic reasoning as an overarching theoretical framework to describe many of the common structural priors that have proven successful in building more data-efficient and generalisable perceptual models, as well as models that support better reasoning in neuro-symbolic approaches.
