Talk in Tutorial: Pay Attention to What You Need: Do Structural Priors Still Matter in the Age of Billion Parameter Models?

Balancing Structure In NeuroSymbolic Methods

Antonia Creswell


Abstract:

Deep learning has led to incredible successes across a very broad range of applications within AI. However, deep learning models remain black boxes: they often cannot explain how they reach their final answers, and they give no clear signal as to what went wrong when they fail. Further, they typically require huge amounts of data during training and often do not generalise well beyond the data they were trained on. But AI has not always been this way. In the “good old days”, GOFAI (Good Old-Fashioned AI) did not require any data at all and its solutions were interpretable, but these AIs were not grounded in the real world. Moreover, unlike deep learning, where a single general algorithm can learn to solve many different problems, a GOFAI algorithm can typically be applied only to a single task. So can we have our cake and eat it too? Is there an approach to AI that requires a limited amount of data, is interpretable, generalises well to new problems, and can be applied to a wide variety of tasks? One interesting, developing area of AI that could answer this question is NeuroSymbolic AI, which combines deep learning and logical reasoning in a single model. In this tutorial we will explore these models through the lens of “structure”, identifying how varying degrees of structure in a model affect its interpretability, how well it generalises to new data, the generality of the algorithm, and the variety of tasks it can be applied to.