

Talk in Expo Workshop: Perspectives on Neurosymbolic Artificial Intelligence Research

Challenges for Compositional Generalization

Tim Klinger


Abstract:

Intuitively, compositional generalization is the ability to combine things you already know in new ways to solve a task, with little or no additional training. People do this very well, even as young children, but neural networks struggle. This matters because, without such a mechanism, the cost of generalizing can be exponential in the number of variables in the task representation: a task described by n binary attributes admits 2^n combinations, so a learner that cannot compose may need training examples for nearly all of them. Recent work evaluating neural networks for compositional generalization has mostly focused on natural language translation. In this presentation I'll review some of that work and discuss our experiments on composition in a purely geometric domain with no natural language; instead, concepts are specified in a first-order logical language, which imposes richer constraints than those of context-free grammars. Our preliminary results indicate that neural networks do not generalize compositionally in this setting either.
