Poster in Workshop: MATH-AI: Toward Human-Level Mathematical Reasoning

Out-of-Distribution Generalization in Algorithmic Reasoning Through Curriculum Learning

Andrew Nam · Mustafa Abdool · Trevor Maxfield · James McClelland


Abstract:

Out-of-distribution generalization (OODG) is a longstanding challenge for neural networks. The challenge is especially apparent in tasks with well-defined variables and rules, where explicit application of the rules can solve problems independently of the particular values of the variables. Large transformer-based language models have pushed the boundaries of how well neural networks can generalize to novel inputs, but their complexity obfuscates how they achieve such robustness. As a step toward understanding how transformer-based systems generalize, we explore the question of OODG in smaller-scale transformers. Using a reasoning task based on the puzzle Sudoku, we show that OODG can occur on complex problems if the training set includes examples sampled from the whole distribution of simpler component tasks.