Large language models have recently shown promising progress in mathematical reasoning when fine-tuned on human-generated sequences that walk through a series of solution steps. However, these solution sequences are not formally structured, and the resulting model-generated sequences may not reflect the kind of systematic reasoning we might expect an expert human to produce. In this paper, we study how to build stronger reasoning capability in language models using the idea of relational abstractions. We introduce new types of sequences that more explicitly provide an abstract characterization of the transitions through intermediate solution steps to the goal state. We find that models supplied with such sequences as prompts can solve tasks with significantly higher accuracy, and that models trained to produce such sequences solve problems better than models trained with previously used human-generated sequences and other baselines. Our work thus takes several steps toward elucidating and improving how language models perform on tasks requiring multi-step mathematical reasoning.
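To make the idea concrete, the sketch below contrasts an unstructured step-by-step solution trace with one annotated by relational abstractions on a toy arithmetic word problem. The relation names (`scale`, `combine`, `goal`), variable names, and prompt format are hypothetical illustrations of the general idea, not the paper's actual annotation scheme.

```python
# Illustrative only: a toy contrast between an unstructured solution trace
# and one annotated with relational abstractions. The relation names and
# prompt format below are hypothetical, not the paper's actual scheme.

problem = ("Ann has 3 apples and buys 2 bags of 4 apples each. "
           "How many apples does she have?")

# Unstructured, human-style solution steps (baseline-style supervision).
plain_steps = [
    "2 bags of 4 apples is 8 apples.",
    "3 apples plus 8 apples is 11 apples.",
    "The answer is 11.",
]

# The same solution with each step tagged by an abstract relation that
# characterizes the transition from one intermediate state to the next.
abstract_steps = [
    "scale(bags=2, per_bag=4) -> bought=8",
    "combine(start=3, bought=8) -> total=11",
    "goal(total) -> answer=11",
]

def build_prompt(question: str, steps: list[str]) -> str:
    """Assemble a prompt that places the reasoning trace after the question."""
    return question + "\n" + "\n".join(steps)

print(build_prompt(problem, abstract_steps))
```

Either trace could be used as fine-tuning supervision or placed in-context as a prompt; the paper's finding is that the abstract, relation-labeled form yields better multi-step reasoning than the unstructured form.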
Author Information
Andrew Nam (Stanford University)
James McClelland (Stanford University)
Mengye Ren (NYU)
Chelsea Finn (Google)
More from the Same Authors
- 2021: Noether Networks: Meta-Learning Useful Conserved Quantities
  Ferran Alet · Dylan Doblar · Allan Zhou · Josh Tenenbaum · Kenji Kawaguchi · Chelsea Finn
- 2022: Wild-Time: A Benchmark of in-the-Wild Distribution Shift over Time
  Caroline Choi · Huaxiu Yao · Yoonho Lee · Pang Wei Koh · Chelsea Finn
- 2022: Systematic Generalization and Emergent Structures in Transformers Trained on Structured Tasks
  Yuxuan Li · James McClelland
- 2022: Neural Network Online Training with Sensitivity to Multiscale Temporal Structure
  Matt Jones · Tyler Scott · Gamaleldin Elsayed · Mengye Ren · Katherine Hermann · David Mayo · Michael Mozer
- 2022: Out-of-Distribution Generalization in Algorithmic Reasoning Through Curriculum Learning
  Andrew Nam · Mustafa Abdool · Trevor Maxfield · James McClelland
- 2022: Invited Talk: James McClelland
  James McClelland
- 2022 Poster: Data Distributional Properties Drive Emergent In-Context Learning in Transformers
  Stephanie Chan · Adam Santoro · Andrew Lampinen · Jane Wang · Aaditya Singh · Pierre Richemond · James McClelland · Felix Hill
- 2021 Workshop: Math AI for Education (MATHAI4ED): Bridging the Gap Between Research and Smart Education
  Pan Lu · Yuhuai Wu · Sean Welleck · Xiaodan Liang · Eric Xing · James McClelland