

Poster

Enhancing Large Language Models via Additional Pre-Training on Principled Synthetic Logic Corpus

Terufumi Morishita · Gaku Morio · Atsuki Yamaguchi · Yasuhiro Sogawa

East Exhibit Hall A-C #2807
Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: Large language models (LLMs) are capable of solving a wide range of tasks, yet they have struggled with reasoning. To address this, we propose $\textbf{A}$dditional $\textbf{L}$ogic $\textbf{P}$re-$\textbf{T}$raining (ALPT), which aims to enhance LLMs' logical reasoning ability by training on program-generated synthetic reasoning samples. We first discuss how to design high-quality samples and then construct a synthetic corpus named PureLogicDiverse. Empirical results demonstrate that ALPT on PureLogicDiverse significantly enhances the overall capability of state-of-the-art LLMs, including LLaMA3-70B, with gains of up to 6 points on benchmarks such as BBH. Task-wise analyses reveal substantial improvements in logical reasoning abilities, with gains of up to 15 points on relevant benchmarks. Furthermore, performance gains of up to 5 points in NLI tasks demonstrate the successful integration of knowledge acquired during pre-training with logical reasoning abilities newly acquired through ALPT.
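To make "program-generated synthetic reasoning samples" concrete, here is a minimal, hypothetical sketch of such a generator. It is not the authors' actual PureLogicDiverse pipeline (which is not shown on this page); it only illustrates the general idea of emitting logic samples with nonsense predicate names, so a model must follow the rule structure rather than world knowledge. All names (`make_sample`, the predicate list) are illustrative assumptions.

```python
import random

def make_sample(rng: random.Random) -> dict:
    # Abstract, meaningless predicate names force the model to rely
    # on logical structure instead of memorized facts.
    a, b, c = rng.sample(["wump", "blick", "snorp", "trell", "gorp"], 3)
    facts = [
        f"If something is a {a}, then it is a {b}.",
        f"If something is a {b}, then it is a {c}.",
        f"Alex is a {a}.",
    ]
    # Chain two modus ponens steps to derive the conclusion.
    reasoning = (
        f"Alex is a {a}, so Alex is a {b}. "
        f"Alex is a {b}, so Alex is a {c}."
    )
    return {
        "context": " ".join(facts),
        "question": f"Is Alex a {c}?",
        "reasoning": reasoning,
        "answer": "yes",
    }

sample = make_sample(random.Random(0))
print(sample["context"])
print(sample["question"], "->", sample["answer"])
```

Because the samples are generated by a program, deduction chains of arbitrary depth and diversity can be produced at scale, which is the property such corpora exploit for additional pre-training.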
