

Poster in Workshop: I Can’t Believe It’s Not Better (ICBINB): Failure Modes in the Age of Foundation Models

A Study on Improving Reasoning in Language Models

Yuqing Du · Alexander Havrilla · Sainbayar Sukhbaatar · Pieter Abbeel · Roberta Raileanu


Abstract:

Accurately carrying out complex reasoning is a crucial component of deployable and reliable language models. While current language models can exhibit this capability with few-shot guidance, accurate reasoning is largely restricted to larger model sizes. In this work, we explore methods for improving the reasoning capabilities of smaller language models, which are more deployable than their larger counterparts. Specifically, we study variants of supervised learning, online reinforcement learning with PPO, and distillation from larger models. Surprisingly, for reasoning tasks such as CommonsenseQA and GSM8K, we find that simple filtered supervised learning often outperforms reward-conditioned supervised learning, and that simpler iterative supervised learning performs on par with online reinforcement learning.
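
To make the filtered supervised learning baseline concrete, the sketch below illustrates the general rejection-sampling recipe under stated assumptions: sample several rationales per question, keep only those whose final answer matches the reference, and fine-tune on the survivors. The model name, toy data, and answer-extraction helper are illustrative placeholders, not the paper's actual setup.

```python
# Minimal sketch of filtered supervised learning (rejection-sampling fine-tuning)
# for a GSM8K-style reasoning task. Assumes a HuggingFace causal LM; all names
# below (model choice, extract_answer convention, toy data) are hypothetical.
import re
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder small model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

def extract_answer(text: str) -> str:
    """Assumed convention: the last number in the generation is the final answer."""
    numbers = re.findall(r"-?\d+\.?\d*", text)
    return numbers[-1] if numbers else ""

def sample_and_filter(question: str, gold_answer: str, k: int = 8):
    """Generate k candidate rationales and keep those reaching the gold answer."""
    inputs = tokenizer(question, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.8,
        max_new_tokens=128,
        num_return_sequences=k,
        pad_token_id=tokenizer.eos_token_id,
    )
    completions = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    return [c for c in completions if extract_answer(c) == gold_answer]

# Build the filtered fine-tuning set, then run standard causal-LM updates on it.
train_pairs = [("Q: 2 + 3 = ? A:", "5")]  # toy stand-in for real training data
filtered = [c for q, a in train_pairs for c in sample_and_filter(q, a)]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
for text in filtered:
    batch = tokenizer(text, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The iterative variant referenced in the abstract repeats this sample-filter-fine-tune loop for several rounds, regenerating candidates with the latest model each time.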
