Foundations of Reasoning in Language Models
Abstract
Our workshop’s goal is to advance foundational understanding, principled innovation, and rigorous scientific evaluation of reasoning in language models. Such advances rest on theoretical analyses and controlled empirical studies that illuminate how reasoning emerges, where it fails, and how it can be systematically improved.
We aim to foster dialogue among communities with complementary strengths---those building theoretical models of reasoning phenomena, those designing experiments that reveal how reasoning emerges or fails in practice, and those developing algorithmic innovations that advance it---around three primary questions:
1. How are language models able to solve complex tasks, and what do they still struggle with?
2. What fundamental challenges stand in the way of advancing reasoning capabilities?
3. What algorithmic innovations can overcome these obstacles?