Stop Anthropomorphizing Intermediate Tokens as Reasoning/Thinking Traces!
Subbarao Kambhampati · Kaya Stechly · Karthik Valmeekam · Lucas Saldyt · Siddhant Bhambri · Vardhan Palod · Atharva Gundawar · Soumya Rani Samineni · Durgesh Kalwar · Upasana Biswas
Abstract
Intermediate token generation (ITG), in which a model produces a sequence of tokens before its final solution, has been proposed as a way to improve the performance of language models on reasoning tasks. These intermediate tokens have been called "reasoning traces" or even "thoughts" -- implicitly anthropomorphizing the model and implying that these tokens resemble the steps a human might take when solving a challenging problem. In this position paper, we present evidence that this anthropomorphization is not a harmless metaphor; rather, it is quite dangerous -- it confuses the nature of these models and how to use them effectively, and it leads to questionable research.