Probabilistic Soundness Guarantees in LLM Reasoning Chains
Weiqiu You · Anton Xue · Shreya Havaldar · Delip Rao · Helen Jin · Chris Callison-Burch · Eric Wong
Abstract
In reasoning chains generated by large language models (LLMs), initial errors often propagate and undermine the reliability of the final conclusion. Current LLM-based error detection methods often fail to detect propagated errors because earlier errors can corrupt judgments of downstream reasoning. To better detect such errors, we introduce Autoregressive Reasoning Entailment Stability (ARES), a probabilistic framework that evaluates each reasoning step based solely on previously verified premises. We find that ARES reliably detects, with probabilistic guarantees, propagated reasoning errors that baseline methods fail to find.
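The abstract describes step-by-step verification in which each step is judged only against previously verified context, so earlier errors cannot corrupt later judgments. The paper gives the actual procedure and guarantees; the following is a minimal illustrative sketch under assumptions, where `entailment_prob` is a hypothetical estimator (e.g., an LLM judge returning a probability) and the threshold is a placeholder.

```python
from typing import Callable, List, Tuple

def verify_chain_sketch(
    premises: List[str],
    steps: List[str],
    entailment_prob: Callable[[List[str], str], float],  # hypothetical estimator, not the paper's API
    threshold: float = 0.9,  # placeholder confidence level
) -> List[Tuple[str, float, bool]]:
    """Sketch of autoregressive, step-wise verification.

    Each reasoning step is scored only against the premises and the steps
    already accepted as sound, so an earlier error does not enter the
    context used to judge downstream steps.
    """
    verified = list(premises)  # context of accepted statements
    results = []
    for step in steps:
        p = entailment_prob(verified, step)  # estimated P(step entailed | verified context)
        sound = p >= threshold
        results.append((step, p, sound))
        if sound:
            verified.append(step)  # only accepted steps extend the context
    return results
```

The key design choice reflected here is that rejected steps are excluded from the verification context, which is what prevents an initial error from propagating into the judgments of subsequent steps.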