Poster

Self-Evolution Decoding for Improving Factuality in Large Language Models

Jianyi Zhang · Da-Cheng Juan · Cyrus Rashtchian · Chun-Sung Ferng · Heinrich Jiang · Yiran Chen

East Exhibit Hall A-C #3311
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

To enhance the reliability and truthfulness of large language models (LLMs), we introduce Self-Evolution Decoding (SED), a novel decoding strategy that relies on no external knowledge bases and requires no additional fine-tuning. SED improves the quality of LLM outputs by optimizing an implicit objective function through the inherent self-evolution of the model's hidden states. This allows outputs to be refined continually during inference, akin to further training, yielding better accuracy and interpretability than conventional decoding methods. On established benchmarks such as TruthfulQA, SED improves factual accuracy by up to 10% over traditional methods. These results indicate that SED increases the factual accuracy of LLM outputs without compromising the model's natural-language fluency, making it well suited to critical applications that demand high accuracy and reliability.
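The abstract does not spell out the mechanism, but one way to picture "optimizing an implicit objective via the self-evolution of hidden states" is to treat the layer-to-layer update of a token's hidden state as one step of an implicit optimizer, and take a further step in that direction before projecting to the vocabulary. The sketch below is a hypothetical illustration under that reading, not the paper's algorithm: the function name `refined_logits`, the last-two-layers extrapolation rule, and the step size `alpha` are all assumptions made for illustration.

```python
# Hypothetical sketch: refine next-token logits by extrapolating the
# trajectory of hidden states across transformer layers. This is an
# assumption-heavy illustration, not the published SED algorithm.

import torch

def refined_logits(hidden_states, lm_head, alpha=0.1):
    """
    hidden_states: list of [batch, d_model] tensors, one per layer,
                   for the current decoding position (earliest layer first).
    lm_head:       projection from d_model to vocabulary size.
    alpha:         hypothetical step size for the extrapolation.
    """
    h_prev, h_last = hidden_states[-2], hidden_states[-1]
    # Treat the last layer-to-layer update as an implicit optimization
    # step and continue one extra step in the same direction.
    h_next = h_last + alpha * (h_last - h_prev)
    return lm_head(h_next)

# Toy usage with random tensors standing in for a real model's states.
torch.manual_seed(0)
d_model, vocab = 16, 100
lm_head = torch.nn.Linear(d_model, vocab, bias=False)
states = [torch.randn(1, d_model) for _ in range(4)]
logits = refined_logits(states, lm_head)
print(logits.argmax(dim=-1))  # greedy next-token choice under the refined logits
```

In an actual decoder, the per-layer hidden states would come from the model's forward pass (e.g., `output_hidden_states=True` in Hugging Face Transformers) rather than random tensors, and the extrapolated logits would replace the standard ones at each generation step.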
