

Spotlight Poster

Graph-based Uncertainty Metrics for Long-form Language Model Generations

Mingjian Jiang · Yangjun Ruan · Prasanna Sattigeri · Salim Roukos · Tatsunori Hashimoto


Abstract:

Recent advancements in Large Language Models (LLMs) have significantly improved text generation capabilities, but these systems still hallucinate, and granular uncertainty estimation for long-form LLM generations remains challenging. In this work, we propose Graph Uncertainty, which represents the relationship between LLM generations and the claims within them as a bipartite graph and estimates claim-level uncertainty with a family of graph centrality metrics. Under this view, existing uncertainty estimation methods based on self-consistency can be seen as using degree centrality as the uncertainty measure, and we show that more sophisticated alternatives such as closeness centrality provide consistent gains in claim-level uncertainty estimation. Moreover, we present uncertainty-aware decoding techniques that leverage both the graph structure and the uncertainty estimates to improve the factuality of LLM generations by preserving only the most reliable claims. Compared to existing methods, our graph-based uncertainty metrics yield an average 5.7% relative gain across various long-form generation settings, and our end-to-end system provides consistent 2-4% gains in factuality over existing decoding techniques.
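The bipartite view described in the abstract can be illustrated in a few lines of code: sampled generations on one side, extracted claims on the other, with an edge whenever a generation supports a claim. A claim's degree centrality then counts its supporting generations (the self-consistency baseline), while closeness centrality additionally accounts for shortest-path distances through the rest of the graph. The toy graph and node names below are illustrative assumptions, not data or code from the paper:

```python
from collections import deque

# Hypothetical bipartite graph: sampled generations (g*) vs. extracted
# claims (c*); an edge means the generation supports the claim.
edges = {
    "g1": {"c1", "c2"},
    "g2": {"c1", "c3"},
    "g3": {"c1"},
}

# Build an undirected adjacency map over both node sets.
adj = {}
for g, claims in edges.items():
    adj.setdefault(g, set()).update(claims)
    for c in claims:
        adj.setdefault(c, set()).add(g)

def degree_centrality(node):
    """Fraction of all other nodes directly connected to `node`
    (proportional to the number of supporting generations)."""
    return len(adj[node]) / (len(adj) - 1)

def closeness_centrality(node):
    """Reciprocal of the mean BFS shortest-path distance from `node`
    to every reachable node."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(d for n, d in dist.items() if n != node)
    return (len(dist) - 1) / total if total else 0.0

for c in ("c1", "c2", "c3"):
    print(c, round(degree_centrality(c), 3), round(closeness_centrality(c), 3))
```

In this toy graph, c1 (supported by all three generations) scores highest under both metrics, but closeness also distinguishes claims by how centrally they sit among the sampled generations rather than by direct support count alone.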
