

Poster in Workshop: Statistical Frontiers in LLMs and Foundation Models

Learning to Generate Verbalized Confidences

Sophia Hager · Nicholas Andrews

Keywords: [ Calibration ] [ verbalized confidence ]

[ Project Page ]
Sat 14 Dec, noon – 12:45 p.m. PST

Abstract:

In many use cases, it would be desirable for language models to be able to verbalize the likelihood that their responses are correct; for instance, if a user asks a factual question of a large language model (LLM), qualifying its answer with "low confidence" may prompt the user to check the veracity of the answer on their own. For these verbalized expressions of uncertainty to be meaningful, they should reflect the expected error rates at that level of confidence. However, current models are not able to consistently verbalize meaningful confidences when prompted to do so, often displaying overconfidence while making incorrect predictions. We explore a simple procedure to teach an LLM to verbalize calibrated confidences by using held-out data to map initial uncertainty estimates to meaningful probabilities and then creating examples annotated with verbalized probabilities for supervised fine-tuning. We report preliminary experiments on a question answering task with a smaller language model, which suggest our procedure yields verbalized confidences on held-out data that correlate with observed error rates. We compare different methods of encoding the verbalized confidences for fine-tuning and assess the impact on accuracy and calibration. Finally, we discuss extensions of the proposed approach and future work.
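The abstract describes the procedure only at a high level (map initial uncertainty estimates to probabilities on held-out data, then build fine-tuning examples annotated with verbalized confidences). The following is a minimal Python sketch of one way those two stages could look. It assumes a raw confidence score per example (e.g., an answer token probability) and uses isotonic regression as the calibration map; neither choice is specified in the abstract, and all names are hypothetical.

# Hypothetical sketch of the two-stage procedure summarized in the abstract:
# (1) fit a calibration map from raw uncertainty estimates to observed accuracy
#     on held-out data, (2) use the mapped probabilities to annotate supervised
#     fine-tuning examples with verbalized confidences.
# The isotonic-regression choice and all names are illustrative assumptions.
from sklearn.isotonic import IsotonicRegression

def fit_calibration_map(raw_scores, correct):
    """Map raw uncertainty estimates to empirical correctness rates."""
    # raw_scores: model-derived confidence per held-out example
    # correct: 1 if the model's answer was right, else 0
    calibrator = IsotonicRegression(out_of_bounds="clip", y_min=0.0, y_max=1.0)
    calibrator.fit(raw_scores, correct)
    return calibrator

def make_sft_example(question, answer, raw_score, calibrator, num_bins=10):
    """Create a fine-tuning example whose target verbalizes calibrated confidence."""
    p = float(calibrator.predict([raw_score])[0])
    # One possible encoding: round to the nearest 10% and state it as a percentage.
    verbalized = f"{round(p * num_bins) * (100 // num_bins)}%"
    prompt = f"Question: {question}\nAnswer and state your confidence."
    target = f"Answer: {answer}\nConfidence: {verbalized}"
    return {"prompt": prompt, "target": target}

The percentage rounding above stands in for one of the encoding choices the abstract says are compared; coarser bins or qualitative labels such as "low confidence" would be alternative encodings of the same calibrated probability.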
