

Poster

Are Uncertainty Quantification Capabilities of Evidential Deep Learning a Mirage?

Maohao Shen · Jongha (Jon) Ryu · Soumya Ghosh · Yuheng Bu · Prasanna Sattigeri · Subhro Das · Gregory Wornell

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

This paper questions the effectiveness of a modern predictive uncertainty quantification approach, called evidential deep learning (EDL), in which a single neural network model is trained to learn a meta-distribution over the predictive distribution by minimizing a specific objective function. Despite their perceived strong empirical performance on downstream tasks, a line of recent studies by Bengs et al. identifies limitations of the existing methods and concludes that their learned epistemic uncertainties are unreliable, e.g., in that they are non-vanishing even with infinite data. Building on and sharpening such analysis, we 1) provide a sharper understanding of the asymptotic behavior of a wide class of EDL methods by unifying various objective functions; 2) reveal that EDL methods can be better interpreted as out-of-distribution detection algorithms based on energy-based models; and 3) conduct extensive ablation studies to better assess their empirical effectiveness with real-world datasets. Through all these analyses, we conclude that even when EDL methods are empirically effective on downstream tasks, this occurs despite their poor uncertainty quantification capabilities. Our investigation suggests that incorporating model uncertainty can help EDL methods faithfully quantify uncertainties and further improve performance on representative downstream tasks, albeit at the cost of additional computational complexity.
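
For readers unfamiliar with the setup being analyzed, the sketch below illustrates the common Dirichlet parameterization used by many EDL classifiers (in the spirit of Sensoy et al., 2018): a single network outputs non-negative per-class evidence, which defines a meta-distribution over the predictive distribution and yields the usual evidence-based uncertainty score. This is an illustrative assumption about a typical EDL head, not the unified objective analysis or the energy-based reinterpretation developed in the paper; all names (EvidentialHead, predictive_and_uncertainty) are hypothetical.

```python
# Minimal sketch of a Dirichlet-based evidential classifier head (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialHead(nn.Module):
    """Maps features to Dirichlet concentration parameters alpha = evidence + 1."""

    def __init__(self, in_features: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(in_features, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        evidence = F.softplus(self.fc(x))  # non-negative per-class evidence
        return evidence + 1.0              # Dirichlet concentrations alpha >= 1


def predictive_and_uncertainty(alpha: torch.Tensor):
    """Expected class probabilities and the standard evidence-based uncertainty score."""
    strength = alpha.sum(dim=-1, keepdim=True)           # total evidence S
    probs = alpha / strength                              # mean of the Dirichlet
    uncertainty = alpha.shape[-1] / strength.squeeze(-1)  # u = K / S, in (0, 1]
    return probs, uncertainty


# Usage on random features: batch of 4, feature dim 16, 3 classes.
head = EvidentialHead(16, 3)
alpha = head(torch.randn(4, 16))
probs, u = predictive_and_uncertainty(alpha)
print(probs.shape, u.shape)  # torch.Size([4, 3]) torch.Size([4])
```

The paper's critique concerns precisely this kind of single-model uncertainty score: the learned u need not vanish as data grows, which motivates the reinterpretation of such scores as out-of-distribution detectors rather than faithful epistemic uncertainty estimates.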
