

Poster

Variance estimation in compound decision theory under boundedness

Subhodh Kotekal

West Ballroom A-D #6510
Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: The normal means model is often studied under the assumption of a known variance. However, the variance is frequently unknown in applications, and basic theoretical questions remain open in this setting. This article establishes that the sharp minimax rate of variance estimation in squared error is $(\frac{\log\log n}{\log n})^2$ under arguably the mildest assumption ensuring identifiability: bounded means. The rate-optimal estimator proposed in this article achieves the optimal rate by estimating $O\left(\frac{\log n}{\log\log n}\right)$ cumulants and leveraging a variational representation of the noise variance in terms of the cumulants of the data distribution. The minimax lower bound involves a moment matching construction.
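The abstract's key structural fact is that cumulants are additive under convolution and a Gaussian has no cumulants beyond order two, so the cumulants of the data separate the contribution of the bounded means from the noise variance. The sketch below is a minimal illustration of that identity on simulated data, not the paper's estimator; the sample size, noise level, and uniform choice of bounded means are hypothetical, and it only checks the second- and fourth-order cumulants rather than the $O(\log n / \log\log n)$ cumulants used by the rate-optimal procedure.

```python
import numpy as np
from scipy import stats

# Normal means model: X_i = theta_i + sigma * eps_i with bounded means |theta_i| <= 1.
# All simulation parameters below are illustrative assumptions, not from the paper.
rng = np.random.default_rng(0)
n = 100_000
sigma = 0.7
theta = rng.uniform(-1.0, 1.0, size=n)        # bounded means
x = theta + sigma * rng.standard_normal(n)    # observed data

# Cumulants add under convolution and a Gaussian has zero cumulants beyond
# order 2, so kappa_2(X) = Var_emp(theta) + sigma^2 while kappa_k(X) equals
# kappa_k(theta) for k >= 3.  k-statistics give unbiased cumulant estimates.
k2 = stats.kstat(x, 2)   # estimate of kappa_2(X)
k4 = stats.kstat(x, 4)   # estimate of kappa_4(X)

print(f"kappa_2(X) ~ {k2:.4f}  vs  Var(theta) + sigma^2 = {np.var(theta) + sigma**2:.4f}")
print(f"kappa_4(X) ~ {k4:.4f}  vs  kappa_4(theta) = {stats.kstat(theta, 4):.4f}")
```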
