Poster
Finite-Sample Maximum Likelihood Estimation of Location
Shivam Gupta · Jasper Lee · Eric Price · Paul Valiant
Hall J (level 1) #822
Abstract:
We consider 1-dimensional location estimation, where we estimate a parameter λ from n samples λ + η_i, with each η_i drawn i.i.d. from a known distribution f. For fixed f, the maximum-likelihood estimate (MLE) is well known to be optimal in the limit as n → ∞: it is asymptotically normal with variance matching the Cramér-Rao lower bound of 1/(nI), where I is the Fisher information of f. However, this bound does not hold for finite n, or when f varies with n. We show for arbitrary f and n that one can recover a similar theory based on the Fisher information of a smoothed version of f, where the smoothing radius decays with n.
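The sketch below is a rough, hedged illustration of the smoothed-MLE idea described in the abstract, not the paper's actual construction: it convolves the known noise density with a Gaussian kernel whose radius shrinks with n (the Gaussian kernel, the n^{-1/2} rate, the Laplace noise example, the grid-based numerical convolution, and the function name smoothed_mle_location are all assumptions made for illustration).

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def smoothed_mle_location(samples, noise_pdf, smoothing_radius, grid_halfwidth=10.0):
    """Estimate the location parameter by maximizing the log-likelihood
    under a Gaussian-smoothed version of the known noise density.
    (Illustrative sketch only; the smoothing choices here are assumptions.)"""
    # Numerically convolve the known noise density with a Gaussian kernel
    # of the given radius on a symmetric grid.
    grid = np.linspace(-grid_halfwidth, grid_halfwidth, 4001)
    dx = grid[1] - grid[0]
    kernel = norm.pdf(grid, scale=smoothing_radius)
    smoothed = np.convolve(noise_pdf(grid), kernel, mode="same") * dx
    smoothed = np.maximum(smoothed, 1e-300)  # guard against log(0)

    def neg_log_lik(lam):
        # Evaluate the smoothed density at the residuals x_i - lam by interpolation.
        vals = np.interp(samples - lam, grid, smoothed, left=1e-300, right=1e-300)
        return -np.sum(np.log(vals))

    lo, hi = samples.min(), samples.max()
    return minimize_scalar(neg_log_lik, bounds=(lo, hi), method="bounded").x

# Example: Laplace noise with a smoothing radius shrinking with n (illustrative rate).
rng = np.random.default_rng(0)
n, true_lambda = 200, 3.0
samples = true_lambda + rng.laplace(scale=1.0, size=n)
laplace_pdf = lambda t: 0.5 * np.exp(-np.abs(t))
estimate = smoothed_mle_location(samples, laplace_pdf, smoothing_radius=n ** -0.5)
print(f"estimated location: {estimate:.3f}")
```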