The problem of estimating an unknown discrete distribution from its samples is a fundamental tenet of statistical learning. Over the past decade, it has attracted significant research effort and has been solved for a variety of divergence measures. Surprisingly, an equally important problem, estimating an unknown Markov chain from its samples, is still far from understood. We consider two problems related to the minimax risk (expected loss) of estimating an unknown k-state Markov chain from its n sequential samples: predicting the conditional distribution of the next sample with respect to the KL-divergence, and estimating the transition matrix with respect to a natural loss induced by KL or a more general f-divergence measure.
For the first measure, we determine the minimax prediction risk to within a linear factor in the alphabet size, showing that it is \Omega(k\log\log n/n) and O(k^2\log\log n/n). For the second, if the transition probabilities can be arbitrarily small, then only trivial uniform risk upper bounds can be derived. We therefore consider transition probabilities that are bounded away from zero, and resolve the problem for essentially all sufficiently smooth f-divergences, including KL, L_2, Chi-squared, Hellinger, and Alpha-divergences.
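To make the estimation setup concrete, here is a minimal sketch of the problem the abstract describes: observing one sample path of a k-state Markov chain and estimating its transition matrix, then measuring the per-row KL error. The add-one smoothed estimator and the row-wise KL loss below are illustrative baselines chosen for simplicity, not the estimators or the exact loss analyzed in the paper.

```python
import numpy as np

def estimate_transition_matrix(seq, k, alpha=1.0):
    """Add-alpha smoothed estimate of a k-state Markov chain's
    transition matrix from a single sample path.
    (Illustrative baseline, not the paper's estimator.)"""
    counts = np.full((k, k), alpha)
    for s, t in zip(seq[:-1], seq[1:]):
        counts[s, t] += 1
    # Normalize each row to a conditional distribution.
    return counts / counts.sum(axis=1, keepdims=True)

# Simulate n sequential samples from a 3-state chain with
# transition probabilities bounded away from zero.
rng = np.random.default_rng(0)
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
n = 100_000
seq = [0]
for _ in range(n - 1):
    seq.append(rng.choice(3, p=P[seq[-1]]))

P_hat = estimate_transition_matrix(seq, k=3)

# One natural KL-based error: KL between each true row and its estimate.
kl_rows = (P * np.log(P / P_hat)).sum(axis=1)
```

With all transition probabilities bounded away from zero, every state is visited a constant fraction of the time, so each row's KL error shrinks at roughly a 1/n rate, consistent with the regime the second result addresses.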
Author Information
Yi Hao (University of California, San Diego)
Fifth-year Ph.D. student supervised by Prof. Alon Orlitsky at UC San Diego. Broadly interested in Machine Learning, Learning Theory, Algorithm Design, and Symbolic and Numerical Optimization. Seeking a summer 2020 internship in Data Science and Machine Learning.
Alon Orlitsky (University of California, San Diego)
Venkatadheeraj Pichapati (UC San Diego)
More from the Same Authors

2020 Poster: Linear-Sample Learning of Low-Rank Distributions »
Ayush Jain · Alon Orlitsky 
2020 Poster: A General Method for Robust Learning from Batches »
Ayush Jain · Alon Orlitsky 
2020 Poster: Optimal Prediction of the Number of Unseen Species with Multiplicity »
Yi Hao · Ping Li 
2020 Spotlight: Optimal Prediction of the Number of Unseen Species with Multiplicity »
Ping Li · Yi Hao 
2020 Poster: Profile Entropy: A Fundamental Measure for the Learnability and Compressibility of Distributions »
Yi Hao · Alon Orlitsky 
2020 Poster: SURF: A Simple, Universal, Robust, Fast Distribution Learning Algorithm »
Yi Hao · Ayush Jain · Alon Orlitsky · Vaishakh Ravindrakumar 
2019 Poster: Unified Sample-Optimal Property Estimation in Near-Linear Time »
Yi Hao · Alon Orlitsky 
2019 Poster: The Broad Optimality of Profile Maximum Likelihood »
Yi Hao · Alon Orlitsky 
2019 Spotlight: The Broad Optimality of Profile Maximum Likelihood »
Yi Hao · Alon Orlitsky 
2018 Poster: Data Amplification: A Unified and Competitive Approach to Property Estimation »
Yi Hao · Alon Orlitsky · Ananda Theertha Suresh · Yihong Wu 
2017 Poster: The power of absolute discounting: all-dimensional distribution estimation »
Moein Falahatgar · Mesrob Ohannessian · Alon Orlitsky · Venkatadheeraj Pichapati 
2017 Poster: Maxing and Ranking with Few Assumptions »
Moein Falahatgar · Yi Hao · Alon Orlitsky · Venkatadheeraj Pichapati · Vaishakh Ravindrakumar 
2016 Poster: Near-Optimal Smoothing of Structured Conditional Probability Matrices »
Moein Falahatgar · Mesrob Ohannessian · Alon Orlitsky 
2015 Poster: Competitive Distribution Estimation: Why is Good-Turing Good »
Alon Orlitsky · Ananda Theertha Suresh 
2015 Oral: Competitive Distribution Estimation: Why is Good-Turing Good »
Alon Orlitsky · Ananda Theertha Suresh 
2014 Poster: Near-Optimal-Sample Estimators for Spherical Gaussian Mixtures »
Ananda Theertha Suresh · Alon Orlitsky · Jayadev Acharya · Ashkan Jafarpour 
2012 Poster: Tight Bounds on Redundancy and Distinguishability of Label-Invariant Distributions »
Jayadev Acharya · Hirakendu Das · Alon Orlitsky