

Poster

Near-Optimal Smoothing of Structured Conditional Probability Matrices

Moein Falahatgar · Mesrob Ohannessian · Alon Orlitsky

Area 5+6+7+8 #172

Keywords: [ (Application) Natural Language and Text Processing ] [ Structured Prediction ] [ (Other) Machine Learning Topics ] [ Matrix Factorization ] [ Learning Theory ] [ (Other) Statistics ]


Abstract:

Utilizing the structure of a probabilistic model can significantly increase its learning speed. Motivated by several recent applications, in particular bigram models in language processing, we consider learning low-rank conditional probability matrices under expected KL-risk. This choice makes smoothing, that is, the careful handling of low-probability elements, paramount. We derive an iterative algorithm that extends classical non-negative matrix factorization to naturally incorporate additive smoothing and prove that it converges to the stationary points of a penalized empirical risk. We then derive sample-complexity bounds for the global minimizer of the penalized risk and show that it is within a small factor of the optimal sample complexity. This framework generalizes to more sophisticated smoothing techniques, including absolute discounting.
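
To make the setting concrete, here is a minimal sketch (not the authors' algorithm) of the kind of estimator the abstract describes: classical KL-NMF multiplicative updates (Lee and Seung) applied to add-beta smoothed bigram counts, with rows normalized to give a low-rank conditional probability matrix. The rank `k`, smoothing constant `beta`, and iteration count below are illustrative assumptions.

```python
# Illustrative sketch only: low-rank conditional probability estimation via
# classical KL-NMF on additively smoothed bigram counts. This is not the
# paper's penalized-risk algorithm, just a plain baseline in the same spirit.
import numpy as np

def smoothed_lowrank_conditional(counts, k=10, beta=0.5, n_iter=200, seed=0, eps=1e-12):
    """counts: (n, m) bigram count matrix; returns a rank-k estimate of P(col | row)."""
    rng = np.random.default_rng(seed)
    V = counts + beta                      # additive (add-beta) smoothing of counts
    n, m = V.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        # Multiplicative updates minimizing the generalized KL divergence D(V || WH).
        H *= (W.T @ (V / WH)) / (W.sum(axis=0, keepdims=True).T + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1, keepdims=True).T + eps)
    P = W @ H
    return P / P.sum(axis=1, keepdims=True)  # normalize rows to conditional distributions

# Toy usage: bigram counts over a 5-word vocabulary, rank-2 estimate.
counts = np.random.default_rng(1).poisson(3.0, size=(5, 5)).astype(float)
P_hat = smoothed_lowrank_conditional(counts, k=2)
print(P_hat.sum(axis=1))  # each row sums to 1
```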
