Precise Regret Bounds for Log-loss via a Truncated Bayesian Algorithm

Changlong Wu · Mohsen Heidari · Ananth Grama · Wojciech Szpankowski

Hall J (level 1) #318

Keywords: [ Logarithmic Loss ] [ Sequential probability assignment ] [ Shtarkov Sum ] [ Online Regression ] [ Bayesian Algorithm ]


We study general sequential online regression, also known as sequential probability assignment, under logarithmic loss when compared against a broad class of experts. We obtain tight, often matching, lower and upper bounds for the sequential minimax regret, defined as the excess loss incurred by the predictor over the best expert in the class. After proving a general upper bound, we consider specific classes of experts, from the Lipschitz class to the bounded-Hessian class, and derive matching lower and upper bounds with provably optimal constants. Our bounds hold for a wide range of values of the data dimension and the number of rounds. To derive lower bounds, we use tools from information theory (e.g., the Shtarkov sum), and for upper bounds, we resort to a new "smooth truncated covering" of the class of experts. This allows us to give constructive proofs via a simple and novel truncated Bayesian algorithm. Our proofs are substantially simpler than existing ones and yet provide tighter (and often optimal) bounds.
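The quantities in the abstract can be illustrated concretely in the simplest setting. For a finite class of experts, the minimax regret under log-loss equals the logarithm of the Shtarkov sum, i.e., the sum over all outcome sequences of the best expert's probability of that sequence. The sketch below (an illustration, not the paper's algorithm) computes this for a hypothetical finite class of constant Bernoulli experts over binary sequences:

```python
import itertools
import math

def shtarkov_regret(thetas, T):
    """Log of the Shtarkov sum over binary sequences of length T,
    where each expert predicts a constant Bernoulli parameter theta.
    This equals the minimax regret for this finite expert class.
    (Illustrative sketch; `thetas` and T=5 below are made-up values.)"""
    total = 0.0
    for seq in itertools.product([0, 1], repeat=T):
        k = sum(seq)  # number of ones in the sequence
        # probability of the sequence under the best expert in hindsight
        best = max(th ** k * (1 - th) ** (T - k) for th in thetas)
        total += best
    return math.log(total)

# Example: two experts. Since the Shtarkov sum is at most the number of
# experts, the regret is bounded by log(2).
print(shtarkov_regret([0.3, 0.7], T=5))
```

For a finite class of N experts the Shtarkov sum is at most N (each expert's probabilities sum to 1 over all sequences), recovering the classical log N regret bound; the paper's contribution is to sharpen such bounds for rich, infinite classes.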
