Poster
Multi-Class $H$-Consistency Bounds
Pranjal Awasthi · Anqi Mao · Mehryar Mohri · Yutao Zhong

Wed Nov 30 09:00 AM -- 11:00 AM (PST) @ Hall J #318
We present an extensive study of $H$-consistency bounds for multi-class classification. These are upper bounds on the target loss estimation error of a predictor in a hypothesis set $H$, expressed in terms of the surrogate loss estimation error of that predictor. They are stronger and more significant guarantees than Bayes-consistency, $H$-calibration, or $H$-consistency, and more informative than excess error bounds derived for $H$ being the family of all measurable functions. We give a series of new $H$-consistency bounds for surrogate multi-class losses, including max losses, sum losses, and constrained losses, in both the non-adversarial and adversarial cases, and for different differentiable or convex auxiliary functions. We also prove that no non-trivial $H$-consistency bound can be given in some cases. To our knowledge, these are the first $H$-consistency bounds proven for the multi-class setting. Our proof techniques are also novel and likely to be useful in the analysis of other such guarantees.
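As a rough sketch of the kind of guarantee described above (the notation here is illustrative and not taken from the paper; the precise form of each bound depends on the surrogate loss and the hypothesis set), an $H$-consistency bound for a target loss $\ell$ (e.g., the multi-class zero-one loss) and a surrogate loss $\Phi$ states that, for all $h \in H$,

$$
R_{\ell}(h) - \inf_{h' \in H} R_{\ell}(h') \;\le\; \Gamma\!\Big( R_{\Phi}(h) - \inf_{h' \in H} R_{\Phi}(h') \Big),
$$

where $R_{\ell}$ and $R_{\Phi}$ denote the expected target and surrogate losses and $\Gamma$ is a non-decreasing function. When $H$ is the family of all measurable functions, a statement of this type reduces to a standard excess error bound, which is why such bounds are more informative for a restricted hypothesis set $H$.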

Author Information

Pranjal Awasthi (Google)
Anqi Mao (Courant Institute of Mathematical Sciences)
Mehryar Mohri (Google Research & Courant Institute of Mathematical Sciences)
Yutao Zhong (Courant Institute of Mathematical Sciences, NYU)
