Poster
Generalization Bounds for Uniformly Stable Algorithms
Vitaly Feldman · Jan Vondrak
Room 210 #85
Keywords: [ Learning Theory ]
Abstract:
Uniform stability of a learning algorithm is a classical notion of algorithmic stability introduced to derive high-probability bounds on the generalization error (Bousquet and Elisseeff, 2002). Specifically, for a loss function with range bounded in [0, 1], the generalization error of a γ-uniformly stable learning algorithm on n samples is known to be at most O((γ + 1/n)·√(n·log(1/δ))) with probability at least 1 − δ. Unfortunately, this bound does not lead to meaningful generalization bounds in many common settings where γ ≥ 1/√n. At the same time, the bound is known to be tight only when γ = O(1/n).
Here we prove substantially stronger generalization bounds for uniformly stable algorithms without any additional assumptions. First, we show that the generalization error in this setting is at most O(√((γ + 1/n)·log(1/δ))) with probability at least 1 − δ. In addition, we prove a tight bound of O(γ² + 1/n) on the second moment of the generalization error. The best previous bound on the second moment of the generalization error is O(γ + 1/n). Our proofs are based on new analysis techniques and our results imply substantially stronger generalization guarantees for several well-studied algorithms.
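To see why the improvement matters, one can compare the two high-probability bounds numerically in the regime γ = 1/√n, where the classical bound exceeds the range of the loss and is therefore vacuous. The sketch below drops the hidden constants in the O(·) notation and uses illustrative values of n and δ (both are assumptions for the example, not parameters from the paper):

```python
import math

def classical_bound(gamma, n, delta):
    # Bousquet-Elisseeff style bound, constants dropped:
    # (gamma + 1/n) * sqrt(n * log(1/delta))
    return (gamma + 1 / n) * math.sqrt(n * math.log(1 / delta))

def improved_bound(gamma, n, delta):
    # Bound from this paper, constants dropped:
    # sqrt((gamma + 1/n) * log(1/delta))
    return math.sqrt((gamma + 1 / n) * math.log(1 / delta))

n, delta = 10_000, 0.01
gamma = 1 / math.sqrt(n)  # regime where the classical bound is vacuous

print(classical_bound(gamma, n, delta))  # > 1, exceeds the [0, 1] loss range
print(improved_bound(gamma, n, delta))   # well below 1, a meaningful guarantee
```

With n = 10,000 and δ = 0.01, the classical expression evaluates to roughly 2.2 while the improved one is roughly 0.22, i.e. only the new bound is informative for a loss bounded in [0, 1].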