Poster

On the Importance of Gradient Norm in PAC-Bayesian Bounds

Itai Gat · Yossi Adi · Alex Schwing · Tamir Hazan

Hall J #931

Keywords: [ PAC-Bayes ] [ Generalization Bound ]

[ Abstract ]
[ Paper ] [ OpenReview ]
Tue 29 Nov 9 a.m. PST — 11 a.m. PST

Abstract:

Generalization bounds, which assess the difference between the true risk and the empirical risk, have been studied extensively. However, to obtain such bounds, current techniques rely on strict assumptions such as a uniformly bounded or Lipschitz loss function. To avoid these assumptions, in this paper we follow an alternative approach: we relax uniform-bound assumptions by using on-average bounded loss and on-average bounded gradient-norm assumptions. Following this relaxation, we propose a new generalization bound that exploits the contractivity of the log-Sobolev inequalities. These inequalities introduce an additional loss-gradient-norm term into the generalization bound, which intuitively serves as a surrogate for model complexity. We apply the proposed bound to Bayesian deep nets and empirically analyze the effect of this new loss-gradient-norm term on different neural architectures.
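As an illustration of the quantity the abstract refers to (not the paper's exact estimator, whose precise definition is given in the paper), the on-average squared loss-gradient norm of a model over a dataset can be estimated empirically. The function name `avg_grad_norm_sq` and the `loss_fn` and `loader` arguments below are hypothetical; this is a minimal sketch, assuming a standard PyTorch model and data loader.

```python
import torch
import torch.nn as nn

def avg_grad_norm_sq(model: nn.Module, loss_fn, loader, device="cpu"):
    """Estimate the on-average squared loss-gradient norm over a dataset.

    Illustrative surrogate only: averages the squared norm of the
    loss gradient with respect to the model parameters across batches.
    """
    model.to(device)
    total, n_batches = 0.0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        # Accumulate the squared Euclidean norm of the full parameter gradient.
        sq_norm = sum(p.grad.pow(2).sum().item()
                      for p in model.parameters() if p.grad is not None)
        total += sq_norm
        n_batches += 1
    return total / max(n_batches, 1)
```

A large value of this estimate indicates a model whose loss surface is steep on average over the data, which is the intuition behind treating the gradient-norm term as a complexity surrogate.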
