Poster in Workshop: Bayesian Deep Learning

Infinite-channel deep convolutional Stable neural networks

Daniele Bracale · Stefano Favaro · Sandra Fortini · Stefano Peluchetti


Abstract:

The connection between infinite-width neural networks (NNs) and Gaussian processes (GPs) has been well known since the seminal work of Neal (1996). While numerous theoretical refinements have been proposed in recent years, the connection between NNs and GPs relies on two critical distributional assumptions on the NN's parameters: i) finite variance; ii) independent and identical distribution (iid). In this paper, we consider the problem of removing assumption i) in the context of deep feed-forward convolutional NNs. We show that the infinite-channel limit of a deep feed-forward convolutional NN, under suitable scaling, is a stochastic process with multivariate stable finite-dimensional distributions, and we give an explicit recursion over the layers for their parameters. Our contribution extends the recent results of Favaro et al. (2021) to convolutional architectures, and it paves the way to extending exciting lines of research that currently rely on GP limits.
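
As an illustration of the scaling discussed in the abstract, the following minimal Python sketch (not the authors' code; the stability index alpha, layer sizes, and the tanh nonlinearity are illustrative assumptions) builds a two-layer 1-D convolutional network with symmetric alpha-stable weights and applies an n^(-1/alpha) scaling to the sum over channels, the regime in which the infinite-channel limit has stable rather than Gaussian finite-dimensional distributions; setting alpha = 2 recovers Gaussian weights and the usual square-root scaling.

```python
# Illustrative sketch only: a two-layer 1-D convolutional network with symmetric
# alpha-stable weights, scaled by n_channels**(-1/alpha). All hyperparameters are
# assumptions chosen for demonstration, not the paper's experimental setup.
import numpy as np
from scipy.stats import levy_stable

np.random.seed(0)
alpha = 1.5            # stability index in (0, 2]; alpha = 2 gives the Gaussian/GP case
n_channels = 2000      # number of channels, taken large to mimic the infinite-channel limit
kernel_size = 3
x = np.random.randn(32)                      # fixed 1-D input signal, one input channel

# First layer: 1 input channel -> n_channels output channels, symmetric alpha-stable weights.
W1 = levy_stable.rvs(alpha, 0.0, size=(n_channels, kernel_size))
h = np.stack([np.convolve(x, w, mode="valid") for w in W1])   # shape (n_channels, 30)
h = np.tanh(h)                                                # bounded nonlinearity

# Second layer: n_channels -> 1 output channel; the channel sum is scaled by
# n_channels**(-1/alpha), under which the limit is a stable process.
W2 = levy_stable.rvs(alpha, 0.0, size=(n_channels, kernel_size))
out = sum(np.convolve(h[c], W2[c], mode="valid") for c in range(n_channels))
out *= n_channels ** (-1.0 / alpha)

# For alpha < 2 the output pre-activations are heavy-tailed: a few spatial positions
# dominate in magnitude, unlike the finite-variance (GP) limit at alpha = 2.
print("output pre-activations:", np.round(out[:5], 3))
print("max |out| / median |out|:", np.max(np.abs(out)) / np.median(np.abs(out)))
```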