

Poster

Learning the Number of Neurons in Deep Networks

Jose M. Alvarez · Mathieu Salzmann

Area 5+6+7+8 #110

Keywords: [ Large Scale Learning and Big Data ] [ Deep Learning or Neural Networks ]


Abstract:

Nowadays, the number of layers and of neurons in each layer of a deep network are typically set manually. While very deep and wide networks have proven effective in general, they come at a high memory and computation cost, thus making them impractical for constrained platforms. These networks, however, are known to have many redundant parameters, and could thus, in principle, be replaced by more compact architectures. In this paper, we introduce an approach to automatically determining the number of neurons in each layer of a deep network during learning. To this end, we propose to make use of a group sparsity regularizer on the parameters of the network, where each group is defined to act on a single neuron. Starting from an overcomplete network, we show that our approach can reduce the number of parameters by up to 80% while retaining or even improving the network accuracy.
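To make the idea concrete, here is a minimal PyTorch sketch of a group-lasso penalty with one group per output neuron, in the spirit of the regularizer described in the abstract. The layer sizes, the regularization weight `lam`, and the pruning threshold are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

def group_sparsity_penalty(layer: nn.Linear) -> torch.Tensor:
    """Group-lasso penalty with one group per output neuron.

    For a linear layer with weight matrix W (out_features x in_features),
    each row holds all incoming weights of one neuron. Summing the l2
    norms of the rows encourages entire rows to shrink to zero, so whole
    neurons can be pruned rather than individual weights.
    """
    return layer.weight.norm(p=2, dim=1).sum()

# Usage sketch: add the penalty to the task loss during training.
layer = nn.Linear(in_features=128, out_features=256)  # deliberately overcomplete
x = torch.randn(32, 128)
target = torch.randn(32, 256)

lam = 1e-3  # regularization strength (hypothetical value)
loss = nn.functional.mse_loss(layer(x), target) + lam * group_sparsity_penalty(layer)
loss.backward()

# After training, neurons whose weight group has collapsed to (near) zero
# can be removed, shrinking the layer; the threshold here is illustrative.
n_prunable = int((layer.weight.norm(p=2, dim=1) < 1e-6).sum())
```

Because the penalty is a sum of per-neuron l2 norms rather than a plain l1 norm on all weights, sparsity emerges at the granularity of neurons, which is what allows the architecture itself to shrink.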
