

Poster

Porcupine Neural Networks: Approximating Neural Network Landscapes

Soheil Feizi · Hamid Javadi · Jesse Zhang · David Tse

Room 210 #21

Keywords: [ Spaces of Functions and Kernels ] [ Non-Convex Optimization ]


Abstract:

Neural networks are used prominently in many machine learning and statistics applications. In general, the underlying optimization of neural networks is non-convex, which makes analyzing their performance challenging. In this paper, we approach this problem by constraining the network so that the corresponding optimization landscape has good theoretical properties without significantly compromising performance. In particular, for two-layer neural networks we introduce Porcupine Neural Networks (PNNs), whose weight vectors are constrained to lie on a finite set of lines. We show that most local optima of the PNN optimization are global, and we characterize the regions where bad local optima may exist. Moreover, our theoretical and empirical results suggest that an unconstrained neural network can be approximated by a polynomially large PNN.
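To make the constraint concrete, here is a minimal sketch (not the authors' code) of a two-layer ReLU network whose first-layer weight vectors lie on a finite set of fixed lines: each hidden weight vector is a scalar multiple of an assigned unit direction, so only the scalars along the lines and the output weights are free. The specific parameterization, the ReLU activation, and all variable names (`U`, `assign`, `a`, `v`) are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d, L, hidden = 10, 4, 8          # input dim, number of lines, hidden units
U = rng.normal(size=(L, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)   # unit directions defining the lines
assign = rng.integers(0, L, size=hidden)        # which line each hidden unit lies on

a = rng.normal(size=hidden)      # free scalars along the assigned lines
v = rng.normal(size=hidden)      # output-layer weights

def pnn_forward(X):
    """Two-layer ReLU network with line-constrained first-layer weights."""
    W = a[:, None] * U[assign]               # hidden weights forced onto the fixed lines
    H = np.maximum(X @ W.T, 0.0)             # ReLU hidden activations
    return H @ v                             # scalar output per example

X = rng.normal(size=(5, d))
print(pnn_forward(X).shape)                  # (5,)
```

In this sketch, optimizing over `a` and `v` while keeping `U` and `assign` fixed is one way to realize the constrained landscape the abstract describes.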
