Poster
Does Preprocessing Help Training Over-parameterized Neural Networks?
Zhao Song · Shuo Yang · Ruizhe Zhang
Keywords: [ Machine Learning ] [ Optimization ] [ Deep Learning ]
Abstract:
Deep neural networks have achieved impressive performance in many areas. Designing a fast and provable method for training neural networks is a fundamental question in machine learning. The classical training method requires paying $\Omega(mnd)$ cost for both forward computation and backward computation, where $m$ is the width of the neural network, and we are given $n$ training points in $d$-dimensional space. In this paper, we propose two novel preprocessing ideas to bypass this $\Omega(mnd)$ barrier:
* First, by preprocessing the initial weights of the neural network, we can train the neural network in $\widetilde{O}(m^{1-\Theta(1/d)} nd)$ cost per iteration.
* Second, by preprocessing the input data points, we can train the neural network in $\widetilde{O}(m^{4/5} nd)$ cost per iteration.
From the technical perspective, our result is a sophisticated combination of tools from different fields: greedy-type convergence analysis in optimization, sparsity observations from practical work, high-dimensional geometric search in data structures, and concentration and anti-concentration in probability. Our results also provide theoretical insights into a large number of previously established fast training methods. In addition, our classical algorithm can be generalized to the quantum computation model. Interestingly, we can obtain a similarly sublinear cost per iteration while avoiding any preprocessing of the initial weights or the input data points.
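The minimal sketch below (not the paper's algorithm) illustrates the cost accounting behind the abstract: a brute-force forward pass over a width-$m$ network pays $\Theta(mnd)$, while a pass that only touches the neurons that actually fire on each point pays roughly $\sum_i k_i \cdot d$, where $k_i \ll m$ under a thresholded activation. The paper's preprocessing replaces the "which neurons fire on $x$?" query with a sublinear-time geometric search; the placeholder `active_neurons` here just scans, and the shifted-activation form and all names are illustrative assumptions.

```python
import numpy as np

def forward_bruteforce(W, a, X, b):
    """Classical forward pass: touches every neuron for every point, Theta(m*n*d)."""
    # W: (m, d) hidden weights, a: (m,) output layer, X: (n, d) data, b: activation shift
    pre = X @ W.T - b                        # n*m inner products, each costing d
    return np.maximum(pre, 0.0) @ a          # thresholded (shifted) ReLU, then combine

def active_neurons(W, x, b):
    """Placeholder for the geometric-search query: which neurons fire on x?

    With the paper's preprocessing this query runs in time sublinear in m;
    here we scan all m neurons naively, so only the accounting is illustrated.
    """
    return np.nonzero(W @ x - b > 0.0)[0]

def forward_sparse(W, a, X, b):
    """Forward pass that only touches firing neurons: cost ~ sum_i k_i * d."""
    out = np.zeros(len(X))
    for i, x in enumerate(X):
        S = active_neurons(W, x, b)          # would be sublinear after preprocessing
        out[i] = np.maximum(W[S] @ x - b, 0.0) @ a[S]
    return out

# Toy check: with a positive shift b, only a small fraction of random neurons fire,
# so the sparse pass matches the brute-force pass while touching far fewer neurons.
m, n, d, b = 4096, 8, 16, 2.0
W = np.random.randn(m, d) / np.sqrt(d)
a, X = np.random.randn(m) / np.sqrt(m), np.random.randn(n, d)
print(np.allclose(forward_bruteforce(W, a, X, b), forward_sparse(W, a, X, b)))
print("avg fraction of active neurons:",
      np.mean([len(active_neurons(W, x, b)) for x in X]) / m)
```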