Poster
Safe Adaptive Importance Sampling
Sebastian U Stich · Anant Raj · Martin Jaggi

Wed Dec 06 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #172

Importance sampling has become an indispensable strategy to speed up optimization algorithms for large-scale applications. Improved adaptive variants -- using importance values defined by the complete gradient information, which changes during optimization -- enjoy favorable theoretical properties, but are typically computationally infeasible. In this paper we propose an efficient approximation of gradient-based sampling, which is based on safe bounds on the gradient. The proposed sampling distribution is (i) provably the *best sampling* with respect to the given bounds, (ii) always better than uniform sampling and fixed importance sampling, and (iii) can be computed efficiently -- in many applications at negligible extra cost. The proposed sampling scheme is generic and can easily be integrated into existing algorithms. In particular, we show that coordinate descent (CD) and stochastic gradient descent (SGD) can enjoy a significant speed-up under the novel scheme. The efficiency of the proposed sampling is verified by extensive numerical testing.
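
As a rough illustration of the general idea (not the paper's exact safe-bound construction), the sketch below shows SGD where examples are drawn with probabilities proportional to per-example gradient-norm bounds and the update is reweighted to remain unbiased. The function names, the least-squares toy problem, and the bound proxies are all assumptions made for this example.

```python
import numpy as np

# Minimal sketch of bound-driven importance sampling for SGD.
# The paper's scheme additionally maintains safe lower/upper bounds that are
# updated during optimization and computes the provably best distribution
# with respect to those bounds; here we simply use fixed bound proxies.

def sgd_importance_sampling(grad_i, bounds, w0, n, lr=0.01, steps=5000, rng=None):
    """grad_i(w, i): gradient of the i-th summand at w.
    bounds: shape (n,), upper-bound proxies on per-example gradient norms.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = w0.copy()
    p = bounds / bounds.sum()            # sampling distribution from the bounds
    for _ in range(steps):
        i = rng.choice(n, p=p)
        g = grad_i(w, i)
        w -= lr * g / (n * p[i])         # 1/(n p_i) reweighting keeps the update unbiased
    return w

# Toy usage (hypothetical): f(w) = (1/n) * sum_i (a_i^T w - b_i)^2
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 200, 10
    A = rng.normal(size=(n, d)) * rng.uniform(0.1, 2.0, size=(n, 1))  # varied row norms
    b = A @ rng.normal(size=d)
    grad_i = lambda w, i: 2.0 * (A[i] @ w - b[i]) * A[i]
    bounds = 2.0 * np.linalg.norm(A, axis=1) ** 2   # crude per-example bound proxies
    w = sgd_importance_sampling(grad_i, bounds, np.zeros(d), n, lr=0.01, steps=5000, rng=rng)
    print("relative residual:", np.linalg.norm(A @ w - b) / np.linalg.norm(b))
```

The design point the sketch tries to convey: the bound-driven probabilities p_i concentrate samples on examples with potentially large gradients, while the 1/(n p_i) reweighting preserves unbiasedness, so the estimator's variance can only improve over uniform sampling when the bounds are informative.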

Author Information

Sebastian U Stich (EPFL)

Dr. [Sebastian U. Stich](https://sstich.ch/) is a postdoctoral researcher in machine learning at EPFL (Lausanne, Switzerland) and on the job market. Research interests:
- *machine learning*: at the interface of theory and practice
- *optimization*: analysis of zeroth-/first-/second-order serial/parallel/decentralized/federated and accelerated algorithms

Anant Raj (Max Planck Institute for Intelligent Systems)
Martin Jaggi (EPFL)
