
Distributed Zero-Order Optimization under Adversarial Noise
Arya Akhavan · Massimiliano Pontil · Alexandre Tsybakov

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

We study distributed zero-order optimization for a class of strongly convex functions, formed as the average of local objectives, each associated with a node in a prescribed network. We propose a distributed zero-order projected gradient descent algorithm to solve the problem, in which information may be exchanged only between neighbouring nodes. An important feature of our procedure is that it queries only function values, subject to a general noise model that requires neither zero-mean nor independent errors. We derive upper bounds on the average cumulative regret and the optimization error of the algorithm, which highlight the roles played by a network connectivity parameter, the number of variables, the noise level, the strong convexity parameter, and the smoothness properties of the local objectives. The bounds indicate some key improvements of our method over the state of the art, in both the distributed and the standard zero-order optimization settings.
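The setting described in the abstract can be illustrated with a small simulation. The sketch below is a hypothetical illustration, not the authors' exact algorithm: the local objectives are simple quadratics, the network is a ring with a doubly stochastic gossip matrix, each node estimates a gradient from two noisy function values along a random direction on the sphere, and iterates are projected onto a Euclidean ball. The noise is bounded and sign-based, so it is neither zero-mean nor independent across queries; the step-size and perturbation schedules are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n nodes, each holding a local strongly convex quadratic
# f_i(x) = 0.5 * ||x - b_i||^2; the global objective is their average,
# which is minimized at the mean of the b_i.
n_nodes, d = 4, 3
b = rng.normal(size=(n_nodes, d))
x_star = b.mean(axis=0)

# Doubly stochastic gossip matrix for a ring network: each node mixes
# only with its two neighbours, as in the neighbour-only exchange above.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

def noisy_value(i, x):
    # Zero-order oracle: function value plus bounded, sign-based noise
    # (deliberately not zero-mean and not independent across queries).
    return 0.5 * np.sum((x - b[i]) ** 2) + 0.01 * np.sign(np.sum(x))

def project(x, radius=5.0):
    # Euclidean projection onto the ball of the given radius.
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

X = np.zeros((n_nodes, d))  # one iterate per node
T = 3000
for t in range(1, T + 1):
    eta = 2.0 / t        # illustrative step size ~ 1/(alpha * t), alpha = 1
    h = 1.0 / t ** 0.25  # shrinking perturbation size
    grads = np.empty_like(X)
    for i in range(n_nodes):
        zeta = rng.normal(size=d)
        zeta /= np.linalg.norm(zeta)  # uniform direction on the sphere
        y_plus = noisy_value(i, X[i] + h * zeta)
        y_minus = noisy_value(i, X[i] - h * zeta)
        # Two-point zero-order gradient estimate along the random direction.
        grads[i] = d * (y_plus - y_minus) / (2 * h) * zeta
    # Gossip step (mix with neighbours), then local gradient step, then project.
    X = W @ X - eta * grads
    X = np.array([project(x) for x in X])

x_bar = X.mean(axis=0)
err = np.linalg.norm(x_bar - x_star)
```

After enough iterations the node iterates reach approximate consensus and the network average `x_bar` approaches the global minimizer `x_star`, up to a floor set by the oracle noise.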

Author Information

Arya Akhavan (ENSAE - IIT)
Massimiliano Pontil (IIT & UCL)
Alexandre Tsybakov (CREST, ENSAE, Institut Polytechnique de Paris)
