Optimization using gradient descent (GD) is a ubiquitous practice in machine learning problems, including the training of large neural networks. Noise-free GD and stochastic GD, in which the gradients are corrupted by random noise, have been studied extensively in the literature, but less attention has been paid to the adversarial setting, in which the gradient values are subject to adversarial corruptions. In this work, we analyze the performance of GD under a proposed general adversarial framework. For the class of functions satisfying the Polyak-Łojasiewicz condition, we derive finite-time bounds on a minimax optimization error. Based on these bounds, we provide a guideline for choosing the learning rate sequence, with theoretical guarantees on the robustness of GD against adversarial corruption.
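The setting in the abstract can be illustrated with a minimal sketch (not the paper's exact framework): running GD on a function satisfying the Polyak-Łojasiewicz condition while an adversary adds a bounded perturbation to every gradient. The function, the corruption budget `eps`, and the helper `corrupted_gd` below are illustrative assumptions, not constructs from the paper.

```python
import numpy as np

def corrupted_gd(x0, lr, eps, steps):
    """GD on f(x) = 0.5 * ||x||^2 (which satisfies the PL condition),
    with each gradient corrupted by an adversarial perturbation of norm
    at most `eps` that points opposite the true gradient (illustrative
    worst case, not the paper's adversary model)."""
    x = np.array(x0, dtype=float)
    for t in range(steps):
        grad = x  # gradient of 0.5 * ||x||^2
        norm = np.linalg.norm(grad)
        # Bounded adversarial corruption opposing the descent direction.
        attack = -eps * grad / norm if norm > 0 else np.zeros_like(x)
        x = x - lr[t] * (grad + attack)
    return x

# With a constant learning rate, the iterate converges not to the
# minimizer at 0 but to a neighborhood whose radius is set by eps:
# here the fixed point is x = eps = 0.5.
x_final = corrupted_gd([5.0], lr=[0.1] * 200, eps=0.5, steps=200)
print(x_final)  # ≈ [0.5]
```

This toy run shows why the choice of learning rate sequence matters under corruption: a constant step size stalls at an `eps`-dependent error floor, which motivates schedules with robustness guarantees of the kind the paper derives.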
Author Information
Fu-Chieh Chang (National Taiwan University)
Farhang Nabiei (University of Cambridge)
Pei-Yuan Wu (National Taiwan University)
Alexandru Cioba (MediaTek Research)
Sattar Vakili (MediaTek Research)
Alberto Bernacchia (MediaTek Research)
More from the Same Authors
- 2021: How to distribute data across tasks for meta-learning?
  Alexandru Cioba · Michael Bromberg · Qian Wang · Ritwik Niyogi · Georgios Batzolis · Jezabel Garcia · Da-shan Shiu · Alberto Bernacchia
- 2022: Poster Session 2
  Jinwuk Seok · Bo Liu · Ryotaro Mitsuboshi · David Martinez-Rubio · Weiqiang Zheng · Ilgee Hong · Chen Fan · Kazusato Oko · Bo Tang · Miao Cheng · Aaron Defazio · Tim G. J. Rudner · Gabriele Farina · Vishwak Srinivasan · Ruichen Jiang · Peng Wang · Jane Lee · Nathan Wycoff · Nikhil Ghosh · Yinbin Han · David Mueller · Liu Yang · Amrutha Varshini Ramesh · Siqi Zhang · Kaifeng Lyu · David Yunis · Kumar Kshitij Patel · Fangshuo Liao · Dmitrii Avdiukhin · Xiang Li · Sattar Vakili · Jiaxin Shi
- 2022 Poster: Near-Optimal Collaborative Learning in Bandits
  Clémence Réda · Sattar Vakili · Emilie Kaufmann
- 2021: Cyclic orthogonal convolutions for long-range integration of features
  Federica Freddi · Jezabel Garcia · Michael Bromberg · Sepehr Jalali · Da-shan Shiu · Alvin Chua · Alberto Bernacchia
- 2021 Poster: Natural continual learning: success is a journey, not (just) a destination
  Ta-Chu Kao · Kristopher Jensen · Gido van de Ven · Alberto Bernacchia · Guillaume Hennequin
- 2021 Poster: A Domain-Shrinking based Bayesian Optimization Algorithm with Order-Optimal Regret Performance
  Sudeep Salgia · Sattar Vakili · Qing Zhao
- 2021 Poster: Optimal Order Simple Regret for Gaussian Process Bandits
  Sattar Vakili · Nacime Bouziani · Sepehr Jalali · Alberto Bernacchia · Da-shan Shiu
- 2021 Poster: Scalable Thompson Sampling using Sparse Gaussian Process Models
  Sattar Vakili · Henry Moss · Artem Artemev · Vincent Dutordoir · Victor Picheny
- 2020 Poster: Non-reversible Gaussian processes for identifying latent dynamical structure in neural data
  Virginia Rutten · Alberto Bernacchia · Maneesh Sahani · Guillaume Hennequin
- 2020 Oral: Non-reversible Gaussian processes for identifying latent dynamical structure in neural data
  Virginia Rutten · Alberto Bernacchia · Maneesh Sahani · Guillaume Hennequin
- 2018 Poster: Exact natural gradient in deep linear networks and its application to the nonlinear case
  Alberto Bernacchia · Mate Lengyel · Guillaume Hennequin