Decentralized learning offers privacy and communication efficiency when data are naturally distributed among agents communicating over an underlying graph. Motivated by overparameterized learning settings, in which models are trained to zero training loss, we study the algorithmic and generalization properties of decentralized learning with gradient descent on separable data. Specifically, for decentralized gradient descent (DGD) and a variety of loss functions that asymptote to zero at infinity (including the exponential and logistic losses), we derive novel finite-time generalization bounds. This complements a long line of recent work studying the generalization performance and implicit bias of gradient descent on separable data, which has thus far been limited to centralized learning scenarios. Notably, our generalization bounds match their centralized counterparts in order. Central to this result, and of independent interest, are novel bounds on the training loss and the rate of consensus of DGD for a class of self-bounded losses. Finally, we present numerical experiments that corroborate our theoretical results.
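The DGD iteration studied here can be illustrated with a minimal sketch: each agent alternates a consensus step (averaging with neighbors through a doubly stochastic mixing matrix) and a local gradient step on the logistic loss over its own separable data. The graph topology, mixing weights, step size, and data generation below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 agents on a ring graph, each holding local
# linearly separable data with labels in {-1, +1}.
n_agents, d, n_local = 4, 5, 20
w_star = rng.normal(size=d)                      # planted separator
X = [rng.normal(size=(n_local, d)) for _ in range(n_agents)]
y = [np.sign(x @ w_star) for x in X]

# Doubly stochastic mixing matrix for the ring (illustrative weights).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

def local_grad(w, Xi, yi):
    # Gradient of the logistic loss log(1 + exp(-y x^T w)),
    # one of the exponentially-tailed losses covered by the analysis.
    margins = yi * (Xi @ w)
    return -(Xi * (yi / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)

eta, T = 0.5, 2000
models = np.zeros((n_agents, d))
for _ in range(T):
    mixed = W @ models                           # consensus: mix with neighbors
    grads = np.stack([local_grad(models[i], X[i], y[i])
                      for i in range(n_agents)])
    models = mixed - eta * grads                 # local gradient step

# On separable data the training loss is driven toward zero while the
# agents' iterates contract toward their average (consensus).
avg = models.mean(axis=0)
consensus_gap = np.max(np.linalg.norm(models - avg, axis=1))
```

Because the logistic loss only vanishes at infinity, the iterates grow unboundedly in norm while their direction stabilizes; the quantity `consensus_gap` tracks the rate-of-consensus term that the paper's training-loss bounds control.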
Author Information
Hossein Taheri (UCSB)
Christos Thrampoulidis (University of British Columbia)
More from the Same Authors
- 2022 : On the Implicit Geometry of Cross-Entropy Parameterizations for Label-Imbalanced Data »
  Tina Behnia · Ganesh Ramachandra Kini · Vala Vakilian · Christos Thrampoulidis
- 2022 : Fast Convergence of Random Reshuffling under Interpolation and the Polyak-Łojasiewicz Condition »
  Chen Fan · Christos Thrampoulidis · Mark Schmidt
- 2023 Poster: BiSLS/SPS: Auto-tune Step Sizes for Stable Bi-level Optimization »
  Chen Fan · Gaspard Choné-Ducasse · Mark Schmidt · Christos Thrampoulidis
- 2022 : Poster Session 1 »
  Andrew Lowy · Thomas Bonnier · Yiling Xie · Guy Kornowski · Simon Schug · Seungyub Han · Nicolas Loizou · xinwei zhang · Laurent Condat · Tabea E. Röber · Si Yi Meng · Marco Mondelli · Runlong Zhou · Eshaan Nichani · Adrian Goldwaser · Rudrajit Das · Kayhan Behdin · Atish Agarwala · Mukul Gagrani · Gary Cheng · Tian Li · Haoran Sun · Hossein Taheri · Allen Liu · Siqi Zhang · Dmitrii Avdiukhin · Bradley Brown · Miaolan Xie · Junhyung Lyle Kim · Sharan Vaswani · Xinmeng Huang · Ganesh Ramachandra Kini · Angela Yuan · Weiqiang Zheng · Jiajin Li
- 2022 Poster: Imbalance Trouble: Revisiting Neural-Collapse Geometry »
  Christos Thrampoulidis · Ganesh Ramachandra Kini · Vala Vakilian · Tina Behnia
- 2022 Poster: Mirror Descent Maximizes Generalized Margin and Can Be Implemented Efficiently »
  Haoyuan Sun · Kwangjun Ahn · Christos Thrampoulidis · Navid Azizan
- 2021 Poster: AutoBalance: Optimized Loss Functions for Imbalanced Data »
  Mingchen Li · Xuechen Zhang · Christos Thrampoulidis · Jiasi Chen · Samet Oymak
- 2021 Poster: UCB-based Algorithms for Multinomial Logistic Regression Bandits »
  Sanae Amani · Christos Thrampoulidis
- 2021 Poster: Label-Imbalanced and Group-Sensitive Classification under Overparameterization »
  Ganesh Ramachandra Kini · Orestis Paraskevas · Samet Oymak · Christos Thrampoulidis
- 2021 Poster: Benign Overfitting in Multiclass Classification: All Roads Lead to Interpolation »
  Ke Wang · Vidya Muthukumar · Christos Thrampoulidis
- 2020 Poster: Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View »
  Christos Thrampoulidis · Samet Oymak · Mahdi Soltanolkotabi
- 2020 Poster: Stage-wise Conservative Linear Bandits »
  Ahmadreza Moradipari · Christos Thrampoulidis · Mahnoosh Alizadeh
- 2019 Poster: Robust and Communication-Efficient Collaborative Learning »
  Amirhossein Reisizadeh · Hossein Taheri · Aryan Mokhtari · Hamed Hassani · Ramtin Pedarsani