Quasi-Newton Methods for Federated Learning with Error Feedback
Yanlin Wu · Dmitry Kamzolov · Martin Takac
Abstract
In this paper, we propose a new class of Quasi-Newton methods for federated learning by integrating them with the error feedback framework—specifically the EF21 mechanism, which offers stronger theoretical guarantees and improved practical performance compared to earlier approaches. EF21 overcomes several limitations of prior methods, such as dependence on strong assumptions and high communication overhead. Quasi-Newton methods, particularly the Limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm, are renowned for their empirical efficiency. Leveraging this efficiency, our proposed EF21+L-BFGS algorithm achieves an $\mathcal{O}\left(\tfrac{1}{T}\right)$ convergence rate in the nonconvex setting and enjoys linear convergence under the Polyak–Łojasiewicz (PL) condition. Through both theoretical analysis and empirical evaluations, we demonstrate the effectiveness of our approach, showing faster convergence and improved model performance compared to existing methods.
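To make the combination concrete, below is a minimal, illustrative Python sketch of how EF21-style error feedback might be paired with an L-BFGS two-loop recursion. This is not the paper's exact algorithm: the function names (`top_k`, `two_loop_lbfgs`, `ef21_lbfgs`), the choice of a top-k compressor, the step size, the curvature safeguard, and the toy quadratic clients are all our own assumptions for illustration.

```python
import numpy as np

def top_k(v, k):
    """Top-k sparsifier: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def two_loop_lbfgs(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns an approximation of H @ grad."""
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:  # initial Hessian scaling gamma = s'y / y'y from the newest pair
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return q

def ef21_lbfgs(client_grads, x0, k=2, lr=0.5, rounds=50, mem=5):
    """Illustrative sketch: EF21-style error feedback with an L-BFGS direction.

    client_grads: list of per-client gradient oracles grad_i(x).
    Each client keeps a state g_i; only compressed *differences* are sent,
    which is the defining feature of the EF21 mechanism.
    """
    n = len(client_grads)
    x = x0.copy()
    g_states = [gi(x) for gi in client_grads]   # one-time full-gradient sync
    g = sum(g_states) / n                        # server-side aggregate
    s_list, y_list = [], []
    for _ in range(rounds):
        d = two_loop_lbfgs(g, s_list, y_list)    # quasi-Newton direction
        x_new = x - lr * d
        for i, gi in enumerate(client_grads):
            c_i = top_k(gi(x_new) - g_states[i], k)  # compress the difference
            g_states[i] = g_states[i] + c_i           # EF21 local state update
        g_new = sum(g_states) / n
        s, y = x_new - x, g_new - g
        if y @ s > 1e-10:                        # curvature safeguard
            s_list.append(s); y_list.append(y)
            if len(s_list) > mem:
                s_list.pop(0); y_list.pop(0)     # limited memory
        x, g = x_new, g_new
    return x

if __name__ == "__main__":
    # Toy federated problem: 3 clients, each with a strongly convex quadratic.
    rng = np.random.default_rng(0)
    mats = [np.diag(rng.uniform(0.5, 2.0, 5)) for _ in range(3)]
    vecs = [rng.standard_normal(5) for _ in range(3)]
    grads = [lambda x, M=M, v=v: M @ x - v for M, v in zip(mats, vecs)]
    x_final = ef21_lbfgs(grads, x0=np.zeros(5))
    print("final aggregate gradient norm:",
          np.linalg.norm(sum(g(x_final) for g in grads) / 3))
```

In this sketch, the key EF21 ingredient is that each client transmits a compressed correction to its previously stored gradient estimate rather than a compressed gradient, so the server's aggregate `g` tracks the true average gradient even under aggressive compression; the L-BFGS memory is then built from these aggregated estimates.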