In this paper, we consider a non-convex loss-minimization problem of learning Supervised PageRank models, which can account for features of nodes and edges. We propose gradient-based and random gradient-free methods to solve this problem. Our algorithms are based on the concept of an inexact oracle, and unlike the state-of-the-art gradient-based method, we provide theoretical convergence-rate guarantees for both of them. Finally, we compare the performance of the proposed optimization methods with the state of the art applied to a ranking task.
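The abstract mentions random gradient-free methods, which optimize using only function values rather than explicit gradients. As an illustrative sketch only (not the authors' algorithm, which additionally accounts for an inexact oracle), a generic two-point random gradient-free step can look like this; the loss `f`, smoothing radius `mu`, and step size are placeholder choices:

```python
import numpy as np

def gradient_free_step(f, x, mu=1e-4, step=0.1, rng=None):
    """One step of a generic two-point random gradient-free method.

    Estimates the directional derivative of f along a random Gaussian
    direction u from two function values, then moves x against that
    estimate. Illustrative sketch only; not the paper's exact method.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)        # random search direction
    deriv = (f(x + mu * u) - f(x)) / mu     # finite-difference estimate
    return x - step * deriv * u             # descent step along u

# Usage: minimize a simple quadratic as a sanity check.
rng = np.random.default_rng(0)
f = lambda x: float(np.sum(x ** 2))
x = np.ones(5)
for _ in range(2000):
    x = gradient_free_step(f, x, mu=1e-5, step=0.01, rng=rng)
```

On this toy quadratic the iterates drift toward the minimizer at the origin, at a rate that degrades with dimension, which is the typical price of using only function evaluations.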
Author Information
Lev Bogolubsky (Yandex, Moscow State University)
Pavel Dvurechenskii (Weierstrass Institute for Applied Analysis and Stochastics)
Since 2015: Research assistant, Research Group 6 "Stochastic Algorithms and Nonparametric Statistics", Weierstrass Institute for Applied Analysis and Stochastics, Berlin
2014 - 2015: Research assistant, Institute for Information Transmission Problems, Moscow, Russia
2009 - 2015: Junior researcher, Moscow Institute of Physics and Technology, Moscow, Russia
2013: Ph.D., Moscow Institute of Physics and Technology, Moscow, Russia
2010: Master's Diploma, Moscow Institute of Physics and Technology, Moscow, Russia
2008: Bachelor's Diploma, Moscow Institute of Physics and Technology, Moscow, Russia
Alexander Gasnikov (Moscow Institute of Physics and Technology)
Gleb Gusev (Yandex LLC)
Yurii Nesterov (Catholic University of Louvain (UCL))
Yurii Nesterov is a professor at the Center for Operations Research and Econometrics (CORE) at the Catholic University of Louvain (UCL), Belgium. He received his Ph.D. in Applied Mathematics in 1984 from the Institute of Control Sciences, Moscow, and has worked at CORE since 1993. His research interests concern complexity issues and efficient methods for solving various optimization problems. His main results are in convex optimization: optimal methods for smooth problems, polynomial-time interior-point methods, the smoothing technique for structural optimization, cubic regularization of Newton's method, and optimization methods for huge-scale problems. He is the author of 4 monographs and more than 80 refereed papers in the leading optimization journals. He received the Dantzig Prize from SIAM and the Mathematical Programming Society in 2000, the von Neumann Theory Prize from INFORMS in 2009, and a SIAM Outstanding Paper award in 2014.
Andrei M Raigorodskii (Moscow Institute of Physics and Technology)
Aleksey Tikhonov (Yandex Technologies GmbH)
Maksim Zhukovskii
More from the Same Authors
- 2021 : Decentralized Personalized Federated Learning: Lower Bounds and Optimal Algorithm for All Personalization Modes »
  Abdurakhmon Sadiev · Ekaterina Borodich · Darina Dvinskikh · Aleksandr Beznosikov · Alexander Gasnikov
- 2021 : Decentralized Personalized Federated Min-Max Problems »
  Ekaterina Borodich · Aleksandr Beznosikov · Abdurakhmon Sadiev · Vadim Sushko · Alexander Gasnikov
- 2022 : Effects of momentum scaling for SGD »
  Dmitry A. Pasechnyuk · Alexander Gasnikov · Martin Takac
- 2023 Poster: Accelerated Zeroth-order Method for Non-Smooth Stochastic Convex Optimization Problem with Infinite Variance »
  Nikita Kornilov · Ohad Shamir · Aleksandr Lobanov · Alexander Gasnikov · Innokentiy Shibaev · Eduard Gorbunov · Darina Dvinskikh · Samuel Horváth
- 2023 Poster: Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities »
  Aleksandr Beznosikov · Martin Takac · Alexander Gasnikov
- 2023 Poster: First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities »
  Aleksandr Beznosikov · Sergey Samsonov · Marina Sheshukova · Alexander Gasnikov · Alexey Naumov · Eric Moulines
- 2022 Spotlight: Accelerated Primal-Dual Gradient Method for Smooth and Convex-Concave Saddle-Point Problems with Bilinear Coupling »
  Dmitry Kovalev · Alexander Gasnikov · Peter Richtarik
- 2022 Spotlight: The First Optimal Acceleration of High-Order Methods in Smooth Convex Optimization »
  Dmitry Kovalev · Alexander Gasnikov
- 2022 Spotlight: Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees »
  Aleksandr Beznosikov · Peter Richtarik · Michael Diskin · Max Ryabinin · Alexander Gasnikov
- 2022 Spotlight: Optimal Algorithms for Decentralized Stochastic Variational Inequalities »
  Dmitry Kovalev · Aleksandr Beznosikov · Abdurakhmon Sadiev · Michael Persiianov · Peter Richtarik · Alexander Gasnikov
- 2022 Spotlight: Optimal Gradient Sliding and its Application to Optimal Distributed Optimization Under Similarity »
  Dmitry Kovalev · Aleksandr Beznosikov · Ekaterina Borodich · Alexander Gasnikov · Gesualdo Scutari
- 2022 Spotlight: The First Optimal Algorithm for Smooth and Strongly-Convex-Strongly-Concave Minimax Optimization »
  Dmitry Kovalev · Alexander Gasnikov
- 2022 Spotlight: Decentralized Local Stochastic Extra-Gradient for Variational Inequalities »
  Aleksandr Beznosikov · Pavel Dvurechenskii · Anastasiia Koloskova · Valentin Samokhin · Sebastian Stich · Alexander Gasnikov
- 2022 Poster: Optimal Gradient Sliding and its Application to Optimal Distributed Optimization Under Similarity »
  Dmitry Kovalev · Aleksandr Beznosikov · Ekaterina Borodich · Alexander Gasnikov · Gesualdo Scutari
- 2022 Poster: Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise »
  Eduard Gorbunov · Marina Danilova · David Dobre · Pavel Dvurechenskii · Alexander Gasnikov · Gauthier Gidel
- 2022 Poster: The First Optimal Algorithm for Smooth and Strongly-Convex-Strongly-Concave Minimax Optimization »
  Dmitry Kovalev · Alexander Gasnikov
- 2022 Poster: A Damped Newton Method Achieves Global $\mathcal O \left(\frac{1}{k^2}\right)$ and Local Quadratic Convergence Rate »
  Slavomír Hanzely · Dmitry Kamzolov · Dmitry Pasechnyuk · Alexander Gasnikov · Peter Richtarik · Martin Takac
- 2022 Poster: The First Optimal Acceleration of High-Order Methods in Smooth Convex Optimization »
  Dmitry Kovalev · Alexander Gasnikov
- 2022 Poster: Optimal Algorithms for Decentralized Stochastic Variational Inequalities »
  Dmitry Kovalev · Aleksandr Beznosikov · Abdurakhmon Sadiev · Michael Persiianov · Peter Richtarik · Alexander Gasnikov
- 2022 Poster: Accelerated Primal-Dual Gradient Method for Smooth and Convex-Concave Saddle-Point Problems with Bilinear Coupling »
  Dmitry Kovalev · Alexander Gasnikov · Peter Richtarik
- 2022 Poster: Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees »
  Aleksandr Beznosikov · Peter Richtarik · Michael Diskin · Max Ryabinin · Alexander Gasnikov
- 2022 Poster: Decentralized Local Stochastic Extra-Gradient for Variational Inequalities »
  Aleksandr Beznosikov · Pavel Dvurechenskii · Anastasiia Koloskova · Valentin Samokhin · Sebastian Stich · Alexander Gasnikov
- 2021 Poster: Good Classification Measures and How to Find Them »
  Martijn Gösgens · Anton Zhiyanov · Aleksey Tikhonov · Liudmila Prokhorenkova
- 2021 Poster: Distributed Saddle-Point Problems Under Data Similarity »
  Aleksandr Beznosikov · Gesualdo Scutari · Alexander Rogozin · Alexander Gasnikov
- 2021 Poster: Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks »
  Dmitry Kovalev · Elnur Gasanov · Alexander Gasnikov · Peter Richtarik
- 2020 Poster: Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping »
  Eduard Gorbunov · Marina Danilova · Alexander Gasnikov
- 2020 Poster: Convex optimization based on global lower second-order models »
  Nikita Doikov · Yurii Nesterov
- 2020 Oral: Convex optimization based on global lower second-order models »
  Nikita Doikov · Yurii Nesterov
- 2018 : Poster spotlight »
  Tianbao Yang · Pavel Dvurechenskii · Panayotis Mertikopoulos · Hugo Berard
- 2018 Poster: Decentralize and Randomize: Faster Algorithm for Wasserstein Barycenters »
  Pavel Dvurechenskii · Darina Dvinskikh · Alexander Gasnikov · Cesar Uribe · Angelia Nedich
- 2018 Spotlight: Decentralize and Randomize: Faster Algorithm for Wasserstein Barycenters »
  Pavel Dvurechenskii · Darina Dvinskikh · Alexander Gasnikov · Cesar Uribe · Angelia Nedich
- 2016 Poster: Efficient High-Order Interaction-Aware Feature Selection Based on Conditional Mutual Information »
  Alexander Shishkin · Anastasia Bezzubtseva · Alexey Drutsa · Ilia Shishkov · Ekaterina Gladkikh · Gleb Gusev · Pavel Serdyukov
- 2014 Invited Talk: Subgradient Methods for Huge-Scale Optimization Problems »
  Yurii Nesterov