Please join us in gather.town (see link above). Abstracts of the posters presented in this session appear below the schedule.
Authors/papers presenting posters in gather.town for this session:
Gaussian Graphical Models as an Ensemble Method for Distributed Gaussian Processes, Hamed Jalali
DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning, Robert Hönig
Using a one dimensional parabolic model of the full-batch loss to estimate learning rates during training, Maximus Mutschler
COCO Denoiser: Using Co-Coercivity for Variance Reduction in Stochastic Convex Optimization, Manuel Madeira
Decentralized Personalized Federated Learning: Lower Bounds and Optimal Algorithm for All Personalization Modes, Abdurakhmon Sadiev
Shifted Compression Framework: Generalizations and Improvements, Egor Shulgin
Faking Interpolation Until You Make It, Alasdair J Paren
Towards Modeling and Resolving Singular Parameter Spaces using Stratifolds, Pascal M Esser
Spherical Perspective on Learning with Normalization Layers, Simon W Roburin
Adaptive Optimization with Examplewise Gradients, Julius Kunze
On the Relation between Distributionally Robust Optimization and Data Curation, Agnieszka Słowik
Fast, Exact Subsampled Natural Gradients and First-Order KFAC, Frederik Benzing
Understanding Memorization from the Perspective of Optimization via Efficient Influence Estimation, Futong Liu
Community-based Layerwise Distributed Training of Graph Convolutional Networks, Hongyi Li
A New Scheme for Boosting with an Average Margin Distribution Oracle, Ryotaro Mitsuboshi
Better Linear Rates for SGD with Data Shuffling, Grigory Malinovsky
Structured Low-Rank Tensor Learning, Jayadev Naram
ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method, Zhize Li
EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback, Igor Sokolov
On Server-Side Stepsizes in Federated Optimization: Theory Explaining the Heuristics, Grigory Malinovsky
Author Information
Hamed Jalali (University of Tuebingen)
Robert Hönig (ETH Zürich)
Maximus Mutschler (University of Tübingen)
Manuel Madeira (Instituto Superior Técnico)
Abdurakhmon Sadiev (Moscow Institute of Physics and Technology)
Egor Shulgin (Samsung AI Cambridge, King Abdullah University of Science and Technology)
I am a PhD student in Computer Science at King Abdullah University of Science and Technology (KAUST) advised by [Peter Richtárik](https://richtarik.org/). Prior to that, I obtained a BSc in Applied Mathematics, Computer Science, and Physics from the Moscow Institute of Physics and Technology in 2019.
Alasdair Paren (University of Oxford)
Pascal Esser (Technical University of Munich)
Simon Roburin (ENPC; valeo.ai)
Julius Kunze (University College London)
Agnieszka Słowik (Department of Computer Science and Technology University of Cambridge)
Frederik Benzing (ETH Zurich)
Futong Liu (EPFL)
Hongyi Li (Xidian University)
Ryotaro Mitsuboshi (Kyushu University)
Grigory Malinovsky (King Abdullah University of Science and Technology)
Jayadev Naram (International Institute of Information Technology, Hyderabad)
Zhize Li (King Abdullah University of Science and Technology (KAUST))
Zhize Li has been a Research Scientist at King Abdullah University of Science and Technology (KAUST) since September 2020. He obtained his PhD in Computer Science from Tsinghua University in 2019 (advisor: Prof. Jian Li). He was a postdoc at KAUST (hosted by Prof. Peter Richtárik), a visiting scholar at Duke University (hosted by Prof. Rong Ge), and a visiting scholar at the Georgia Institute of Technology (hosted by Prof. Guanghui (George) Lan).
Igor Sokolov (KAUST)
Sharan Vaswani (University of Alberta)
More from the Same Authors
-
2021 : A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines »
Vadim Borisov · Johannes Meier · Johan Van den Heuvel · Hamed Jalali · Gjergji Kasneci -
2021 : Decentralized Personalized Federated Learning: Lower Bounds and Optimal Algorithm for All Personalization Modes »
Abdurakhmon Sadiev · Ekaterina Borodich · Darina Dvinskikh · Aleksandr Beznosikov · Alexander Gasnikov -
2021 : Towards Modeling and Resolving Singular Parameter Spaces using Stratifolds »
Pascal Esser · Frank Nielsen -
2021 : Spherical Perspective on Learning with Normalization Layers »
Simon Roburin · Yann de Mont-Marin · Andrei Bursuc · Renaud Marlet · Patrick Pérez · Mathieu Aubry -
2021 : Better Linear Rates for SGD with Data Shuffling »
Grigory Malinovsky · Alibek Sailanbayev · Peter Richtarik -
2021 : Fast, Exact Subsampled Natural Gradients and First-Order KFAC »
Frederik Benzing -
2021 : DESTRESS: Computation-Optimal and Communication-Efficient Decentralized Nonconvex Finite-Sum Optimization »
Boyue Li · Zhize Li · Yuejie Chi -
2021 : Gaussian Graphical Models as an Ensemble Method for Distributed Gaussian Processes »
Hamed Jalali · Gjergji Kasneci -
2021 : DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning »
Robert Hönig · Yiren Zhao · Robert Mullins -
2021 : Using a one dimensional parabolic model of the full-batch loss to estimate learning rates during training »
Maximus Mutschler · Andreas Zell -
2021 : Community-based Layerwise Distributed Training of Graph Convolutional Networks »
Hongyi Li · Junxiang Wang · Yongchao Wang · Yue Cheng · Liang Zhao -
2021 : COCO Denoiser: Using Co-Coercivity for Variance Reduction in Stochastic Convex Optimization »
Manuel Madeira · Renato Negrinho · Joao Xavier · Pedro Aguiar -
2021 : Shifted Compression Framework: Generalizations and Improvements »
Egor Shulgin · Peter Richtarik -
2021 : A New Scheme for Boosting with an Average Margin Distribution Oracle »
Ryotaro Mitsuboshi · Kohei Hatano · Eiji Takimoto -
2021 : Faking Interpolation Until You Make It »
Alasdair Paren · Rudra Poudel · Pawan K Mudigonda -
2021 : Adaptive Optimization with Examplewise Gradients »
Julius Kunze · James Townsend · David Barber -
2021 : Structured Low-Rank Tensor Learning »
Jayadev Naram · Tanmay Sinha · Pawan Kumar -
2021 : ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method »
Zhize Li -
2021 : EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback »
Peter Richtarik · Igor Sokolov · Ilyas Fatkhullin · Eduard Gorbunov · Zhize Li -
2021 : Towards Noise-adaptive, Problem-adaptive Stochastic Gradient Descent »
Sharan Vaswani · Benjamin Dubois-Taine · Reza Babanezhad Harikandeh -
2021 : On Server-Side Stepsizes in Federated Optimization: Theory Explaining the Heuristics »
Grigory Malinovsky · Konstantin Mishchenko · Peter Richtarik -
2021 : On the Relation between Distributionally Robust Optimization and Data Curation »
Agnieszka Słowik · Leon Bottou -
2021 : Understanding Memorization from the Perspective of Optimization via Efficient Influence Estimation »
Futong Liu · Tao Lin · Martin Jaggi -
2021 : Decentralized Personalized Federated Min-Max Problems »
Ekaterina Borodich · Aleksandr Beznosikov · Abdurakhmon Sadiev · Vadim Sushko · Alexander Gasnikov -
2021 : Poster: Algorithmic Bias and Data Bias: Understanding the Relation between Distributionally Robust Optimization and Data Curation »
Agnieszka Słowik · Leon Bottou -
2021 : [S4] A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines »
Vadim Borisov · Johannes Meier · Johan Van den Heuvel · Hamed Jalali · Gjergji Kasneci -
2021 : Contributed talks in Session 4 (Zoom) »
Quanquan Gu · Agnieszka Słowik · Jacques Chen · Neha Wadia · Difan Zou -
2021 : Contributed talks in Session 3 (Zoom) »
Oliver Hinder · Wenhao Zhan · Akhilesh Soni · Grigory Malinovsky · Boyue Li -
2021 : Contributed Talks in Session 2 (Zoom) »
Courtney Paquette · Chris Junchi Li · Jeffery Kline · Junhyung Lyle Kim · Pascal Esser -
2021 : Contributed Talks in Session 1 (Zoom) »
Sebastian Stich · Futong Liu · Abdurakhmon Sadiev · Frederik Benzing · Simon Roburin -
2021 : Algorithmic Bias and Data Bias: Understanding the Relation between Distributionally Robust Optimization and Data Curation »
Agnieszka Słowik · Leon Bottou -
2021 Poster: EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback »
Peter Richtarik · Igor Sokolov · Ilyas Fatkhullin -
2021 Poster: Learning Theory Can (Sometimes) Explain Generalisation in Graph Neural Networks »
Pascal Esser · Leena Chennuru Vankadara · Debarghya Ghoshdastidar -
2021 Poster: CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression »
Zhize Li · Peter Richtarik -
2021 Oral: EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback »
Peter Richtarik · Igor Sokolov · Ilyas Fatkhullin -
2020 : Poster Session 2 (gather.town) »
Sharan Vaswani · Nicolas Loizou · Wenjie Li · Preetum Nakkiran · Zhan Gao · Sina Baghal · Jingfeng Wu · Roozbeh Yousefzadeh · Jinyi Wang · Jing Wang · Cong Xie · Anastasia Borovykh · Stanislaw Jastrzebski · Soham Dan · Yiliang Zhang · Mark Tuddenham · Sarath Pattathil · Ievgen Redko · Jeremy Cohen · Yasaman Esfandiari · Zhanhong Jiang · Mostafa ElAraby · Chulhee Yun · Michael Psenka · Robert Gower · Xiaoyu Wang -
2020 : Contributed talks in Session 2 (Zoom) »
Martin Takac · Samuel Horváth · Guan-Horng Liu · Nicolas Loizou · Sharan Vaswani -
2020 : Contributed Video: Adaptive Gradient Methods Converge Faster with Over-Parameterization (and you can do a line-search), Sharan Vaswani »
Sharan Vaswani -
2020 : Contributed Video: How to make your optimizer generalize better, Sharan Vaswani »
Sharan Vaswani -
2020 : Poster Session 1 (gather.town) »
Laurent Condat · Tiffany Vlaar · Ohad Shamir · Mohammadi Zaki · Zhize Li · Guan-Horng Liu · Samuel Horváth · Mher Safaryan · Yoni Choukroun · Kumar Shridhar · Nabil Kahale · Jikai Jin · Pratik Kumar Jawanpuria · Gaurav Kumar Yadav · Kazuki Koyama · Junyoung Kim · Xiao Li · Saugata Purkayastha · Adil Salim · Dighanchal Banerjee · Peter Richtarik · Lakshman Mahto · Tian Ye · Bamdev Mishra · Huikang Liu · Jiajie Zhu -
2020 : Contributed talks in Session 1 (Zoom) »
Sebastian Stich · Laurent Condat · Zhize Li · Ohad Shamir · Tiffany Vlaar · Mohammadi Zaki -
2020 : Contributed Video: PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization, Zhize Li »
Zhize Li -
2020 Poster: Parabolic Approximation Line Search for DNNs »
Maximus Mutschler · Andreas Zell -
2020 Poster: Near-Optimal Comparison Based Clustering »
Michaël Perrot · Pascal Esser · Debarghya Ghoshdastidar -
2019 : Poster Session »
Eduard Gorbunov · Alexandre d'Aspremont · Lingxiao Wang · Liwei Wang · Boris Ginsburg · Alessio Quaglino · Camille Castera · Saurabh Adya · Diego Granziol · Rudrajit Das · Raghu Bollapragada · Fabian Pedregosa · Martin Takac · Majid Jahani · Sai Praneeth Karimireddy · Hilal Asi · Balint Daroczy · Leonard Adolphs · Aditya Rawal · Nicolas Brandt · Minhan Li · Giuseppe Ughi · Orlando Romero · Ivan Skorokhodov · Damien Scieur · Kiwook Bae · Konstantin Mishchenko · Rohan Anil · Vatsal Sharan · Aditya Balu · Chao Chen · Zhewei Yao · Tolga Ergen · Paul Grigas · Chris Junchi Li · Jimmy Ba · Stephen J Roberts · Sharan Vaswani · Armin Eftekhari · Chhavi Sharma -
2019 Poster: A unified variance-reduced accelerated gradient method for convex optimization »
Guanghui Lan · Zhize Li · Yi Zhou -
2019 Poster: SSRGD: Simple Stochastic Recursive Gradient Descent for Escaping Saddle Points »
Zhize Li -
2019 Poster: Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates »
Sharan Vaswani · Aaron Mishkin · Issam Laradji · Mark Schmidt · Gauthier Gidel · Simon Lacoste-Julien -
2018 Poster: Modular Networks: Learning to Decompose Neural Computation »
Louis Kirsch · Julius Kunze · David Barber