Poster
Stochastic Continuous Greedy++: When Upper and Lower Bounds Match
Amin Karbasi · Hamed Hassani · Aryan Mokhtari · Zebang Shen
Wed Dec 11 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #169
In this paper, we develop Stochastic Continuous Greedy++ (SCG++), the first efficient variant of a conditional gradient method for maximizing a continuous submodular function subject to a convex constraint. Concretely, for a monotone and continuous DR-submodular function, SCG++ achieves a tight $[(1-1/e)\text{OPT}-\epsilon]$ solution while using $O(1/\epsilon^2)$ stochastic gradients and $O(1/\epsilon)$ calls to the linear optimization oracle. The best previously known algorithms either achieve a suboptimal $[(1/2)\text{OPT}-\epsilon]$ solution with $O(1/\epsilon^2)$ stochastic gradients or the tight $[(1-1/e)\text{OPT}-\epsilon]$ solution with a suboptimal $O(1/\epsilon^3)$ stochastic gradient complexity. We further provide an information-theoretic lower bound showing that $\Omega(1/\epsilon^2)$ stochastic oracle queries are necessary to achieve $[(1-1/e)\text{OPT}-\epsilon]$ for monotone DR-submodular functions. This result shows that SCG++ is optimal in terms of both its approximation guarantee, i.e., the $(1-1/e)$ approximation factor, and its stochastic gradient complexity, i.e., $O(1/\epsilon^2)$ calls to the stochastic oracle. Using stochastic continuous optimization as an interface, we also show that the tight $[(1-1/e)\text{OPT}-\epsilon]$ approximation guarantee can be obtained for maximizing a monotone but stochastic submodular set function subject to a general matroid constraint after at most $\mathcal{O}(n^2/\epsilon^2)$ calls to the stochastic function value oracle, where $n$ is the number of elements in the ground set.
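To make the complexity bounds above concrete, here is a minimal Python sketch of a variance-reduced stochastic continuous greedy loop in the spirit of SCG++. The oracle interfaces (stoch_grad, stoch_hvp, lmo), the batch sizes, and the uniform sampling along each step segment are illustrative assumptions for this sketch, not the authors' implementation.

    import numpy as np

    def scg_pp(stoch_grad, stoch_hvp, lmo, dim, eps, rng=None):
        # Assumed oracle APIs (hypothetical, for illustration only):
        #   stoch_grad(x)   -> one stochastic gradient sample at x
        #   stoch_hvp(x, d) -> one stochastic Hessian-vector product at x along d
        #   lmo(g)          -> argmax over v in the constraint set C of <v, g>
        rng = rng if rng is not None else np.random.default_rng(0)
        T = int(np.ceil(1.0 / eps))        # iterations: O(1/eps) LMO calls
        m0 = int(np.ceil(1.0 / eps ** 2))  # initial gradient minibatch
        m = int(np.ceil(1.0 / eps))        # per-iteration HVP minibatch

        x = np.zeros(dim)  # origin assumed feasible, as in continuous greedy
        # One large minibatch estimates the gradient at x_0; later iterations
        # only track how the gradient changes along the trajectory.
        g = np.mean([stoch_grad(x) for _ in range(m0)], axis=0)
        for _ in range(T):
            v = lmo(g)   # linear optimization oracle over the constraint set
            d = v / T    # continuous-greedy step of size 1/T toward v
            # Estimate grad F(x + d) - grad F(x) by averaging stochastic
            # Hessian-vector products at points sampled uniformly on [x, x + d].
            delta = np.mean(
                [stoch_hvp(x + rng.uniform() * d, d) for _ in range(m)], axis=0
            )
            g = g + delta  # variance-reduced running gradient estimate
            x = x + d
        return x

Under these choices the loop issues m0 + T*m = O(1/eps^2) stochastic queries in total, matching the upper bound stated in the abstract, while the linear optimization oracle is called only T = O(1/eps) times.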
Author Information
Amin Karbasi (Yale)
Hamed Hassani (UPenn)
Aryan Mokhtari (UT Austin)
Zebang Shen (University of Pennsylvania)
More from the Same Authors
- 2022: Conditional gradient-based method for bilevel optimization with convex lower-level problem »
  Ruichen Jiang · Nazanin Abolfazli · Aryan Mokhtari · Erfan Yazdandoost Hamedani
- 2022: Statistical and Computational Complexities of BFGS Quasi-Newton Method for Generalized Linear Models »
  Qiujiang Jin · Aryan Mokhtari · Nhat Ho · Tongzheng Ren
- 2022 Poster: Collaborative Learning of Discrete Distributions under Heterogeneity and Communication Constraints »
  Xinmeng Huang · Donghwan Lee · Edgar Dobriban · Hamed Hassani
- 2022 Poster: Probable Domain Generalization via Quantile Risk Minimization »
  Cian Eastwood · Alexander Robey · Shashank Singh · Julius von Kügelgen · Hamed Hassani · George J. Pappas · Bernhard Schölkopf
- 2022 Poster: FedAvg with Fine Tuning: Local Updates Lead to Representation Learning »
  Liam Collins · Hamed Hassani · Aryan Mokhtari · Sanjay Shakkottai
- 2022 Poster: Collaborative Linear Bandits with Adversarial Agents: Near-Optimal Regret Bounds »
  Aritra Mitra · Arman Adibi · George J. Pappas · Hamed Hassani
- 2021 Poster: Exploiting Local Convergence of Quasi-Newton Methods Globally: Adaptive Sample Size Approach »
  Qiujiang Jin · Aryan Mokhtari
- 2021 Poster: Generalization of Model-Agnostic Meta-Learning Algorithms: Recurring and Unseen Tasks »
  Alireza Fallah · Aryan Mokhtari · Asuman Ozdaglar
- 2021 Poster: On the Convergence Theory of Debiased Model-Agnostic Meta-Reinforcement Learning »
  Alireza Fallah · Kristian Georgiev · Aryan Mokhtari · Asuman Ozdaglar
- 2020 Poster: Submodular Maximization Through Barrier Functions »
  Ashwinkumar Badanidiyuru · Amin Karbasi · Ehsan Kazemi · Jan Vondrak
- 2020 Poster: Continuous Submodular Maximization: Beyond DR-Submodularity »
  Moran Feldman · Amin Karbasi
- 2020 Poster: Sinkhorn Natural Gradient for Generative Models »
  Zebang Shen · Zhenfu Wang · Alejandro Ribeiro · Hamed Hassani
- 2020 Poster: Sinkhorn Barycenter via Functional Gradient Descent »
  Zebang Shen · Zhenfu Wang · Alejandro Ribeiro · Hamed Hassani
- 2020 Spotlight: Sinkhorn Natural Gradient for Generative Models »
  Zebang Shen · Zhenfu Wang · Alejandro Ribeiro · Hamed Hassani
- 2020 Spotlight: Submodular Maximization Through Barrier Functions »
  Ashwinkumar Badanidiyuru · Amin Karbasi · Ehsan Kazemi · Jan Vondrak
- 2020 Session: Orals & Spotlights Track 32: Optimization »
  Hamed Hassani · Jeffrey A Bilmes
- 2020 Poster: Minimax Regret of Switching-Constrained Online Convex Optimization: No Phase Transition »
  Lin Chen · Qian Yu · Hannah Lawrence · Amin Karbasi
- 2020 Poster: Task-Robust Model-Agnostic Meta-Learning »
  Liam Collins · Aryan Mokhtari · Sanjay Shakkottai
- 2020 Poster: Second Order Optimality in Decentralized Non-Convex Optimization via Perturbed Gradient Tracking »
  Isidoros Tziotis · Constantine Caramanis · Aryan Mokhtari
- 2020 Poster: Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach »
  Alireza Fallah · Aryan Mokhtari · Asuman Ozdaglar
- 2020 Poster: Online MAP Inference of Determinantal Point Processes »
  Aditya Bhaskara · Amin Karbasi · Silvio Lattanzi · Morteza Zadimoghaddam
- 2020 Poster: Submodular Meta-Learning »
  Arman Adibi · Aryan Mokhtari · Hamed Hassani
- 2019: Invited talk: Aryan Mokhtari (UT Austin) »
  Aryan Mokhtari
- 2019 Poster: Adaptive Sequence Submodularity »
  Marko Mitrovic · Ehsan Kazemi · Moran Feldman · Andreas Krause · Amin Karbasi
- 2019 Poster: Online Continuous Submodular Maximization: From Full-Information to Bandit Feedback »
  Mingrui Zhang · Lin Chen · Hamed Hassani · Amin Karbasi
- 2019 Poster: Robust and Communication-Efficient Collaborative Learning »
  Amirhossein Reisizadeh · Hossein Taheri · Aryan Mokhtari · Hamed Hassani · Ramtin Pedarsani
- 2019 Poster: Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks »
  Mahyar Fazlyab · Alexander Robey · Hamed Hassani · Manfred Morari · George J. Pappas
- 2019 Spotlight: Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks »
  Mahyar Fazlyab · Alexander Robey · Hamed Hassani · Manfred Morari · George J. Pappas
- 2018 Poster: Direct Runge-Kutta Discretization Achieves Acceleration »
  Jingzhao Zhang · Aryan Mokhtari · Suvrit Sra · Ali Jadbabaie
- 2018 Spotlight: Direct Runge-Kutta Discretization Achieves Acceleration »
  Jingzhao Zhang · Aryan Mokhtari · Suvrit Sra · Ali Jadbabaie
- 2018 Poster: Do Less, Get More: Streaming Submodular Maximization with Subsampling »
  Moran Feldman · Amin Karbasi · Ehsan Kazemi
- 2018 Spotlight: Do Less, Get More: Streaming Submodular Maximization with Subsampling »
  Moran Feldman · Amin Karbasi · Ehsan Kazemi
- 2018 Poster: Escaping Saddle Points in Constrained Optimization »
  Aryan Mokhtari · Asuman Ozdaglar · Ali Jadbabaie
- 2018 Spotlight: Escaping Saddle Points in Constrained Optimization »
  Aryan Mokhtari · Asuman Ozdaglar · Ali Jadbabaie
- 2017 Workshop: Discrete Structures in Machine Learning »
  Yaron Singer · Jeff A Bilmes · Andreas Krause · Stefanie Jegelka · Amin Karbasi
- 2017 Poster: Interactive Submodular Bandit »
  Lin Chen · Andreas Krause · Amin Karbasi
- 2017 Poster: Streaming Weak Submodularity: Interpreting Neural Networks on the Fly »
  Ethan Elenberg · Alex Dimakis · Moran Feldman · Amin Karbasi
- 2017 Oral: Streaming Weak Submodularity: Interpreting Neural Networks on the Fly »
  Ethan Elenberg · Alex Dimakis · Moran Feldman · Amin Karbasi
- 2017 Poster: Gradient Methods for Submodular Maximization »
  Hamed Hassani · Mahdi Soltanolkotabi · Amin Karbasi
- 2017 Poster: Stochastic Submodular Maximization: The Case of Coverage Functions »
  Mohammad Karimi · Mario Lucic · Hamed Hassani · Andreas Krause
- 2016 Poster: Estimating the Size of a Large Network and its Communities from a Random Sample »
  Lin Chen · Amin Karbasi · Forrest W. Crawford
- 2016 Poster: Fast Distributed Submodular Cover: Public-Private Data Summarization »
  Baharan Mirzasoleiman · Morteza Zadimoghaddam · Amin Karbasi
- 2015 Poster: Distributed Submodular Cover: Succinctly Summarizing Massive Data »
  Baharan Mirzasoleiman · Amin Karbasi · Ashwinkumar Badanidiyuru · Andreas Krause
- 2015 Spotlight: Distributed Submodular Cover: Succinctly Summarizing Massive Data »
  Baharan Mirzasoleiman · Amin Karbasi · Ashwinkumar Badanidiyuru · Andreas Krause