Poster
Submodular Maximization via Gradient Ascent: The Case of Deep Submodular Functions
Wenruo Bai · William Stafford Noble · Jeffrey A Bilmes
Room 210 #36
Keywords: [ Combinatorial Optimization ] [ Convex Optimization ] [ Submodular Optimization ]
Abstract:
We study the problem of maximizing deep submodular functions (DSFs) subject to a matroid constraint. DSFs are an expressive class of submodular functions that include, as strict subfamilies, facility location functions, weighted coverage functions, and sums of concave functions composed with modular functions. We follow a strategy similar to the continuous greedy approach, but we show that the multilinear extension of any DSF has a natural and computationally attainable concave relaxation that can be optimized using gradient ascent. Our results show a guarantee of $\max_{0<\delta<1}\left(1-\epsilon-\delta-e^{-\delta^2\Omega(k)}\right)$ with a running time of $O(n^2/\epsilon^2)$, plus the time for pipage rounding to recover a discrete solution, where $k$ is the rank of the matroid constraint. This bound is often better than the standard $1-1/e$ guarantee of the continuous greedy algorithm, and the algorithm runs much faster. Moreover, our bound holds even for fully curved ($c=1$) functions, where the curvature-dependent guarantee $1-c/e$ degenerates to $1-1/e$, $c$ being the curvature of $f$. We perform computational experiments that support our theoretical results.
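To make the algorithmic idea concrete, here is a minimal illustrative sketch of projected gradient ascent on a concave extension under a simple cardinality-$k$ matroid (a uniform matroid), not the paper's actual implementation. The helper names (`project_capped_simplex`, `maximize_concave_extension`, `grad_F`) and the toy objective (a sum of square roots of modular functions, one simple DSF-like instance) are assumptions for illustration, and the final top-$k$ selection is a crude stand-in for pipage rounding.

```python
import numpy as np

def project_capped_simplex(y, k, n_iters=50):
    """Euclidean projection of y onto {x : 0 <= x <= 1, sum(x) <= k},
    via bisection on the dual threshold tau (a standard technique;
    an illustrative helper, not taken from the paper)."""
    lo, hi = y.min() - 1.0, y.max()  # sum is n at lo, 0 at hi
    for _ in range(n_iters):
        tau = 0.5 * (lo + hi)
        x = np.clip(y - tau, 0.0, 1.0)
        if x.sum() > k:
            lo = tau  # raise threshold to shrink the sum
        else:
            hi = tau
    return np.clip(y - 0.5 * (lo + hi), 0.0, 1.0)

def maximize_concave_extension(grad_F, n, k, step=0.1, n_steps=200):
    """Projected gradient ascent on a concave extension of a DSF over
    the polytope of a rank-k uniform matroid. grad_F(x) is assumed to
    return a gradient of the concave extension at x."""
    x = np.full(n, k / n)  # feasible interior starting point
    for _ in range(n_steps):
        x = project_capped_simplex(x + step * grad_F(x), k)
    return x

# Toy usage: concave extension Fbar(x) = sum_j sqrt(w_j . x) for
# nonnegative weight rows w_j (a concave function of a modular one,
# so gradient ascent applies).
rng = np.random.default_rng(0)
W = rng.random((5, 20))
grad = lambda x: (W / (2.0 * np.sqrt(W @ x + 1e-12)[:, None])).sum(axis=0)
x_star = maximize_concave_extension(grad, n=20, k=5)
S = np.argsort(-x_star)[:5]  # crude top-k rounding in place of pipage rounding
```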