
Oral
An Analysis of Convex Relaxations for MAP Estimation
Pawan K Mudigonda · Vladimir Kolmogorov · Philip Torr

Tue Dec 04 10:50 AM -- 11:10 AM (PST)

The problem of obtaining the maximum a posteriori (MAP) estimate of a given Markov random field is known to be NP-hard in general. However, due to its central importance in many applications, several approximate algorithms have been proposed in the literature. In this paper, we present an analysis of three such algorithms based on convex relaxations: (i) LP-S: the linear programming (LP) relaxation proposed by Schlesinger for a special case and independently by Chekuri et al., Koster et al. and Wainwright et al. for the general case; (ii) QP-RL: the quadratic programming (QP) relaxation by Ravikumar and Lafferty; and (iii) SOCP-MS: the second-order cone programming (SOCP) relaxation first proposed by Muramatsu and Suzuki for two-label problems and later extended by Kumar et al. to a general label set. Specifically, we show that the SOCP-MS and QP-RL relaxations are equivalent. Furthermore, we prove that despite the flexibility in the form of the constraints and the objective function offered by QP and SOCP, the LP-S relaxation dominates (i.e., provides a better approximation than) QP-RL and SOCP-MS. We generalize these results by defining a large class of SOCP (and equivalent QP) relaxations which are dominated by the LP-S relaxation. Based on these results we propose some novel SOCP constraints which dominate the previous approaches. We develop efficient algorithms for solving the resulting relaxations and show that the empirical results conform with our analysis.
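
For context, the LP-S relaxation referenced above is, in its standard form, the Schlesinger-style local-polytope LP over a pairwise MRF. A minimal sketch (notation assumed, not taken from the abstract): let $\theta_a(i)$ and $\theta_{ab}(i,j)$ denote unary and pairwise potentials over nodes $a$, edges $(a,b)$, and labels $i,j$, with marginal variables $x_a(i)$ and $x_{ab}(i,j)$:

$$
\begin{aligned}
\min_{x}\quad & \sum_{a}\sum_{i} \theta_a(i)\, x_a(i) \;+\; \sum_{(a,b)}\sum_{i,j} \theta_{ab}(i,j)\, x_{ab}(i,j) \\
\text{s.t.}\quad & \sum_{i} x_a(i) = 1 \quad \forall a, \\
& \sum_{j} x_{ab}(i,j) = x_a(i) \quad \forall (a,b),\, \forall i, \\
& x_a(i) \ge 0,\; x_{ab}(i,j) \ge 0.
\end{aligned}
$$

Relaxing the integrality constraint $x_a(i) \in \{0,1\}$ to $x_a(i) \ge 0$ is what makes the problem a tractable LP; the paper's dominance results compare the tightness of this feasible region against those of the QP-RL and SOCP-MS relaxations.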