Results 1–10 of 40
Accelerated dual decomposition for MAP inference
In ICML, 2010
Cited by 36 (1 self)
Approximate MAP inference in graphical models is an important and challenging problem for many domains including computer vision, computational biology and natural language understanding. Current state-of-the-art approaches employ convex relaxations of these problems as surrogate objectives, but only provide weak running time guarantees. In this paper, we develop an approximate inference algorithm that is both efficient and has strong theoretical guarantees. Specifically, our algorithm is guaranteed to converge to an accurate solution of the convex relaxation in O
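As a point of reference for the setup being accelerated, here is a minimal, hedged sketch of plain projected-subgradient dual decomposition (not the accelerated variant from the paper). The toy problem, variable names, and step size are all illustrative assumptions: one shared binary variable is duplicated across two unary "slave" problems, and a dual variable lam prices their disagreement.

```python
# Toy dual decomposition: maximize theta1(x) + theta2(x) over binary x by
# giving each slave its own copy (x1, x2) and a dual price lam on agreement.
theta1 = {0: 0.0, 1: 1.0}   # slave 1 prefers x = 1
theta2 = {0: 0.5, 1: 0.0}   # slave 2 mildly prefers x = 0

lam = 0.0
for t in range(1, 101):
    x1 = max((0, 1), key=lambda x: theta1[x] + lam * x)  # slave 1's local MAP
    x2 = max((0, 1), key=lambda x: theta2[x] - lam * x)  # slave 2's local MAP
    lam -= (1.0 / t) * (x1 - x2)      # subgradient step on the dual, rate 1/t

# True MAP of theta1 + theta2 is x = 1 (score 1.0 beats 0.5 at x = 0),
# and the two copies agree on it once the dual settles.
assert x1 == x2 == 1
```

The diminishing 1/t step is exactly the slow schedule whose O(1/ε²) behavior accelerated methods improve on.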
Approximate Inference in Graphical Models using LP Relaxations
2010
Cited by 27 (1 self)
Graphical models such as Markov random fields have been successfully applied to a wide variety of fields, from computer vision and natural language processing, to computational biology. Exact probabilistic inference is generally intractable in complex models having many dependencies between the variables. We present new approaches to approximate inference based on linear programming (LP) relaxations. Our algorithms optimize over the cycle relaxation of the marginal polytope, which we show to be closely related to the first lifting of the Sherali-Adams hierarchy, and is significantly tighter than the pairwise LP relaxation. We show how to efficiently optimize over the cycle relaxation using a cutting-plane algorithm that iteratively introduces constraints into the relaxation. We provide a criterion to determine which constraints would be most helpful in tightening the relaxation, and give efficient algorithms for solving the search problem of finding the best cycle constraint to add according to this criterion.
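The gap between the two relaxations can be seen on a tiny hand-built example. The sketch below is an illustrative construction (the frustrated 3-cycle, pseudomarginal values, and the specific cycle inequality are standard but chosen by us, not taken from the paper): a fractional point satisfies every pairwise consistency constraint yet violates a cycle inequality that all distributions over integral labelings obey.

```python
from itertools import product

# Frustrated 3-cycle of binary variables: put all edge mass on "disagree".
edges = [(0, 1), (1, 2), (2, 0)]
node = (0.5, 0.5)                         # mu_i(x_i) for every node
edge = {(0, 1): 0.5, (1, 0): 0.5,         # same mu_ij on every edge:
        (0, 0): 0.0, (1, 1): 0.0}         # zero mass on agreement

# Pairwise consistency: summing out one end recovers the node marginal.
for xi in (0, 1):
    assert abs(sum(edge[(xi, xj)] for xj in (0, 1)) - node[xi]) < 1e-9

# Every integral labeling has at least one agreeing edge, because an odd
# cycle is not 2-colorable; so sum of agree-indicators >= 1 always holds:
for x in product((0, 1), repeat=3):
    assert sum(x[i] == x[j] for i, j in edges) >= 1

# ...but the fractional point puts zero agree-mass on all three edges, so
# the corresponding cycle inequality cuts it off the cycle relaxation.
agree_mass = sum(edge[(0, 0)] + edge[(1, 1)] for _ in edges)
assert agree_mass < 1
```

This is the violation a cutting-plane search would detect and add as a constraint.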
Energy Minimization for Linear Envelope MRFs
Cited by 25 (7 self)
Markov random fields with higher order potentials have emerged as a powerful model for several problems in computer vision. In order to facilitate their use, we propose a new representation for higher order potentials as upper and lower envelopes of linear functions. Our representation concisely models several commonly used higher order potentials, thereby providing a unified framework for minimizing the corresponding Gibbs energy functions. We exploit this framework by converting lower envelope potentials to standard pairwise functions with the addition of a small number of auxiliary variables. This allows us to minimize energy functions with lower envelope potentials using conventional algorithms such as BP, TRW and α-expansion. Furthermore, we show how the minimization of energy functions with upper envelope potentials leads to a difficult min-max problem. We address this difficulty by proposing a new message passing algorithm that solves a linear programming relaxation of the problem. Although this is primarily a theoretical paper, we demonstrate the efficacy of our approach on the binary (fg/bg) segmentation problem.
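The lower-envelope reduction can be checked by brute force on a toy case. In this hedged sketch (the envelope, variable names, and the three-variable instance are our own illustrative choices), psi(x) = min_z (w_z · x + b_z) is minimized by adding one auxiliary switch variable z, after which each term involves only (x_i, z) pairs:

```python
from itertools import product

# A 2-piece lower envelope: a linear "count the 1-labels" piece and a
# constant cap, i.e. a robust truncated counting potential.
envelope = [([1.0, 1.0, 1.0], 0.0),   # w_0 . x + b_0
            ([0.0, 0.0, 0.0], 2.0)]   # flat cap at 2

def psi(x):
    return min(sum(wi * xi for wi, xi in zip(w, x)) + b for w, b in envelope)

# Joint minimization over (x, z) -- the pairwise reformulation -- agrees
# with direct minimization of the envelope potential.
best_direct = min(psi(x) for x in product((0, 1), repeat=3))
best_joint = min(sum(wi * xi for wi, xi in zip(envelope[z][0], x))
                 + envelope[z][1]
                 for x in product((0, 1), repeat=3) for z in (0, 1))
assert abs(best_direct - best_joint) < 1e-9   # both reach 0 at x = (0, 0, 0)
```

Because min_z commutes with min_x, the auxiliary variable never changes the optimum; the upper-envelope case swaps the inner min for a max, which is why it yields the harder min-max problem the abstract mentions.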
Learning Graphical Model Parameters with Approximate Marginal Inference
Cited by 21 (2 self)
Likelihood-based learning of graphical models faces challenges of computational complexity and robustness to model misspecification. This paper studies methods that fit parameters directly to maximize a measure of the accuracy of predicted marginals, taking into account both model and inference approximations at training time. Experiments on imaging problems suggest marginalization-based learning performs better than likelihood-based approximations on difficult problems where the model being fit is approximate in nature.
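The spirit of marginal-based fitting reduces, in the simplest possible case, to matching a model marginal to an empirical one by gradient descent on a marginal-accuracy loss. This toy sketch is entirely our own construction (one parameter, one binary variable, squared-error loss), not the paper's method:

```python
import math

# One-parameter model: marginal p(w) = sigmoid(w). Fit w so the predicted
# marginal matches an empirical target, minimizing squared marginal error
# directly instead of likelihood.
target = 0.7
w = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + math.exp(-w))
    grad = 2.0 * (p - target) * p * (1.0 - p)   # d/dw of (p - target)^2
    w -= 0.5 * grad                             # plain gradient step

assert abs(1.0 / (1.0 + math.exp(-w)) - target) < 1e-3
```

In the paper's setting the predicted marginals come from an approximate inference routine, so the gradient must be taken through that approximation as well.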
Efficient MRF energy minimization via adaptive diminishing smoothing
In UAI, 2012
Cited by 21 (6 self)
We consider the linear programming relaxation of an energy minimization problem for Markov Random Fields. The dual objective of this problem can be treated as a concave and unconstrained, but nonsmooth function. The idea of smoothing the objective prior to optimization was recently proposed in a series of papers, some of which suggested decreasing the amount of smoothing (the so-called temperature) while approaching the optimum. However, no theoretical substantiation was provided. We propose an adaptive diminishing-smoothing algorithm based on the duality gap between the relaxed primal and dual objectives, and demonstrate the efficiency of our approach with a smoothed version of the Sequential Tree-Reweighted Message Passing (TRW-S) algorithm. The strategy is applicable to other algorithms as well, avoids ad hoc tuning of the smoothing during iterations, and provably guarantees convergence to the optimum.
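The smoothing device itself is easy to illustrate in isolation (this sketch is our own toy, not TRW-S): replace the nonsmooth max by a temperature-T log-sum-exp, whose gap to the true max is at most T·log K, so shrinking T during optimization drives the smooth optimum toward the real one.

```python
import math

scores = [1.0, 0.3, -0.5]   # arbitrary example values to be maxed over

def smooth_max(vals, T):
    m = max(vals)                       # subtract max for numerical stability
    return m + T * math.log(sum(math.exp((v - m) / T) for v in vals))

temps = (1.0, 0.2, 0.05)
gaps = [smooth_max(scores, T) - max(scores) for T in temps]

assert gaps[0] > gaps[1] > gaps[2] > 0          # smoothing gap shrinks with T
assert all(g <= T * math.log(len(scores)) + 1e-12
           for g, T in zip(gaps, temps))        # gap bounded by T * log K
```

The paper's contribution is an adaptive schedule for T driven by the primal-dual gap, rather than the fixed sequence hard-coded here.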
Parameter learning with truncated message-passing
In CVPR, 2011
Cited by 20 (4 self)
Training of conditional random fields often takes the form of a double-loop procedure with message-passing inference in the inner loop. This can be very expensive, as the need to solve the inner loop to high accuracy can require many message-passing iterations. This paper seeks to reduce the expense of such training, by redefining the training objective in terms of the approximate marginals obtained after message-passing is “truncated” to a fixed number of iterations. An algorithm is derived to efficiently compute the exact gradient of this objective. On a common pixel labeling benchmark, this procedure improves training speeds by an order of magnitude, and slightly improves inference accuracy if a very small number of message-passing iterations are used at test time.
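What "truncated" message-passing means can be shown concretely. The sketch below is our own toy (a 3-cycle with weak coupling, synchronous sum-product stopped after a fixed number of sweeps); the paper defines its training loss on exactly this kind of truncated belief.

```python
import math
from itertools import product

unary = [[0.0, 2.0], [1.5, 0.0], [0.0, 1.0]]   # log theta_i(x_i), dominant
coup = 0.2                                      # weak bonus when labels agree
edges = [(0, 1), (1, 2), (2, 0)]

def pair(a, b):
    return coup if a == b else 0.0

def logsumexp(vals):
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def truncated_beliefs(n_iters):
    # Directed log-messages on every edge, initialized to zero.
    msg = {(i, j): [0.0, 0.0] for a, b in edges for i, j in ((a, b), (b, a))}
    for _ in range(n_iters):   # <-- truncation: a fixed, small iteration cap
        msg = {(i, j): [logsumexp([unary[i][xi] + pair(xi, xj)
                                   + sum(m[xi] for (k, l), m in msg.items()
                                         if l == i and k != j)
                                   for xi in (0, 1)])
                        for xj in (0, 1)]
               for (i, j) in list(msg)}
    beliefs = []
    for i in range(3):
        b = [unary[i][xi] + sum(m[xi] for (k, l), m in msg.items() if l == i)
             for xi in (0, 1)]
        z = logsumexp(b)
        beliefs.append([math.exp(v - z) for v in b])
    return beliefs

b = truncated_beliefs(3)                        # stop after just 3 sweeps
assert all(abs(sum(bi) - 1.0) < 1e-9 for bi in b)

# With unaries this dominant, even the truncated argmax matches exact MAP.
exact = max(product((0, 1), repeat=3),
            key=lambda x: sum(unary[i][x[i]] for i in range(3))
                          + sum(pair(x[i], x[j]) for i, j in edges))
assert tuple(max((0, 1), key=lambda s: bi[s]) for bi in b) == exact
```

Running only a handful of sweeps is what makes the inner loop cheap; the paper then differentiates the loss through those few sweeps exactly.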
Variational algorithms for marginal MAP
In UAI, 2011
Cited by 16 (4 self)
Marginal MAP problems are notoriously difficult tasks for graphical models. We derive a general variational framework for solving marginal MAP problems, in which we apply analogues of the Bethe, tree-reweighted, and mean field approximations. We then derive a “mixed” message passing algorithm and a convergent alternative using CCCP to solve the BP-type approximations. Theoretically, we give conditions under which the decoded solution is a global or local optimum, and obtain novel upper bounds on solutions. Experimentally we demonstrate that our algorithms outperform related approaches. We also show that EM and variational EM comprise a special case of our framework.
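For readers unfamiliar with the task itself: marginal MAP maximizes over one set of variables after summing out the rest. This brute-force sketch is our own two-variable toy (the potentials are arbitrary); realistic instances are exactly what the variational machinery above is for.

```python
import math

# theta(a, b): agreement bonus plus a small unary pull toward b = 1.
theta = {(a, b): float(a == b) + 0.5 * b for a in (0, 1) for b in (0, 1)}

def marginal_score(b):              # unnormalized sum_a exp(theta(a, b))
    return sum(math.exp(theta[(a, b)]) for a in (0, 1))

# Marginal MAP: max over the "max" variable b of the summed-out score.
b_star = max((0, 1), key=marginal_score)
assert b_star == 1                  # ~6.13 for b = 1 beats ~3.72 for b = 0
```

The mix of a max over some variables and a sum over others is what breaks the usual dynamic-programming structure and makes the problem harder than either pure MAP or pure marginalization.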
Globally Convergent Dual MAP LP Relaxation Solvers using Fenchel-Young Margins
Cited by 10 (1 self)
While finding the exact solution for the MAP inference problem is intractable for many real-world tasks, MAP LP relaxations have been shown to be very effective in practice. However, the most efficient methods that perform block coordinate descent can get stuck in suboptimal points as they are not globally convergent. In this work we propose to augment these algorithms with an ε-descent approach and present a method to efficiently optimize for a descent direction in the subdifferential using a margin-based formulation of the Fenchel-Young duality theorem. Furthermore, the presented approach provides a methodology to construct a primal optimal solution from its dual optimal counterpart. We demonstrate the efficiency of the presented approach on spin glass models and protein interaction problems and show that our approach outperforms state-of-the-art solvers.
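The failure mode being fixed is easy to exhibit on a two-variable nonsmooth function. The example below is our own classroom-style construction (the function and the stuck point are chosen by hand; the paper finds descent directions by optimizing over the subdifferential rather than by inspection):

```python
# f(x, y) = |x - y| + (x + y)^2 is convex but nonsmooth. At (0.2, 0.2),
# neither coordinate can be improved alone, so block coordinate descent
# stalls there -- yet the joint direction (-1, -1) still descends toward
# the true minimum f(0, 0) = 0.
def f(x, y):
    return abs(x - y) + (x + y) ** 2

x0 = y0 = 0.2
steps = [s * 1e-3 for s in range(-50, 51)]

# Coordinate-wise optimality: small moves in either coordinate alone never help.
assert all(f(x0 + d, y0) >= f(x0, y0) - 1e-12 for d in steps)
assert all(f(x0, y0 + d) >= f(x0, y0) - 1e-12 for d in steps)

# ...but a joint step along (-1, -1) strictly decreases the objective.
assert f(x0 - 0.1, y0 - 0.1) < f(x0, y0)
```

The |x - y| kink is the analogue of the nonsmooth agreement terms in the MAP dual: per-block moves see a flat valley, while a direction from the full subdifferential escapes it.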
Global interactions in random field models: A potential function ensuring connectedness
SIAM J. Img. Sci.
Cited by 10 (0 self)
Markov random field (MRF) models, including conditional random field models, are popular in computer vision. However, in order to be computationally tractable, they are limited to incorporating only local interactions and cannot model global properties such as connectedness, which is a potentially useful high-level prior for object segmentation. In this work, we overcome this limitation by deriving a potential function that forces the output labeling to be connected and that can naturally be used in the framework of recent maximum a posteriori (MAP) MRF linear program (LP) relaxations. Using techniques from polyhedral combinatorics, we show that a provably strong approximation to the MAP solution of the resulting MRF can still be found efficiently by solving a sequence of max-flow problems. The efficiency of the inference procedure also allows us to learn the parameters of an MRF with global connectivity potentials by means of a cutting plane algorithm. We experimentally evaluate our algorithm on both synthetic data and on the challenging image segmentation task of the PASCAL Visual Object Classes 2008 data set. We show that in both cases the addition of a connectedness prior significantly reduces the segmentation error.
Key words: Markov random fields, potential functions, large cliques, high-arity interactions
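The global property the potential encodes can be checked directly for any candidate labeling. This BFS sketch is our own illustration (function name and grids are hypothetical), showing the hard constraint the prior softens into a potential: the foreground must form a single 4-connected component.

```python
from collections import deque

def is_connected(mask):
    """True iff the 1-labeled cells of a 2-D 0/1 grid form one
    4-connected component (an empty foreground counts as connected)."""
    fg = {(r, c) for r, row in enumerate(mask)
          for c, v in enumerate(row) if v}
    if not fg:
        return True
    start = next(iter(fg))
    seen, queue = {start}, deque([start])
    while queue:                     # standard BFS over foreground cells
        r, c = queue.popleft()
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nb in fg and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return len(seen) == len(fg)

assert is_connected([[1, 1, 0],
                     [0, 1, 0],
                     [0, 1, 1]])     # one connected snake: allowed
assert not is_connected([[1, 0, 1],
                         [0, 0, 0],
                         [0, 0, 0]])  # two islands: forbidden by the prior
```

A check like this can serve as the separation oracle in the cutting-plane learning loop the abstract describes: whenever inference proposes a disconnected labeling, the violated connectivity constraint is added.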