Results 1–10 of 29
Accelerated dual decomposition for MAP inference
In ICML, 2010
"... Approximate MAP inference in graphical models is an important and challenging problem for many domains including computer vision, computational biology and natural language understanding. Current stateoftheart approaches employ convex relaxations of these problems as surrogate objectives, but ..."
Abstract

Cited by 23 (1 self)
Approximate MAP inference in graphical models is an important and challenging problem for many domains including computer vision, computational biology and natural language understanding. Current state-of-the-art approaches employ convex relaxations of these problems as surrogate objectives, but only provide weak running time guarantees. In this paper, we develop an approximate inference algorithm that is both efficient and has strong theoretical guarantees. Specifically, our algorithm is guaranteed to converge to an accurate solution of the convex relaxation in O…
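The dual decomposition the abstract refers to can be illustrated with a plain (non-accelerated) projected-subgradient variant on a toy one-variable problem; the potentials below are made-up numbers, not from the paper:

```python
import numpy as np

# Toy MAP problem: minimise theta1(x) + theta2(x) over x in {0, 1, 2}.
# Dual decomposition duplicates x into two slave problems and enforces
# agreement through a Lagrange multiplier lam (all numbers illustrative).
theta1 = np.array([3.0, 1.0, 4.0])
theta2 = np.array([2.0, 2.0, 0.5])

lam = np.zeros(3)                       # multiplier on e(x1) = e(x2)
for t in range(100):
    x1 = int(np.argmin(theta1 + lam))   # slave 1 minimises theta1 + lam
    x2 = int(np.argmin(theta2 - lam))   # slave 2 minimises theta2 - lam
    if x1 == x2:                        # slaves agree: decoded MAP candidate
        break
    g = np.eye(3)[x1] - np.eye(3)[x2]   # subgradient of the concave dual
    lam += (1.0 / (t + 1)) * g          # diminishing-step dual ascent
```

The paper's contribution is to replace this slow subgradient ascent with an accelerated gradient method on a smoothed dual, which is what yields the stronger running-time guarantee.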
Energy Minimization for Linear Envelope MRFs
"... Markov random fields with higher order potentials have emerged as a powerful model for several problems in computer vision. In order to facilitate their use, we propose a new representation for higher order potentials as upper and lower envelopes of linear functions. Our representation concisely mod ..."
Abstract

Cited by 16 (5 self)
Markov random fields with higher-order potentials have emerged as a powerful model for several problems in computer vision. In order to facilitate their use, we propose a new representation for higher-order potentials as upper and lower envelopes of linear functions. Our representation concisely models several commonly used higher-order potentials, thereby providing a unified framework for minimizing the corresponding Gibbs energy functions. We exploit this framework by converting lower-envelope potentials to standard pairwise functions with the addition of a small number of auxiliary variables. This allows us to minimize energy functions with lower-envelope potentials using conventional algorithms such as BP, TRW and α-expansion. Furthermore, we show how the minimization of energy functions with upper-envelope potentials leads to a difficult min-max problem. We address this difficulty by proposing a new message passing algorithm that solves a linear programming relaxation of the problem. Although this is primarily a theoretical paper, we demonstrate the efficacy of our approach on the binary (fg/bg) segmentation problem.
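The lower-envelope-to-pairwise conversion described above can be checked by brute force on a tiny clique; the linear pieces below are illustrative assumptions:

```python
import itertools
import numpy as np

# A lower-envelope potential psi(x) = min_k (a_k . x + b_k) over a clique
# of three binary variables. Adding one auxiliary variable z in {0, 1}
# that selects the active linear piece turns it into pairwise terms
# a[z, i] * x[i] plus a unary term b[z] on z (coefficients are made up).
a = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
b = np.array([0.5, 1.0])

def envelope(x):
    return float(np.min(a @ x + b))

def augmented(x, z):
    return float(sum(a[z, i] * x[i] for i in range(3)) + b[z])

# minimising out z recovers the envelope exactly, for every labeling
for x in itertools.product([0, 1], repeat=3):
    x = np.array(x)
    assert envelope(x) == min(augmented(x, z) for z in (0, 1))
```

Because the augmented energy contains only unary and pairwise terms, standard pairwise minimizers can be applied to it directly.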
Approximate Inference in Graphical Models using LP Relaxations
2010
"... Graphical models such as Markov random fields have been successfully applied to a wide variety of fields, from computer vision and natural language processing, to computational biology. Exact probabilistic inference is generally intractable in complex models having many dependencies between the vari ..."
Abstract

Cited by 12 (1 self)
Graphical models such as Markov random fields have been successfully applied to a wide variety of fields, from computer vision and natural language processing to computational biology. Exact probabilistic inference is generally intractable in complex models having many dependencies between the variables. We present new approaches to approximate inference based on linear programming (LP) relaxations. Our algorithms optimize over the cycle relaxation of the marginal polytope, which we show to be closely related to the first lifting of the Sherali-Adams hierarchy, and is significantly tighter than the pairwise LP relaxation. We show how to efficiently optimize over the cycle relaxation using a cutting-plane algorithm that iteratively introduces constraints into the relaxation. We provide a criterion to determine which constraints would be most helpful in tightening the relaxation, and give efficient algorithms for solving the search problem of finding the best cycle constraint to add according to this criterion.
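A minimal stdlib illustration of why the cycle relaxation is tighter (this sketches the intuition, not the paper's separation algorithm): on a binary triangle, pairwise-consistent pseudomarginals can demand a disagreement on every edge, yet no genuine joint assignment achieves more than two:

```python
import itertools

# Binary triangle: the pairwise relaxation admits pseudomarginals that put
# probability 1 on disagreement along every edge, but any genuine joint
# assignment of three binary variables disagrees on at most two of the
# three edges -- exactly the gap a cycle inequality cuts off.
edges = [(0, 1), (1, 2), (2, 0)]
max_disagreements = max(
    sum(x[i] != x[j] for i, j in edges)
    for x in itertools.product([0, 1], repeat=3))
```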
Parameter learning with truncated message-passing
In CVPR, 2011
"... Training of conditional random fields often takes the form of a doubleloop procedure with messagepassing inference in the inner loop. This can be very expensive, as the need to solve the inner loop to high accuracy can require many messagepassing iterations. This paper seeks to reduce the expense ..."
Abstract

Cited by 10 (3 self)
Training of conditional random fields often takes the form of a double-loop procedure with message-passing inference in the inner loop. This can be very expensive, as the need to solve the inner loop to high accuracy can require many message-passing iterations. This paper seeks to reduce the expense of such training by redefining the training objective in terms of the approximate marginals obtained after message passing is “truncated” to a fixed number of iterations. An algorithm is derived to efficiently compute the exact gradient of this objective. On a common pixel labeling benchmark, this procedure improves training speeds by an order of magnitude, and slightly improves inference accuracy if a very small number of message-passing iterations are used at test time.
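The truncation idea can be sketched by running parallel loopy sum-product for a fixed number of rounds K on a small cycle; the potentials are random stand-ins, and the sketch computes only the truncated beliefs, not the exact gradient derived in the paper:

```python
import numpy as np

# Loopy sum-product on a binary 3-cycle, truncated to K parallel rounds.
# truncated_bp returns the belief at node 0 -- the quantity the redefined
# training objective would be evaluated on.
np.random.seed(0)
psi = np.exp(np.random.randn(3, 2, 2))          # psi[e][x_a, x_b], toy values
edges = [(0, 1), (1, 2), (2, 0)]

def truncated_bp(K):
    m = {(i, j): np.ones(2) / 2
         for a, b in edges for (i, j) in ((a, b), (b, a))}
    for _ in range(K):
        new = {}
        for e, (a, b) in enumerate(edges):
            for (i, j) in ((a, b), (b, a)):
                inc = np.ones(2)                # product of messages into i,
                for (s, t), msg in m.items():   # excluding the one from j
                    if t == i and s != j:
                        inc = inc * msg
                pot = psi[e] if (i, j) == (a, b) else psi[e].T
                out = pot.T @ inc               # sum out the sender's state
                new[(i, j)] = out / out.sum()
        m = new                                 # parallel (flooding) update
    b0 = np.ones(2)                             # belief at node 0
    for (s, t), msg in m.items():
        if t == 0:
            b0 = b0 * msg
    return b0 / b0.sum()
```

Because the graph has a cycle, the beliefs genuinely depend on K, which is what makes the truncated objective differ from the fixed-point one.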
Variational algorithms for marginal MAP
In UAI, 2011
"... Marginal MAP problems are notoriously difficult tasks for graphical models. We derive a general variational framework for solving marginal MAP problems, in which we apply analogues of the Bethe, treereweighted, and mean field approximations. We then derive a “mixed ” message passing algorithm and a ..."
Abstract

Cited by 10 (2 self)
Marginal MAP problems are notoriously difficult tasks for graphical models. We derive a general variational framework for solving marginal MAP problems, in which we apply analogues of the Bethe, tree-reweighted, and mean field approximations. We then derive a “mixed” message passing algorithm and a convergent alternative using CCCP to solve the BP-type approximations. Theoretically, we give conditions under which the decoded solution is a global or local optimum, and obtain novel upper bounds on solutions. Experimentally we demonstrate that our algorithms outperform related approaches. We also show that EM and variational EM comprise a special case of our framework.
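For concreteness, this is the marginal MAP task itself on a tiny random joint, solved by brute force (the paper's variational machinery replaces exactly this enumeration):

```python
import numpy as np

# Marginal MAP on a toy joint p(x0, x1, x2): maximise over the "max"
# variables (x0, x1) after summing out the "sum" variable x2. The
# distribution is random, purely for illustration.
np.random.seed(1)
p = np.random.rand(2, 2, 2)
p /= p.sum()

q = p.sum(axis=2)                                   # q(x0, x1)
b_mmap = np.unravel_index(np.argmax(q), q.shape)    # marginal MAP over (x0, x1)
x_joint = np.unravel_index(np.argmax(p), p.shape)   # joint MAP, for contrast
```

The mix of max and sum operators is what breaks the usual dynamic-programming orderings and makes the problem harder than either pure MAP or pure marginalization.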
Global interactions in random field models: A potential function ensuring connectedness
In SIAM J. Img. Sci.
"... Abstract. Markov random field (MRF) models, including conditional random field models, are popular in computer vision. However, in order to be computationally tractable, they are limited to incorporating only local interactions and cannot model global properties such as connectedness, which is a pot ..."
Abstract

Cited by 7 (0 self)
Markov random field (MRF) models, including conditional random field models, are popular in computer vision. However, in order to be computationally tractable, they are limited to incorporating only local interactions and cannot model global properties such as connectedness, which is a potentially useful high-level prior for object segmentation. In this work, we overcome this limitation by deriving a potential function that forces the output labeling to be connected and that can naturally be used in the framework of recent maximum a posteriori (MAP)-MRF linear program (LP) relaxations. Using techniques from polyhedral combinatorics, we show that a provably strong approximation to the MAP solution of the resulting MRF can still be found efficiently by solving a sequence of max-flow problems. The efficiency of the inference procedure also allows us to learn the parameters of an MRF with global connectivity potentials by means of a cutting plane algorithm. We experimentally evaluate our algorithm on both synthetic data and on the challenging image segmentation task of the PASCAL Visual Object Classes 2008 data set. We show that in both cases the addition of a connectedness prior significantly reduces the segmentation error. Key words: Markov random fields, potential functions, large cliques, high-arity interactions.
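The global property the proposed potential enforces, connectedness of the foreground labeling, can be tested directly; below is a minimal 4-connectivity check (an illustrative helper, not the paper's max-flow construction):

```python
from collections import deque
import numpy as np

# is_connected tests the property the connectedness potential enforces:
# does the foreground (label 1) of a binary labeling form a single
# 4-connected component?
def is_connected(mask):
    mask = np.asarray(mask, dtype=bool)
    fg = list(zip(*np.nonzero(mask)))
    if not fg:
        return True                     # empty foreground counts as connected
    seen, queue = {fg[0]}, deque([fg[0]])
    while queue:                        # BFS over 4-neighbours
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (r + dr, c + dc)
            if (0 <= n[0] < mask.shape[0] and 0 <= n[1] < mask.shape[1]
                    and mask[n] and n not in seen):
                seen.add(n)
                queue.append(n)
    return len(seen) == len(fg)
```

Crucially, this property couples every pixel with every other, which is why it cannot be expressed with local pairwise potentials alone.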
Collective inference for extraction MRFs coupled with symmetric clique potentials
2010
"... Many structured information extraction tasks employ collective graphical models that capture interinstance associativity by coupling them with various clique potentials. We propose tractable families of such potentials that are invariant under permutations of their arguments, and call them symmetric ..."
Abstract

Cited by 5 (1 self)
Many structured information extraction tasks employ collective graphical models that capture inter-instance associativity by coupling them with various clique potentials. We propose tractable families of such potentials that are invariant under permutations of their arguments, and call them symmetric clique potentials. We present three families of symmetric potentials: MAX, SUM, and MAJORITY. We propose cluster message passing for collective inference with symmetric clique potentials, and present message computation algorithms tailored to such potentials. Our first message computation algorithm, called α-pass, is sub-quadratic in the clique size, outputs exact messages for MAX, and computes 13/15-approximate messages for Potts, a popular member of the SUM family. Empirically, it is up to two orders of magnitude faster than existing algorithms based on graph cuts or belief propagation. Our second algorithm, based on Lagrangian relaxation, operates on MAJORITY potentials and provides close-to-exact solutions while being two orders of magnitude faster. We show that the cluster message passing framework is more principled, accurate and converges faster than competing approaches. We extend our collective inference framework to exploit associativity of more general intra-domain properties of instance labelings, which opens up interesting applications in domain adaptation. Our approach leads to significant error reduction on unseen domains without incurring any overhead of model retraining. Keywords: graphical models, collective inference, clique potentials, cluster graphs, message passing.
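Symmetry here means the clique potential depends only on label counts; a quick check with a Potts-style member of the SUM family (toy labeling, not the paper's α-pass algorithm):

```python
import itertools
from collections import Counter

# A Potts-style clique potential that counts disagreeing pairs. It is a
# function of the label histogram alone, hence symmetric: permuting the
# clique's arguments never changes its value.
def potts(labels):
    counts = Counter(labels)
    n = len(labels)
    # disagreeing unordered pairs, computable from counts alone
    return (n * (n - 1) - sum(k * (k - 1) for k in counts.values())) // 2

y = (0, 1, 1, 2)
assert all(potts(p) == potts(y) for p in itertools.permutations(y))
```

This count-only structure is what lets message computation run in sub-quadratic time instead of enumerating clique labelings.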
Efficient MRF energy minimization via adaptive diminishing smoothing
In UAI, 2012
"... We consider the linear programming relaxation of an energy minimization problem for Markov Random Fields. The dual objective of this problem can be treated as a concave and unconstrained, but nonsmooth function. The idea of smoothing the objective prior to optimization was recently proposed in a se ..."
Abstract

Cited by 5 (1 self)
We consider the linear programming relaxation of an energy minimization problem for Markov Random Fields. The dual objective of this problem can be treated as a concave and unconstrained, but non-smooth function. The idea of smoothing the objective prior to optimization was recently proposed in a series of papers. Some of them suggested decreasing the amount of smoothing (the so-called temperature) while getting closer to the optimum, but without theoretical substantiation. We propose an adaptive diminishing-smoothing algorithm based on the duality gap between relaxed primal and dual objectives, and demonstrate the efficiency of our approach with a smoothed version of the Sequential Tree-Reweighted Message Passing (TRW-S) algorithm. The strategy is applicable to other algorithms as well, avoids ad-hoc tuning of the smoothing during iterations, and provably guarantees convergence to the optimum.
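The smoothing being diminished is typically a log-sum-exp (soft-max) with temperature T; a generic sketch of the bound that licenses shrinking T as the gap closes (toy vector, not the TRW-S-specific version):

```python
import numpy as np

# smooth_max is a temperature-T log-sum-exp upper bound on max(v); the gap
# it introduces is at most T * log(len(v)), which is what justifies
# shrinking the temperature as the duality gap closes.
def smooth_max(v, T):
    v = np.asarray(v, dtype=float)
    m = v.max()
    return m + T * np.log(np.sum(np.exp((v - m) / T)))   # stable form

v = np.array([1.0, 3.0, 2.0])
for T in (1.0, 0.1, 0.01):                # diminishing temperature schedule
    gap = smooth_max(v, T) - v.max()      # smoothing-induced over-estimate
    assert 0.0 <= gap <= T * np.log(len(v)) + 1e-12
```

The trade-off the schedule manages: large T gives a smooth, fast-to-optimize objective; small T makes the smoothed optimum track the true one.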
Learning Graphical Model Parameters with Approximate Marginal Inference
"... Abstract—Likelihood basedlearning of graphical models faces challenges of computationalcomplexity and robustness to model misspecification. This paper studies methods that fit parameters directly to maximize a measure of the accuracy of predicted marginals, taking into account both model and infer ..."
Abstract

Cited by 5 (0 self)
Likelihood-based learning of graphical models faces challenges of computational complexity and robustness to model misspecification. This paper studies methods that fit parameters directly to maximize a measure of the accuracy of predicted marginals, taking into account both model and inference approximations at training time. Experiments on imaging problems suggest marginalization-based learning performs better than likelihood-based approximations on difficult problems where the model being fit is approximate in nature.
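A toy contrast for the fitting idea, on a single Bernoulli parameter (hypothetical example; in this trivial exactly-solvable case the marginal-matching fit coincides with maximum likelihood, and it is approximate inference inside the loop that makes the two objectives diverge):

```python
import numpy as np

# Fit the one parameter w of p(y=1) = sigmoid(w) by gradient descent on
# the squared error between the predicted and the empirical marginal.
ys = np.array([1, 1, 0, 1])
target = ys.mean()                        # empirical marginal = 0.75

w = 0.0
for _ in range(2000):
    prob = 1.0 / (1.0 + np.exp(-w))       # predicted marginal p(y=1)
    # gradient of (prob - target)^2 w.r.t. w, step size 0.5
    w -= 0.5 * 2.0 * (prob - target) * prob * (1.0 - prob)
```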