Results 1–10 of 22
Tighter Relaxations for MAP-MRF Inference: A Local Primal-Dual Gap based Separation Algorithm
Abstract

Cited by 15 (1 self)
We propose an efficient and adaptive method for MAP-MRF inference that provides increasingly tighter upper and lower bounds on the optimal objective. Similar to Sontag et al. (2008b), our method starts by solving the first-order LOCAL(G) linear programming relaxation. This is followed by an adaptive tightening of the relaxation where we incrementally add higher-order interactions to enforce proper marginalization over groups of variables. Computing the best interaction to add is an NP-hard problem. We show that good solutions to this problem can be readily obtained from “local primal-dual gaps” given the current primal solution and a dual reparameterization vector. This is not only extremely efficient, but in contrast to previous approaches, also allows us to search over prohibitively large sets of candidate interactions to add. We demonstrate the superiority of our approach on MAP-MRF inference problems encountered in computer vision.
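The objective in question is the standard pairwise MRF energy, whose optimum the LOCAL(G) relaxation and its tightenings bracket with upper and lower bounds. As a toy illustration only (the potentials and graph below are invented, and brute-force enumeration is not the paper's algorithm; it is feasible only because the model is tiny), the exact MAP energy can be computed as follows:

```python
# Toy MAP-MRF energy minimization by exhaustive enumeration.
# Invented 3-node binary chain; NOT the paper's separation algorithm.
import itertools

# Unary potentials theta_i(x_i), one dict per node
unary = [
    {0: 0.0, 1: 1.5},   # node 0 prefers label 0
    {0: 0.8, 1: 0.2},   # node 1 prefers label 1
    {0: 0.5, 1: 0.6},
]
# Pairwise potentials theta_ij(x_i, x_j): Potts-style smoothing cost
pairwise = {
    (0, 1): lambda a, b: 0.0 if a == b else 1.0,
    (1, 2): lambda a, b: 0.0 if a == b else 1.0,
}

def energy(x):
    """Total energy of a joint labelling x (lower is better for MAP)."""
    e = sum(unary[i][x[i]] for i in range(len(x)))
    e += sum(f(x[i], x[j]) for (i, j), f in pairwise.items())
    return e

# Exhaustive minimization is exponential in general -- which is exactly
# why LP relaxations such as LOCAL(G) are used instead.
best = min(itertools.product([0, 1], repeat=3), key=energy)
print(best, round(energy(best), 3))  # → (0, 0, 0) 1.3
```

Any relaxation of the kind described above returns a lower bound on this exact value, and any integral labelling gives an upper bound.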
Automorphism groups of graphical models and lifted variational inference
Abstract

Cited by 13 (1 self)
Using the theory of group actions, we first introduce the concept of the automorphism group of an exponential family or a graphical model, thus formalizing the general notion of symmetry of a probabilistic model. This automorphism group provides a precise mathematical framework for lifted inference in the general exponential family. Its group action partitions the set of random variables and feature functions into equivalence classes (called orbits) having identical marginals and expectations. The inference problem is then effectively reduced to computing marginals or expectations for each class, avoiding the need to deal with each individual variable or feature. We demonstrate the usefulness of this general framework in lifting two classes of variational approximation for MAP inference: local LP relaxation and local LP relaxation with cycle constraints; the latter yields the first lifted inference method that operates on a bound tighter than local constraints. Initial experimental results demonstrate that lifted MAP inference with cycle constraints achieves state-of-the-art performance, obtaining much better objective function values than the local approximation while remaining relatively efficient.
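As a rough illustration of the orbit idea (not the paper's construction; the permutation generators below are invented), the partition of variable indices into orbits under a group given by permutation generators can be computed by a simple closure, after which a lifted method needs only one marginal per orbit:

```python
# Hedged sketch: orbits of variable indices under invented permutation
# generators. Variables in one orbit share identical marginals.
def orbits(n, generators):
    """Orbits of {0..n-1} under the group generated by `generators`
    (each generator is a length-n permutation given as a list)."""
    seen, result = set(), []
    for start in range(n):
        if start in seen:
            continue
        orbit, frontier = {start}, [start]
        while frontier:          # closure under all generators
            v = frontier.pop()
            for g in generators:
                w = g[v]
                if w not in orbit:
                    orbit.add(w)
                    frontier.append(w)
        seen |= orbit
        result.append(sorted(orbit))
    return result

# A 5-variable model symmetric under swapping variables 0,1 and under
# cyclically shifting variables 2,3,4 (both generators are made up):
swap01 = [1, 0, 2, 3, 4]
cycle234 = [0, 1, 3, 4, 2]
print(orbits(5, [swap01, cycle234]))  # → [[0, 1], [2, 3, 4]]
```

Here lifted inference would compute two marginals instead of five, one per orbit.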
Partial Optimality via Iterative Pruning for the Potts Model
Abstract

Cited by 6 (2 self)
We propose a novel method to obtain a part of an optimal non-relaxed integral solution for energy minimization problems with Potts interactions, also known as the minimal partition problem. The method empirically outperforms previous approaches such as MQPBO and Kovtun’s method in most of our test instances, and especially in hard ones. As a starting point our approach uses the solution of a commonly accepted convex relaxation of the problem. This solution is then iteratively pruned until our criterion for partial optimality is satisfied. Due to its generality, our method can employ any solver for the considered relaxed problem.
Tighter Linear Program Relaxations for High Order Graphical Models
Abstract

Cited by 6 (0 self)
Graphical models with High Order Potentials (HOPs) have received considerable interest in recent years. While there are a variety of approaches to inference in these models, nearly all of them amount to solving a linear program (LP) relaxation with unary consistency constraints between the HOP and the individual variables. In many cases the resulting relaxations are loose, and in these cases the results of inference can be poor. It is thus desirable to look for more accurate ways of performing inference. In this work, we study the LP relaxations that result from enforcing additional consistency constraints between the HOP and the rest of the model. We address theoretical questions about the strength of the resulting relaxations compared to the relaxations that arise in standard approaches, and we develop practical and efficient message passing algorithms for optimizing the LPs. Empirically, we show that the LPs with additional consistency constraints lead to more accurate inference on some challenging problems that include a combination of low order and high order terms.
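For concreteness, the unary consistency constraints referred to above are commonly written as follows (a standard formulation, not necessarily this paper's exact notation): a HOP pseudo-marginal \(\mu_H\) over joint states \(\mathbf{y}\) of the potential's scope must agree with each unary pseudo-marginal \(\mu_i\),

```latex
\sum_{\mathbf{y}\,:\,y_i = x_i} \mu_H(\mathbf{y}) \;=\; \mu_i(x_i)
\qquad \text{for all } i \text{ in the scope of } H \text{ and all labels } x_i .
```

Tighter relaxations of the kind studied here additionally enforce agreement between \(\mu_H\) and pairwise (or larger) marginals over subsets of the HOP's scope, which shrinks the feasible polytope.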
Tightening MRF Relaxations with Planar Subproblems
Abstract

Cited by 4 (0 self)
We describe a new technique for computing lower bounds on the minimum energy configuration of a planar Markov Random Field (MRF). Our method successively adds large numbers of constraints and enforces consistency over binary projections of the original problem state space. These constraints are represented in terms of subproblems in a dual-decomposition framework that is optimized using subgradient techniques. The complete set of constraints we consider enforces cycle consistency over the original graph. In practice we find that the method converges quickly on most problems with the addition of a few subproblems, and outperforms existing methods for some interesting classes of hard potentials.
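In a generic dual-decomposition setup (the standard form, not this paper's specific planar subproblems), each subproblem \(s\) carries its own copy \(\mathbf{x}^s\) of the shared variables and a reparameterization \(\boldsymbol{\lambda}^s\) with \(\sum_s \boldsymbol{\lambda}^s = \mathbf{0}\), giving the lower bound

```latex
L(\boldsymbol{\lambda}) \;=\; \sum_{s} \min_{\mathbf{x}^s}
\Big[\, \theta^s(\mathbf{x}^s) + \boldsymbol{\lambda}^s \cdot \mathbf{x}^s \,\Big]
\;\le\; \min_{\mathbf{x}} E(\mathbf{x}) .
```

The bound is maximized by projected subgradient steps that push the subproblem minimizers \(\mathbf{x}^{s*}\) toward agreement, e.g. \(\lambda_i^s \leftarrow \lambda_i^s + \alpha_t \big( x_i^{s*} - \bar{x}_i \big)\) with \(\bar{x}_i\) the average of the minimizers over all subproblems containing variable \(i\). Adding a new subproblem can only tighten \(L\), which is the mechanism behind the "successively adds constraints" step above.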
Partial optimality by pruning for MAP-inference with general graphical models
In CVPR, 2014
Abstract

Cited by 3 (0 self)
We consider the energy minimization problem for undirected graphical models, also known as the MAP-inference problem for Markov random fields, which is NP-hard in general. We propose a novel polynomial-time algorithm to obtain a part of its optimal non-relaxed integral solution. Our algorithm is initialized with variables taking integral values in the solution of a convex relaxation of the MAP-inference problem, and iteratively prunes those that do not satisfy our criterion for partial optimality. We show that our pruning strategy is in a certain sense theoretically optimal. Empirically, our method also outperforms previous approaches in terms of the number of persistently labelled variables. The method is very general, as it is applicable to models with arbitrary factors of arbitrary order and can employ any solver for the considered relaxed problem. Our method’s runtime is determined by the runtime of the convex relaxation solver for the MAP-inference problem.
Neural Probabilistic Language Model for System Combination
Abstract

Cited by 3 (3 self)
This paper gives the system description of the neural probabilistic language modeling (NPLM) team of Dublin City University for our participation in the system combination task in the Second Workshop on Applying Machine Learning Techniques to Optimise the Division of Labour in Hybrid MT (ML4HMT-12). We used the information obtained by NPLM as meta information for the system combination module. For the Spanish-English data, our paraphrasing approach achieved 25.81 BLEU points, 0.19 BLEU points absolute below the standard confusion-network-based system combination. We note that our current usage of NPLM is very limited due to the difficulty in combining NPLM and system combination.
Understanding the Bethe Approximation: When and How can it go Wrong?
Abstract

Cited by 3 (3 self)
Belief propagation is a remarkably effective tool for inference, even when applied to networks with cycles. It may be viewed as a way to seek the minimum of the Bethe free energy, though with no convergence guarantee in general. A variational perspective shows that, compared to exact inference, this minimization employs two forms of approximation: (i) the true entropy is approximated by the Bethe entropy, and (ii) the minimization is performed over a relaxation of the marginal polytope termed the local polytope. Here we explore when and how the Bethe approximation can fail for binary pairwise models by examining each aspect of the approximation, deriving results both analytically and with new experimental methods.
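For a pairwise model, the Bethe free energy mentioned above takes the standard form (sign conventions vary across papers):

```latex
F_{\mathrm{Bethe}}(b) \;=\;
\sum_{i}\sum_{x_i} b_i(x_i)\,\theta_i(x_i)
\;+\; \sum_{(i,j)\in E}\sum_{x_i,x_j} b_{ij}(x_i,x_j)\,\theta_{ij}(x_i,x_j)
\;-\; H_{\mathrm{Bethe}}(b),
```

with the Bethe entropy

```latex
H_{\mathrm{Bethe}}(b) \;=\; \sum_{(i,j)\in E} H(b_{ij}) \;-\; \sum_i (d_i - 1)\, H(b_i),
```

where \(d_i\) is the degree of node \(i\). The two approximations named in the abstract are visible here: \(H_{\mathrm{Bethe}}\) replaces the true entropy (exact only on trees), and the beliefs \(b\) range over the local polytope rather than the marginal polytope.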
A Mixtures-of-Trees Framework for Multi-Label Classification
Abstract

Cited by 1 (1 self)
We propose a new probabilistic approach for multi-label classification that aims to represent the class posterior distribution P(Y|X). Our approach uses a mixture of tree-structured Bayesian networks, which can leverage the computational advantages of conditional tree-structured models and the ability of mixtures to compensate for tree-structured restrictions. We develop algorithms for learning the model from data and for performing multi-label predictions using the learned model. Experiments on multiple datasets demonstrate that our approach outperforms several state-of-the-art multi-label classification methods.
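In a standard mixtures-of-trees formulation (a generic form, not necessarily this paper's exact parameterization), the class posterior is modeled as

```latex
P(\mathbf{Y} \mid \mathbf{X}) \;=\; \sum_{k=1}^{K} \pi_k(\mathbf{X})\, P_k(\mathbf{Y} \mid \mathbf{X}),
\qquad
P_k(\mathbf{Y} \mid \mathbf{X}) \;=\; \prod_{i} P_k\big(Y_i \mid Y_{\mathrm{pa}_k(i)}, \mathbf{X}\big),
```

where \(\mathrm{pa}_k(i)\) is the parent of label \(i\) in the \(k\)-th tree. Exact inference in each tree component is linear in the number of labels, which is the computational advantage the abstract refers to, while the mixture over \(K\) different tree structures compensates for the restriction that any single tree imposes.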