Results 1–10 of 17
On Dual Decomposition and Linear Programming Relaxations for Natural Language Processing
In Proc. EMNLP, 2010
Cited by 50 (2 self)

Abstract
This paper introduces dual decomposition as a framework for deriving inference algorithms for NLP problems. The approach relies on standard dynamic-programming algorithms as oracle solvers for subproblems, together with a simple method for forcing agreement between the different oracles. The approach provably solves a linear programming (LP) relaxation of the global inference problem. It leads to algorithms that are simple, in that they use existing decoding algorithms; efficient, in that they avoid exact algorithms for the full model; and often exact, in that empirically they often recover the correct solution in spite of using an LP relaxation. We give experimental results on two problems: 1) the combination of two lexicalized parsing models; and 2) the combination of a lexicalized parsing model and a trigram part-of-speech tagger.
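As a rough illustration of the agreement mechanism this abstract describes, the following toy loop (not from the paper; the scores, step size, and coordinatewise oracles are invented for illustration) solves two subproblems independently and uses subgradient updates on Lagrange multipliers to force their solutions to agree:

```python
# Toy dual decomposition: maximize f(y) + g(z) subject to y == z over binary
# vectors. Each subproblem is solved by an independent "oracle" (here just a
# coordinatewise argmax); Lagrange multipliers u push the two solutions
# toward agreement. Illustrative sketch only; f, g, and the step size are made up.

def solve_sub(scores):
    # Oracle: pick 1 wherever the (multiplier-adjusted) score is positive.
    return [1 if s > 0 else 0 for s in scores]

def dual_decompose(f, g, step=0.05, iters=200):
    u = [0.0] * len(f)
    for _ in range(iters):
        y = solve_sub([fi + ui for fi, ui in zip(f, u)])
        z = solve_sub([gi - ui for gi, ui in zip(g, u)])
        if y == z:                # agreement certifies optimality here
            return y
        # Subgradient step on the coupling constraint y == z.
        u = [ui - step * (yi - zi) for ui, yi, zi in zip(u, y, z)]
    return y                      # may still disagree if not converged

f = [1.0, -0.4, 0.2, -1.0]
g = [-0.5, 0.9, -0.1, -0.3]
# Jointly optimal assignment: y_i = 1 iff f_i + g_i > 0.
print(dual_decompose(f, g))  # expect [1, 1, 1, 0]
```

Each oracle call stays cheap (it never sees the other model), which is the point of the decomposition: agreement is negotiated entirely through the multipliers.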
Reformulation and decomposition of integer programs
Cited by 29 (2 self)

Abstract
In this survey we examine ways to reformulate integer and mixed integer programs. Typically, but not exclusively, one reformulates so as to obtain stronger linear programming relaxations, and hence better bounds for use in a branch-and-bound based algorithm. First we cover in detail reformulations based on decomposition, such as Lagrangean relaxation, Dantzig-Wolfe and the resulting column generation and branch-and-price algorithms. This is followed by an examination of Benders-type algorithms based on projection. Finally we discuss in detail extended formulations involving additional variables that are based on problem structure. These can often be used to provide strengthened a priori formulations. Reformulations obtained by adding cutting planes in the original variables are not treated here.
The Steiner tree polytope and related polyhedra, 1994
Cited by 27 (1 self)

Abstract
We consider the vertex-weighted version of the undirected Steiner tree problem. In this problem, a cost is incurred both for the vertices and the edges present in the Steiner tree. We completely describe the associated polytope by linear inequalities when the underlying graph is series-parallel. For general graphs, this formulation can be interpreted as a (partial) extended formulation for the Steiner tree problem. By projecting this formulation, we obtain some very large classes of facet-defining valid inequalities for the Steiner tree polytope.
A Catalog of Steiner Tree Formulations, 1993
Cited by 22 (0 self)

Abstract
We present some existing and some new formulations for the Steiner tree and Steiner arborescence problems. We show the equivalence of many of these formulations. In particular, we establish the equivalence between the classical bidirected dicut relaxation and two vertex-weighted undirected relaxations. The motivation behind this study is a characterization of the feasible region of the dicut relaxation in the natural space corresponding to the Steiner tree problem.
Exact Decoding of Syntactic Translation Models through Lagrangian Relaxation
Cited by 10 (2 self)

Abstract
We describe an exact decoding algorithm for syntax-based statistical translation. The approach uses Lagrangian relaxation to decompose the decoding problem into tractable subproblems, thereby avoiding exhaustive dynamic programming. The method recovers exact solutions, with certificates of optimality, on over 97% of test examples; it has comparable speed to state-of-the-art decoders.
A Tutorial on Dual Decomposition and Lagrangian Relaxation for Inference in Natural Language Processing
Cited by 7 (1 self)

Abstract
Dual decomposition, and more generally Lagrangian relaxation, is a classical method for combinatorial optimization; it has recently been applied to several inference problems in natural language processing (NLP). This tutorial gives an overview of the technique. We describe example algorithms, formal guarantees for the method, and practical issues in implementing the algorithms. While our examples are predominantly drawn from the NLP literature, the material should be of general relevance to inference problems in machine learning. A central theme of this tutorial is that Lagrangian relaxation is naturally applied in conjunction with a broad class of combinatorial algorithms, allowing inference in models that go significantly beyond previous work on Lagrangian relaxation for inference in graphical models.
Extended Formulations for Packing and Partitioning Orbitopes
 Mathematics of Operations Research
Cited by 5 (0 self)

Abstract
We give compact extended formulations for the packing and partitioning orbitopes (with respect to the full symmetric group) described and analyzed in [6]. These polytopes are the convex hulls of all 0/1-matrices with lexicographically sorted columns and at most, resp. exactly, one 1-entry per row. They are important objects for symmetry reduction in certain integer programs. Using the extended formulations, we also derive a rather simple proof of the fact [6] that basically shifted-column inequalities suffice in order to describe those orbitopes linearly.
The mixing set with divisible capacities: a simple approach, 2008
Cited by 3 (0 self)

Abstract
We give a simple algorithm for linear optimization over the mixing set with divisible capacities, and derive a compact extended formulation from such an algorithm. The main idea is to apply a suitable unimodular transformation to obtain an equivalent problem that is easier to analyze.
MAP Inference in Chains using Column Generation
Cited by 2 (0 self)

Abstract
Linear chains and trees are basic building blocks in many applications of graphical models, and they admit simple exact maximum a posteriori (MAP) inference algorithms based on message passing. However, in many cases this computation is prohibitively expensive, due to quadratic dependence on variables' domain sizes. The standard algorithms are inefficient because they compute scores for hypotheses for which there is strong negative local evidence. For this reason there has been significant previous interest in beam search and its variants; however, these methods provide only approximate results. This paper presents new exact inference algorithms based on the combination of column generation and precomputed bounds on terms of the model's scoring function. While we do not improve worst-case performance, our method substantially speeds real-world, typical-case inference in chains and trees. Experiments show our method to be twice as fast as exact Viterbi for Wall Street Journal part-of-speech tagging and over thirteen times faster for a joint part-of-speech and named-entity-recognition task. Our algorithm is also extendable to new techniques for approximate inference, to faster 0/1 loss oracles, and new opportunities for connections between inference and learning. We encourage further exploration of high-level reasoning about the optimization problem implicit in dynamic programs.
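For reference, the exact baseline this abstract compares against can be sketched as the standard Viterbi recursion over a chain; the k-way max inside the loop is the quadratic dependence on the domain size k that the paper's column-generation method attacks. The scores below are invented for illustration, not taken from the paper:

```python
# Exact MAP in a linear chain via the Viterbi recursion: O(n * k^2) for a
# chain of length n with k states per variable.
# unary[t][s]: score of state s at position t; pair[r][s]: transition r -> s.

def viterbi(unary, pair):
    n, k = len(unary), len(unary[0])
    score = list(unary[0])
    back = []
    for t in range(1, n):
        prev, score, bp = score, [], []
        for s in range(k):
            # k-way max per (position, state): the k^2-per-position bottleneck.
            r = max(range(k), key=lambda r: prev[r] + pair[r][s])
            bp.append(r)
            score.append(prev[r] + pair[r][s] + unary[t][s])
        back.append(bp)
    # Backtrack from the best final state.
    s = max(range(k), key=lambda i: score[i])
    path = [s]
    for bp in reversed(back):
        s = bp[s]
        path.append(s)
    return path[::-1]

unary = [[0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]   # two states, three positions
pair = [[0.3, 0.0], [0.0, 0.3]]                # slight preference for staying
print(viterbi(unary, pair))  # expect [1, 0, 1]: unary evidence outweighs transitions
```

Beam search prunes the inner max and loses exactness; the paper's approach instead keeps exactness while skipping hypotheses whose precomputed bounds rule them out.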
Column Generation for Extended Formulations, 2013
Cited by 1 (0 self)

Abstract
Working in an extended variable space allows one to develop tighter reformulations for mixed integer programs. However, the size of the extended formulation rapidly grows too large for direct treatment by a MIP solver. One can then work with inner approximations, defined and improved by dynamically generating variables and constraints. When the extended formulation stems from subproblem reformulations, one can implement column generation for the extended formulation using a Dantzig-Wolfe decomposition paradigm. Pricing subproblem solutions are expressed in the variables of the extended formulation and added to the current restricted version of the extended formulation, along with the subproblem constraints that are active for those solutions. This so-called “column-and-row generation” procedure is revisited here in a unifying presentation that generalizes the column generation algorithm and extends to the case of working with an approximate extended formulation. The interest of the approach is evaluated numerically on machine scheduling, bin packing, generalized assignment, and multi-echelon lot-sizing problems. We compare a direct handling of the extended formulation, a standard column generation approach, and the “column-and-row generation” procedure, highlighting a key benefit of the latter: lifting pricing problem solutions into the space of the extended formulation permits their recombination into new subproblem solutions and results in faster convergence.