Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
, 2010
"... ..."
(Show Context)
Branch-and-price: Column generation for solving huge integer programs
OPER. RES.
, 1998
"... We discuss formulations of integer programs with a huge number of variables and their solution by column generation methods, i.e., implicit pricing of nonbasic variables to generate new columns or to prove LP optimality at a node of the branchandbound tree. We present classes of models for which t ..."
Abstract

Cited by 347 (13 self)
 Add to MetaCart
(Show Context)
We discuss formulations of integer programs with a huge number of variables and their solution by column generation methods, i.e., implicit pricing of nonbasic variables to generate new columns or to prove LP optimality at a node of the branch-and-bound tree. We present classes of models for which this approach decomposes the problem, provides tighter LP relaxations, and eliminates symmetry. We then discuss computational issues and implementation of column generation, branch-and-bound algorithms, including special branching rules and efficient ways to solve the LP relaxation. We also discuss the relationship with Lagrangian duality.
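For readers who want to see the mechanics of the column generation loop the abstract refers to, here is a minimal, self-contained Python sketch on a toy cutting-stock LP (restricted master plus a knapsack pricing subproblem). The instance, the use of scipy, and all names are illustrative assumptions, not code from the cited paper.

```python
# Minimal column generation sketch on a toy cutting-stock LP: restricted master LP
# plus implicit pricing of new columns, in the spirit of the abstract above.
# The instance and the use of scipy are illustrative assumptions, not the paper's code.
import numpy as np
from scipy.optimize import linprog

W = 10                              # roll width
widths = np.array([3, 4, 5])        # item widths
demand = np.array([30, 20, 15])     # demand per item

# Start with trivial patterns: one item of each type per roll.
patterns = [np.eye(len(widths), dtype=int)[i] for i in range(len(widths))]

def solve_master(patterns):
    """Restricted master LP: min sum(x) s.t. sum_p x_p * pattern_p >= demand, x >= 0."""
    A = np.array(patterns).T                      # rows = items, cols = patterns
    res = linprog(c=np.ones(len(patterns)),
                  A_ub=-A, b_ub=-demand,          # A x >= d written as -A x <= -d
                  bounds=[(0, None)] * len(patterns), method="highs")
    duals = -res.ineqlin.marginals                # dual prices pi_i >= 0 on the demands
    return res, duals

def price(duals):
    """Pricing: unbounded knapsack max pi.y s.t. widths.y <= W, by dynamic programming."""
    best = np.zeros(W + 1)
    choice = [-1] * (W + 1)
    for c in range(1, W + 1):
        for i, w in enumerate(widths):
            if w <= c and best[c - w] + duals[i] > best[c]:
                best[c], choice[c] = best[c - w] + duals[i], i
    y, c = np.zeros(len(widths), dtype=int), W
    while c > 0 and choice[c] >= 0:               # reconstruct the best pattern
        y[choice[c]] += 1
        c -= widths[choice[c]]
    return y, best[W]

while True:
    res, duals = solve_master(patterns)
    y, value = price(duals)
    if value <= 1 + 1e-9:                         # reduced cost 1 - pi.y >= 0: LP optimal
        break
    patterns.append(y)                            # add the improving column

print("LP bound on rolls needed:", res.fun)
```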
Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
 IEEE TRANSACTIONS ON AUTOMATIC CONTROL
, 2010
"... The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. It arises in various application domains, including distributed tracking and localization, multiagen ..."
Abstract

Cited by 92 (12 self)
 Add to MetaCart
The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. It arises in various application domains, including distributed tracking and localization, multiagent coordination, estimation in sensor networks, and largescale machine learning. We develop and analyze distributed algorithms based on dual subgradient averaging, and we provide sharp bounds on their convergence rates as a function of the network size and topology. Our analysis allows us to clearly separate the convergence of the optimization algorithm itself and the effects of communication dependent on the network structure. We show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network and confirm this prediction’s sharpness both by theoretical lower bounds and simulations for various networks. Our approach includes the cases of deterministic optimization and communication as well as problems with stochastic optimization and/or communication.
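A minimal numerical sketch of the dual subgradient averaging idea described above, on a ring of nodes with quadratic local objectives; the ring network, step sizes, and proximal function are illustrative assumptions rather than the paper's experimental setup.

```python
# Toy sketch of distributed dual subgradient averaging: each node mixes its neighbors'
# dual states with a doubly stochastic matrix, adds its local subgradient, and takes a
# proximal step. The ring topology and quadratic objectives are illustrative assumptions.
import numpy as np

n, T = 8, 2000
a = np.arange(n, dtype=float)          # local targets: f_i(x) = 0.5 * (x - a_i)^2
# Doubly stochastic mixing matrix for a ring: each node averages with its two neighbors.
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25

z = np.zeros(n)                        # dual (averaged subgradient) state per node
x = np.zeros(n)                        # primal iterate per node
xbar = np.zeros(n)                     # running average of the iterates (what the theory bounds)
for t in range(1, T + 1):
    g = x - a                          # local subgradients of f_i at x_i
    z = P @ z + g                      # mix neighbors' dual states, add local subgradient
    alpha = 1.0 / np.sqrt(t)           # diminishing step size
    x = -alpha * z                     # proximal step with psi(x) = 0.5 * x^2, unconstrained
    xbar += (x - xbar) / t

print("running-average estimate per node:", np.round(xbar, 3))
print("minimizer of the global objective:", a.mean())
```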
On Dual Decomposition and Linear Programming Relaxations for Natural Language Processing
 In Proc. EMNLP
, 2010
"... This paper introduces dual decomposition as a framework for deriving inference algorithms for NLP problems. The approach relies on standard dynamicprogramming algorithms as oracle solvers for subproblems, together with a simple method for forcing agreement between the different oracles. The approa ..."
Abstract

Cited by 75 (4 self)
 Add to MetaCart
(Show Context)
This paper introduces dual decomposition as a framework for deriving inference algorithms for NLP problems. The approach relies on standard dynamic-programming algorithms as oracle solvers for subproblems, together with a simple method for forcing agreement between the different oracles. The approach provably solves a linear programming (LP) relaxation of the global inference problem. It leads to algorithms that are simple, in that they use existing decoding algorithms; efficient, in that they avoid exact algorithms for the full model; and often exact, in that empirically they often recover the correct solution in spite of using an LP relaxation. We give experimental results on two problems: 1) the combination of two lexicalized parsing models; and 2) the combination of a lexicalized parsing model and a trigram part-of-speech tagger.
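The agreement-forcing mechanism can be illustrated with a tiny Python sketch: two exact "oracles" maximize their own scores over binary vectors, and subgradient updates on the dual vector push them to agree. The scores and step sizes are made up for illustration; the oracles only stand in for the decoding algorithms used in the paper.

```python
# Toy dual decomposition sketch: maximize theta1.y + theta2.z subject to y = z by
# Lagrangian relaxation of the agreement constraint. The data are made-up illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 6
theta1 = rng.normal(size=n)            # scores used by oracle 1
theta2 = rng.normal(size=n)            # scores used by oracle 2

def oracle(scores):
    """Exact maximizer of scores . y over y in {0,1}^n (stands in for a decoder)."""
    return (scores > 0).astype(float)

u = np.zeros(n)                        # dual variables enforcing y = z
for k in range(1, 200):
    y = oracle(theta1 + u)             # subproblem 1: max (theta1 + u) . y
    z = oracle(theta2 - u)             # subproblem 2: max (theta2 - u) . z
    if np.array_equal(y, z):           # agreement certifies optimality for the joint problem
        break
    u -= (1.0 / k) * (y - z)           # subgradient step on the dual

print("agreed solution:" if np.array_equal(y, z) else "no agreement yet:", y, z)
```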
Column Generation
 CONTRIBUTED TO THE WILEY ENCYCLOPEDIA OF OPERATIONS RESEARCH AND MANAGEMENT SCIENCE (EORMS)
, 2010
"... Column generation is an indispensable tool in computational optimization to solve a mathematical program by iteratively adding the variables of the model. Even though the method is simple in theory there are many algorithmic choices and we discuss the most common ones. Particular emphasis in put on ..."
Abstract

Cited by 75 (3 self)
 Add to MetaCart
Column generation is an indispensable tool in computational optimization to solve a mathematical program by iteratively adding the variables of the model. Even though the method is simple in theory, there are many algorithmic choices, and we discuss the most common ones. Particular emphasis is put on the dual interpretation, relating column generation to Lagrangian relaxation and cutting plane algorithms, which reveals several critical issues such as the need for dual variable stabilization techniques. We conclude with some advice for computer implementations.
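The stabilization issue mentioned in the abstract can be made concrete with one common formulation (the notation here is illustrative, not necessarily the one used in the article): in the dual of the restricted master, deviation of the duals π from a stability center π̄ is penalized,

```latex
% One common stabilization of the dual restricted master (illustrative notation):
% deviations of the duals \pi from a stability center \bar{\pi} are penalized.
\max_{\pi \ge 0} \;\; b^{\top}\pi \;-\; \gamma \,\lVert \pi - \bar{\pi} \rVert_{1}
\qquad \text{s.t.} \qquad a_j^{\top}\pi \le c_j \quad \forall\, j \in J'
```

where J' indexes the columns generated so far, π̄ is typically the best dual point found so far, and γ > 0 damps the oscillation of the duals between pricing rounds.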
Convex Nondifferentiable Optimization: A Survey Focussed On The Analytic Center Cutting Plane Method.
, 1999
"... We present a survey of nondifferentiable optimization problems and methods with special focus on the analytic center cutting plane method. We propose a selfcontained convergence analysis, that uses the formalism of the theory of selfconcordant functions, but for the main results, we give direct pr ..."
Abstract

Cited by 70 (2 self)
 Add to MetaCart
We present a survey of nondifferentiable optimization problems and methods with special focus on the analytic center cutting plane method. We propose a self-contained convergence analysis that uses the formalism of the theory of self-concordant functions, but for the main results we give direct proofs based on the properties of the logarithmic function. We also provide an in-depth analysis of two extensions that are very relevant to practical problems: the case of multiple cuts and the case of deep cuts. We further examine extensions to problems including feasible sets partially described by an explicit barrier function, and to the case of nonlinear cuts. Finally, we review several implementation issues and discuss some applications.
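A compact Python sketch of the analytic-center cutting-plane idea surveyed above: subgradient cuts shrink a localization polyhedron, and each query point is an approximate analytic center computed by damped Newton steps on the log-barrier. The test function, box, and tolerances are illustrative assumptions, not the survey's algorithm verbatim.

```python
# Minimal analytic-center cutting-plane sketch: query points are approximate analytic
# centers of a shrinking localization polyhedron. Illustrative only.
import numpy as np

def analytic_center(A, b, x0, newton_iters=50):
    """Minimize -sum(log(b - A x)) starting from a strictly feasible x0 (damped Newton)."""
    def phi(x):
        s = b - A @ x
        return np.inf if np.any(s <= 0) else -np.sum(np.log(s))
    x = x0.copy()
    for _ in range(newton_iters):
        s = b - A @ x
        g = A.T @ (1.0 / s)                       # barrier gradient
        H = A.T @ ((1.0 / s**2)[:, None] * A)     # barrier Hessian
        dx = -np.linalg.solve(H, g)
        t = 1.0
        while t > 1e-12 and phi(x + t * dx) > phi(x):   # backtrack: stay feasible, decrease
            t *= 0.5
        x = x + t * dx
    return x

def accpm(f, subgrad, lo, hi, iters=40):
    """Minimize a convex f over the box [lo, hi] using analytic-center query points."""
    d = len(lo)
    A = np.vstack([np.eye(d), -np.eye(d)])        # box written as A x <= b
    b = np.concatenate([hi, -lo])
    x = (lo + hi) / 2.0
    best_x, best_f = x.copy(), f(x)
    for _ in range(iters):
        x = analytic_center(A, b, x)
        fx, g = f(x), subgrad(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
        if np.allclose(g, 0):
            break
        A = np.vstack([A, g])                     # cut g.(y - x) <= 0 removes worse points
        b = np.append(b, g @ x + 1e-4)            # small slack keeps x strictly feasible
    return best_x, best_f

c = np.array([1.0, -2.0])
x_star, f_star = accpm(lambda x: np.abs(x - c).sum(),      # nonsmooth test function
                       lambda x: np.sign(x - c),
                       lo=np.array([-4.0, -4.0]), hi=np.array([4.0, 4.0]))
print("best point:", np.round(x_star, 3), "value:", round(float(f_star), 4))
```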
Sequential and parallel algorithms for mixed packing and covering
 IN 42ND ANNUAL IEEE SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE
, 2001
"... We describe sequential and parallel algorithms that approximately solve linear programs with no negative coefficients (a.k.a. mixed packing and covering problems). For explicitly given problems, our fastest sequential algorithm returns a solution satisfying all constraints within a ¦ ¯ factor in Ç ..."
Abstract

Cited by 67 (6 self)
 Add to MetaCart
We describe sequential and parallel algorithms that approximately solve linear programs with no negative coefficients (a.k.a. mixed packing and covering problems). For explicitly given problems, our fastest sequential algorithm returns a solution satisfying all constraints within a 1 ± ε factor in O(md log(m)/ε²) time, where m is the number of constraints and d is the maximum number of constraints any variable appears in. Our parallel algorithm runs in time polylogarithmic in the input size times ε⁻⁴ and uses a total number of operations comparable to the sequential algorithm. The main contribution is that the algorithms solve mixed packing and covering problems (in contrast to pure packing or pure covering problems, which have only "≤" or only "≥" inequalities, but not both) and run in time independent of the so-called width of the problem.
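To make the problem class concrete, here is a tiny made-up mixed packing and covering LP (nonnegative coefficients, with both ≤ and ≥ constraints) solved exactly with an off-the-shelf LP solver. The cited paper's point is fast approximate algorithms whose running time does not depend on the LP's width; this snippet is only an illustrative baseline showing what such instances look like.

```python
# A made-up mixed packing/covering instance solved exactly with scipy's LP solver,
# just to make the problem class concrete. Illustrative baseline only.
import numpy as np
from scipy.optimize import linprog

# Packing constraints P x <= p, covering constraints C x >= c, x >= 0, all coefficients >= 0.
P = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0]])
p = np.array([10.0, 12.0])
C = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 2.0]])
c = np.array([4.0, 5.0])

# Feasibility version: find any x >= 0 with P x <= p and C x >= c (constant objective).
res = linprog(c=np.zeros(3),
              A_ub=np.vstack([P, -C]),            # C x >= c rewritten as -C x <= -c
              b_ub=np.concatenate([p, -c]),
              bounds=[(0, None)] * 3, method="highs")

print("feasible point:", np.round(res.x, 3) if res.success else None)
print("packing slack:", np.round(p - P @ res.x, 3))
print("covering surplus:", np.round(C @ res.x - c, 3))
```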
Massive data discrimination via linear support vector machines
Optimization Methods and Software 13, 1–10
, 2000
"... ..."
(Show Context)
Permuting Sparse Rectangular Matrices into Block-Diagonal Form
 SIAM Journal on Scientific Computing
, 2002
"... We investigate the problem of permuting a sparse rectangular matrix into block diagonal form. Block diagonal form of a matrix grants an inherent parallelism for solving the deriving problem, as recently investigated in the context of mathematical programming, LU factorization and QR factorization. W ..."
Abstract

Cited by 56 (18 self)
 Add to MetaCart
(Show Context)
We investigate the problem of permuting a sparse rectangular matrix into block-diagonal form. Block-diagonal form of a matrix grants an inherent parallelism for solving the deriving problem, as recently investigated in the context of mathematical programming, LU factorization and QR factorization. We propose bipartite graph and hypergraph models to represent the nonzero structure of a matrix, which reduce the permutation problem to those of graph partitioning by vertex separator and hypergraph partitioning, respectively. Our experiments on a wide range of matrices, using the state-of-the-art graph and hypergraph partitioning tools MeTiS and PaToH, revealed that the proposed methods yield very effective solutions both in terms of solution quality and runtime.
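The modeling step described in the abstract (turning the matrix's nonzero structure into a bipartite graph and into a column-net hypergraph) can be sketched in a few lines of Python; the actual partitioning would be delegated to external tools such as MeTiS or PaToH. The matrix and the data structures below are illustrative assumptions.

```python
# Sketch of the modeling step only: build the bipartite graph (row and column vertices,
# one edge per nonzero) and the column-net hypergraph (rows as vertices, columns as nets)
# from a sparse matrix. Partitioning itself would be done by tools such as MeTiS or PaToH.
import numpy as np
from scipy.sparse import csc_matrix

A = csc_matrix(np.array([[1, 0, 2, 0],
                         [0, 3, 0, 0],
                         [4, 0, 5, 6],
                         [0, 7, 0, 8]], dtype=float))
m, n = A.shape

# Bipartite graph: row vertices 0..m-1, column vertices m..m+n-1, one edge per nonzero.
rows, cols = A.nonzero()
bipartite_edges = [(int(r), int(m + c)) for r, c in zip(rows, cols)]

# Column-net hypergraph: column j is a net whose pins are the rows with a nonzero in column j.
nets = {j: A.indices[A.indptr[j]:A.indptr[j + 1]].tolist() for j in range(n)}

print("bipartite edges:", bipartite_edges)
print("column nets (pins):", nets)
```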
On Augmented Lagrangian Decomposition Methods For Multistage Stochastic Programs
, 1994
"... A general decomposition framework for large convex optimization problems based on augmented Lagrangians is described. The approach is then applied to multistage stochastic programming problems in two different ways: by decomposing the problem into scenarios or decomposing it into nodes corresponding ..."
Abstract

Cited by 56 (4 self)
 Add to MetaCart
A general decomposition framework for large convex optimization problems based on augmented Lagrangians is described. The approach is then applied to multistage stochastic programming problems in two different ways: by decomposing the problem into scenarios or decomposing it into nodes corresponding to stages. In both cases the method has favorable convergence properties and a structure which makes it convenient for parallel computing environments. Keywords: Stochastic Programming, Decomposition, Augmented Lagrangian, Jacobi Method, Parallel Computation.
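A toy sketch of scenario decomposition with an augmented Lagrangian in the spirit of the abstract: the scenario subproblems are solved independently (and could be solved in parallel), while multipliers and a quadratic penalty pull the per-scenario copies of the first-stage decision toward a common value. The two-scenario quadratic instance and the closed-form updates are illustrative assumptions, not the paper's method verbatim.

```python
# Toy scenario decomposition with an augmented Lagrangian on a two-scenario quadratic
# problem: min over z of sum_s prob_s * (z - a_s)^2, written with per-scenario copies x_s
# and the nonanticipativity constraint x_s = z. Illustrative sketch only.
import numpy as np

prob = np.array([0.4, 0.6])            # scenario probabilities
a = np.array([1.0, 5.0])               # scenario data: f_s(x) = prob_s * (x - a_s)^2
rho = 2.0                              # augmented Lagrangian penalty parameter

x = np.zeros(2)                        # per-scenario copies of the first-stage decision
lam = np.zeros(2)                      # multipliers for the constraints x_s = z
z = 0.0
for _ in range(200):
    # Scenario subproblems, solvable independently (hence in parallel): minimize
    #   prob_s*(x_s - a_s)^2 + lam_s*(x_s - z) + (rho/2)*(x_s - z)^2,
    # which has the closed-form minimizer below.
    x = (2 * prob * a + rho * z - lam) / (2 * prob + rho)
    z = np.mean(x + lam / rho)          # coordination step: update the common first-stage value
    lam += rho * (x - z)                # multiplier update enforcing x_s = z

print("consensus first-stage value z:", round(float(z), 4))
print("analytic optimum sum_s prob_s * a_s:", float(prob @ a))
```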