Results 1–10 of 13
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
, 2010
"... ..."
Numerical solution of saddle point problems
 Acta Numerica
, 2005
"... Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has b ..."
Abstract

Cited by 320 (25 self)
Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for solving systems of this type. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.
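As an illustration of the kind of system this survey treats, here is a minimal pure-Python sketch of the classical Schur-complement approach for a small saddle point system; the matrix and right-hand-side values are invented for illustration, and a diagonal (1,1) block is assumed so that its inverse is trivial:

```python
# Solve a 2x2-block saddle point system
#   [A  B^T] [x]   [f]
#   [B   0 ] [y] = [g]
# via the Schur complement S = B A^{-1} B^T. A is diagonal here, so
# A^{-1} is applied entrywise. All values are invented for illustration.

def solve_saddle_point(a_diag, B, f, g):
    """Diagonal A (list of positive entries), one constraint row B,
    right-hand sides f (vector) and g (scalar)."""
    # w = A^{-1} f
    w = [fi / ai for fi, ai in zip(f, a_diag)]
    # Schur complement S = B A^{-1} B^T (a scalar for a single constraint)
    S = sum(bi * bi / ai for bi, ai in zip(B, a_diag))
    # Solve S y = B A^{-1} f - g
    y = (sum(bi * wi for bi, wi in zip(B, w)) - g) / S
    # Back-substitute: x = A^{-1} (f - B^T y)
    x = [(fi - bi * y) / ai for fi, bi, ai in zip(f, B, a_diag)]
    return x, y

# Example: minimize x1^2 + x2^2 subject to x1 + x2 = 1
x, y = solve_saddle_point([2.0, 2.0], [1.0, 1.0], [0.0, 0.0], 1.0)
print(x, y)  # x = [0.5, 0.5], y = -1.0
```

For the large sparse problems the survey emphasizes, one would of course not form the Schur complement densely; the sketch only shows the block elimination that iterative methods and preconditioners exploit.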
On the convergence of augmented Lagrangian methods for constrained global optimization
 SIAM J. Optim
"... We analyze the local convergence rate of the augmented Lagrangian method in nonlinear semidefinite optimization. The presence of the positive semidefinite cone constraint requires extensive tools such as the singular value decomposition of matrices, an implicit function theorem for semismooth functi ..."
Abstract

Cited by 26 (9 self)
We analyze the local convergence rate of the augmented Lagrangian method in nonlinear semidefinite optimization. The presence of the positive semidefinite cone constraint requires extensive tools such as the singular value decomposition of matrices, an implicit function theorem for semismooth functions, and variational analysis of the projection operator in the symmetric matrix space. Without requiring strict complementarity, we prove that, under the constraint nondegeneracy condition and the strong second-order sufficient condition, the rate of convergence is linear and the ratio constant is proportional to 1/c, where c is the penalty parameter and exceeds a threshold c̄ > 0. Key words: augmented Lagrangian method, nonlinear semidefinite programming, rate of convergence, variational analysis.
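To make the 1/c rate concrete, here is a hedged pure-Python sketch of the augmented Lagrangian multiplier iteration on a toy one-dimensional equality-constrained problem (the problem, the penalty value c = 10, and the iteration count are all invented for illustration; the closed-form inner minimization is specific to this toy problem):

```python
# Augmented Lagrangian iteration on:
#   minimize 0.5*x^2  subject to  x - 1 = 0   (x* = 1, lambda* = -1)
# The inner minimizer of
#   L_c(x, lam) = 0.5*x^2 + lam*(x - 1) + 0.5*c*(x - 1)^2
# is available in closed form: x = (c - lam) / (1 + c).

def augmented_lagrangian(c=10.0, iters=20):
    lam = 0.0
    errors = []
    for _ in range(iters):
        x = (c - lam) / (1.0 + c)         # exact inner minimization
        lam += c * (x - 1.0)              # multiplier update
        errors.append(abs(lam - (-1.0)))  # distance to lambda* = -1
    return x, lam, errors

x, lam, errors = augmented_lagrangian()
# The multiplier error shrinks by a factor of 1/(1 + c) per step on this
# problem, consistent with a linear rate whose ratio behaves like 1/c.
print(x, lam, errors[1] / errors[0])
```

Raising c shrinks the contraction ratio, which is exactly the trade-off the convergence analyses above quantify.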
Analysis and implementation of a dual algorithm for constrained optimization
 Journal of Optimization Theory and Applications
, 1993
"... Abstract. This paper analyzes a constrained optimization algorithm that combines an unconstrained minimization scheme like the conjugate gradient method, an augmented Lagrangian, and multiplier updates to obtain global quadratic convergence. Some of the issues that we focus on are the treatment of r ..."
Abstract

Cited by 19 (3 self)
This paper analyzes a constrained optimization algorithm that combines an unconstrained minimization scheme like the conjugate gradient method, an augmented Lagrangian, and multiplier updates to obtain global quadratic convergence. Some of the issues that we focus on are the treatment of rigid constraints that must be satisfied during the iterations and techniques for balancing the error associated with constraint violation against the error associated with optimality. A preconditioner is constructed with the property that the rigid constraints are satisfied while ill-conditioning due to penalty terms is alleviated. Various numerical linear algebra techniques required for the efficient implementation of the algorithm are presented, and convergence behavior is illustrated in a series of numerical experiments.
A globally convergent Lagrangian barrier algorithm for optimization with general inequality constraints and simple bounds
 Math. of Computation
, 1997
"... We consider the global and local convergence properties of a class of Lagrangian barrier methods for solving nonlinear programming problems. In such methods, simple bound constraints may be treated separately from more general constraints. The objective and general constraint functions are combine ..."
Abstract

Cited by 11 (1 self)
We consider the global and local convergence properties of a class of Lagrangian barrier methods for solving nonlinear programming problems. In such methods, simple bound constraints may be treated separately from more general constraints. The objective and general constraint functions are combined in a Lagrangian barrier function. A sequence of such functions is approximately minimized within the domain defined by the simple bounds. Global convergence of the sequence of generated iterates to a first-order stationary point for the original problem is established. Furthermore, possible numerical difficulties associated with barrier function methods are avoided, as it is shown that a potentially troublesome penalty parameter is bounded away from zero. This paper is a companion to previous work of ours on augmented Lagrangian methods.
An algebraic analysis of a block diagonal preconditioner for saddle point problems
 SIAM J. Matrix Anal. Appl
, 2006
"... We consider a positive definite block preconditioner for solving saddle point linear systems. An approach based on augmenting the (1,1) block while keeping its condition number small is described, and algebraic analysis is performed. Ways of selecting the parameters involved are discussed, and anal ..."
Abstract

Cited by 8 (0 self)
We consider a positive definite block preconditioner for solving saddle point linear systems. An approach based on augmenting the (1,1) block while keeping its condition number small is described, and algebraic analysis is performed. Ways of selecting the parameters involved are discussed, and analytical and numerical observations are given.
Augmented Lagrangian Techniques for Solving Saddle Point Linear Systems
 SIAM J. Matrix Anal. Appl
, 2004
"... We perform an algebraic analysis of a generalization of the augmented Lagrangian method for solution of saddle point linear systems. It is shown that in cases where the (1,1) block is singular, specifically semidefinite, a lowrank perturbation that minimizes the condition number of the perturbed ma ..."
Abstract

Cited by 6 (1 self)
We perform an algebraic analysis of a generalization of the augmented Lagrangian method for the solution of saddle point linear systems. It is shown that in cases where the (1,1) block is singular, specifically semidefinite, a low-rank perturbation that minimizes the condition number of the perturbed matrix while maintaining sparsity is an effective approach. The vectors used for generating the perturbation are columns of the constraint matrix that form a small angle with the null space of the original (1,1) block. Block preconditioning techniques of a similar flavor are also discussed and analyzed, and the theoretical observations are illustrated and validated by numerical results.
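The core augmentation idea can be sketched in a few lines of pure Python: when the (1,1) block A is only semidefinite, adding a low-rank term built from the constraint matrix B can restore invertibility. The 2x2 matrices and the value of γ below are invented for illustration and are not taken from the paper:

```python
# When A is singular (semidefinite), the augmented block A + gamma * B^T B
# can be nonsingular, provided B is not orthogonal to A's null space.

def det2(M):
    """Determinant of a 2x2 matrix given as nested lists."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1.0, 0.0],
     [0.0, 0.0]]    # semidefinite: singular in the direction (0, 1)
B = [0.0, 1.0]      # constraint row aligned with A's null space
gamma = 1.0

# Augmented (1,1) block: A + gamma * B^T B (rank-one perturbation here)
A_aug = [[A[i][j] + gamma * B[i] * B[j] for j in range(2)] for i in range(2)]

print(det2(A), det2(A_aug))  # 0.0 (singular) vs 1.0 (nonsingular)
```

The paper's contribution is in choosing which columns to use and how to balance condition number against sparsity; the sketch only shows why the rank-one term removes the singularity.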
Global Linear Convergence of an Augmented Lagrangian Algorithm to Solve Convex Quadratic Optimization Problems
"... This contribution is dedicated to Claude Lemaréchal, a friend and a mountaineering companion of the second author, on the occasion of his sixtieth birthday ..."
Abstract

Cited by 4 (0 self)
This contribution is dedicated to Claude Lemaréchal, a friend and a mountaineering companion of the second author, on the occasion of his sixtieth birthday.
Convergence Analysis of the Augmented Lagrangian Method for Nonlinear Second-Order Cone Optimization Problems
, 2006
"... The paper focuses on the convergence rate of the augmented Lagrangian method for nonlinear secondorder cone optimization problems. Under a set of assumptions of sufficient conditions, including the componentwise strict complementarity condition, the constraint nondegeneracy condition and the second ..."
Abstract

Cited by 1 (0 self)
The paper focuses on the convergence rate of the augmented Lagrangian method for nonlinear second-order cone optimization problems. Under a set of sufficient conditions, including the componentwise strict complementarity condition, the constraint nondegeneracy condition, and the second-order sufficient condition, we first study some properties of the augmented Lagrangian and then show that the rate of local convergence of the augmented Lagrangian method is proportional to 1/τ, where the penalty parameter τ is not less than a threshold τ̂ > 0.
AN ITERATIVE SUBSTRUCTURING METHOD WITH LAGRANGE MULTIPLIERS
"... We consider the following Poisson model problem with the homogeneous Dirichlet boundary condition −∆u = f in Ω, u = 0 on ∂Ω, (1) where Ω is a bounded polygonal domain in R2 and f is a given function in L2(Ω). For the sake of simplicity, we assume that Ω is partitioned into two subdomains (Ωi)i=1,2 s ..."
Abstract
We consider the following Poisson model problem with the homogeneous Dirichlet boundary condition: −∆u = f in Ω, u = 0 on ∂Ω, (1) where Ω is a bounded polygonal domain in ℝ² and f is a given function in L²(Ω). For the sake of simplicity, we assume that Ω is partitioned into two subdomains Ω₁ and Ω₂ such that Ω =