Results 1–10 of 41
LAGRANGE MULTIPLIERS AND OPTIMALITY
, 1993
"... Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write firstorder optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions ..."
Abstract

Cited by 89 (7 self)
Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions than equations, have demanded deeper understanding of the concept and how it fits into a larger theoretical picture. A major line of research has been the nonsmooth geometry of one-sided tangent and normal vectors to the set of points satisfying the given constraints. Another has been the game-theoretic role of multiplier vectors as solutions to a dual problem. Interpretations as generalized derivatives of the optimal value with respect to problem parameters have also been explored. Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function which allows black-and-white constraints to be replaced by penalty expressions. This paper traces such themes in the current theory of Lagrange multipliers, providing along the way a free-standing exposition of basic nonsmooth analysis as motivated by and applied to this subject.
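The classical "system of equations" view mentioned in this abstract can be made concrete with a tiny example (an illustrative problem of our own, not from the paper): minimizing a smooth objective subject to one equality constraint, the first-order conditions ∇f(x) + λ∇g(x) = 0, g(x) = 0 form a square system that, for a quadratic objective with a linear constraint, is simply linear.

```python
import numpy as np

# First-order (Lagrange) conditions for
#   minimize f(x) = x1^2 + x2^2  subject to  g(x) = x1 + x2 - 1 = 0:
#   grad f(x) + lam * grad g(x) = 0  and  g(x) = 0.
# For this quadratic/linear pair the conditions reduce to the linear system
#   [2 0 1] [x1 ]   [0]
#   [0 2 1] [x2 ] = [0]
#   [1 1 0] [lam]   [1]
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([0.0, 0.0, 1.0])
x1, x2, lam = np.linalg.solve(A, b)
print(x1, x2, lam)  # → 0.5 0.5 -1.0
```

The multiplier λ = −1 is exactly the sensitivity of the optimal value to perturbations of the constraint level, one of the interpretations the abstract alludes to.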
Parallel Variable Distribution
 SIAM Journal on Optimization
, 1994
"... We present an approach for solving optimization problems in which the variables are distributed among p processors. Each processor has primary responsibility for updating its own block of variables in parallel while allowing the remaining variables to change in a restricted fashion (e. g. along a st ..."
Abstract

Cited by 34 (5 self)
We present an approach for solving optimization problems in which the variables are distributed among p processors. Each processor has primary responsibility for updating its own block of variables in parallel while allowing the remaining variables to change in a restricted fashion (e.g., along a steepest-descent, quasi-Newton, or any arbitrary direction). This "forget-me-not" approach is a distinctive feature of our algorithm which has not been analyzed before. The parallelization step is followed by a fast synchronization step wherein the affine hull of the points computed by the parallel processors and the current point is searched for an optimal point. Convergence to a stationary point under continuous differentiability is established for the unconstrained case, as well as a linear convergence rate under the additional assumption of a Lipschitzian gradient and strong convexity. For problems constrained to lie in the Cartesian product of closed convex sets, convergence is establish...
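The two-phase structure described here can be sketched on a toy problem (an illustrative caricature with made-up step sizes, not the paper's algorithm verbatim): each "processor" takes a full step on its own block while the other variables move only along a restricted steepest-descent direction, and the synchronization phase then minimizes over the affine hull of the candidates, which for a quadratic is a small least-squares problem.

```python
import numpy as np

# Toy parallel-variable-distribution sketch on the separable quadratic
# f(x) = ||x - t||^2 with two variable blocks (illustrative only).
t = np.array([1.0, 2.0, 3.0, 4.0])
f = lambda x: float(np.sum((x - t) ** 2))
grad = lambda x: 2.0 * (x - t)

x = np.zeros(4)
blocks = [np.array([0, 1]), np.array([2, 3])]
step, restricted = 0.25, 0.025

for _ in range(5):
    g = grad(x)
    candidates = []
    for blk in blocks:
        # Parallel phase: full step on the processor's own block; the
        # remaining variables also move, but only along a restricted
        # steepest-descent direction (the "forget-me-not" feature).
        y = x - restricted * g
        y[blk] = x[blk] - step * g[blk]
        candidates.append(y)
    # Synchronization phase: minimize f over the affine hull of the
    # current point and the candidates -- a least-squares problem for
    # this quadratic objective.
    D = np.stack([y - x for y in candidates], axis=1)
    alpha, *_ = np.linalg.lstsq(D, t - x, rcond=None)
    x = x + D @ alpha

print(f(x))  # essentially 0: the synchronization step is exact here
```

For this separable quadratic the affine-hull search recovers the minimizer in one pass; in general the synchronization step only guarantees descent over the candidates.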
Global minimization using an Augmented Lagrangian method with variable lower-level constraints
, 2007
"... A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the εkglobal minimization of the Augmented Lagrangian with simple constraints, where εk → ε. Global c ..."
Abstract

Cited by 21 (1 self)
A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the εk-global minimization of the Augmented Lagrangian with simple constraints, where εk → ε. Global convergence to an ε-global minimizer of the original problem is proved. The subproblems are solved using the αBB method. Numerical experiments are presented.
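The outer iteration of an Augmented Lagrangian framework of this kind can be sketched on a one-constraint toy problem (a generic PHR-style loop of our own for illustration; the paper's method additionally requires εk-global inner solves via αBB, which we do not reproduce):

```python
import numpy as np

# Generic augmented-Lagrangian outer loop (illustrative sketch):
#   minimize f(x) = x1^2 + x2^2  subject to  h(x) = x1 + x2 - 1 = 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: x[0] + x[1] - 1.0

def solve_subproblem(lam, c):
    # Inner minimizer of L_c(x) = f(x) + lam*h(x) + (c/2)*h(x)^2.
    # By symmetry x1 = x2 = s, and stationarity gives
    #   2s + lam + c*(2s - 1) = 0  =>  s = (c - lam) / (2 + 2c).
    s = (c - lam) / (2.0 + 2.0 * c)
    return np.array([s, s])

lam, c = 0.0, 1.0
for k in range(30):
    x = solve_subproblem(lam, c)
    lam = lam + c * h(x)      # first-order multiplier update
    c = min(10.0 * c, 1e8)    # increase the penalty parameter
print(x, lam)  # approaches x = (0.5, 0.5), lam = -1
```

Here the constraint violation h(x) shrinks by roughly a factor 1/(1+c) per outer iteration, which is why driving c upward accelerates convergence of the multiplier estimate.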
Analysis and implementation of a dual algorithm for constrained optimization
 Journal of Optimization Theory and Applications
, 1993
"... Abstract. This paper analyzes a constrained optimization algorithm that combines an unconstrained minimization scheme like the conjugate gradient method, an augmented Lagrangian, and multiplier updates to obtain global quadratic convergence. Some of the issues that we focus on are the treatment of r ..."
Abstract

Cited by 19 (3 self)
Abstract. This paper analyzes a constrained optimization algorithm that combines an unconstrained minimization scheme like the conjugate gradient method, an augmented Lagrangian, and multiplier updates to obtain global quadratic convergence. Some of the issues that we focus on are the treatment of rigid constraints that must be satisfied during the iterations and techniques for balancing the error associated with constraint violation against the error associated with optimality. A preconditioner is constructed with the property that the rigid constraints are satisfied while ill-conditioning due to penalty terms is alleviated. Various numerical linear algebra techniques required for the efficient implementation of the algorithm are presented, and convergence behavior is illustrated in a series of numerical experiments.
Parallel Constraint Distribution
 SIAM Journal on Optimization
, 1991
"... . Constraints of a mathematical program are distributed among parallel processors together with an appropriately constructed augmented Lagrangian for each processor, which contains Lagrangian information on the constraints handled by the other processors. Lagrange multiplier information is then exch ..."
Abstract

Cited by 19 (6 self)
Constraints of a mathematical program are distributed among parallel processors together with an appropriately constructed augmented Lagrangian for each processor, which contains Lagrangian information on the constraints handled by the other processors. Lagrange multiplier information is then exchanged between processors. Convergence is established under suitable conditions for strongly convex quadratic programs and for general convex programs. Key words. Parallel Optimization, Augmented Lagrangians, Quadratic Programs, Convex Programs 1. Introduction. We are concerned with the problem

    minimize f(x) subject to g_1(x) ≤ 0, ..., g_k(x) ≤ 0    (1.1)

where f, g_1, ..., g_k are differentiable convex functions from the n-dimensional real space IR^n to IR, IR^{m_1}, ..., IR^{m_k}, respectively, with f being strongly convex on IR^n. Our principal aim is to distribute the k constraint blocks among k parallel processors together with an appropriately modified objective functio...
On the convergence of augmented Lagrangian methods for constrained global optimization
 SIAM J. Optim
"... We analyze the local convergence rate of the augmented Lagrangian method in nonlinear semidefinite optimization. The presence of the positive semidefinite cone constraint requires extensive tools such as the singular value decomposition of matrices, an implicit function theorem for semismooth functi ..."
Abstract

Cited by 16 (6 self)
We analyze the local convergence rate of the augmented Lagrangian method in nonlinear semidefinite optimization. The presence of the positive semidefinite cone constraint requires extensive tools such as the singular value decomposition of matrices, an implicit function theorem for semismooth functions, and variational analysis on the projection operator in the symmetric matrix space. Without requiring strict complementarity, we prove that, under the constraint nondegeneracy condition and the strong second-order sufficient condition, the rate of convergence is linear and the ratio constant is proportional to 1/c, where c is the penalty parameter and exceeds a certain positive threshold. Key words: augmented Lagrangian method, nonlinear semidefinite programming, rate of convergence, variational analysis.
LOCAL CONVERGENCE OF EXACT AND INEXACT AUGMENTED LAGRANGIAN METHODS UNDER THE SECOND-ORDER SUFFICIENT OPTIMALITY CONDITION
, 2012
"... We establish local convergence and rate of convergence of the classical augmented Lagrangian algorithm under the sole assumption that the dual starting point is close to a multiplier satisfying the secondorder sufficient optimality condition. In particular, no constraint qualifications of any kind ..."
Abstract

Cited by 8 (4 self)
We establish local convergence and rate of convergence of the classical augmented Lagrangian algorithm under the sole assumption that the dual starting point is close to a multiplier satisfying the second-order sufficient optimality condition. In particular, no constraint qualifications of any kind are needed. Previous literature on the subject required, in addition, the linear independence constraint qualification and either the strict complementarity assumption or a stronger version of the second-order sufficient condition. That said, the classical results allow the initial multiplier estimate to be far from the optimal one, at the expense of proportionally increasing the threshold value for the penalty parameters. Although our primary goal is to avoid constraint qualifications, if the stronger assumptions are introduced, then starting points far from the optimal multiplier are allowed within our analysis as well. Using only the second-order sufficient optimality condition, for penalty parameters large enough we prove primal-dual Q-linear convergence rate, which becomes superlinear if the parameters are allowed to go to infinity. Both exact and inexact solutions of subproblems are considered. In the exact case, we further show that the primal convergence rate is of the same Q-order as the primal-dual rate. Previous assertions for the primal sequence all had to do with the weaker R-rate of convergence and required the stronger assumptions cited above. Finally, we show that under our assumptions one of the popular rules of controlling the penalty parameters ensures their boundedness.
Numerical Studies of Shape Optimization Problems in Elasticity using . . .
, 2001
"... this paper, the knowledge of its normal derivative suffices to evaluate the data appearing from the torsional rigidity. Invoking a Newton potential, the normal derivative can be represented by a DirichlettoNeumann map based on boundary integral operators, namely the single layer operator and the d ..."
Abstract

Cited by 6 (0 self)
this paper, the knowledge of its normal derivative suffices to evaluate the data appearing from the torsional rigidity. Invoking a Newton potential, the normal derivative can be represented by a Dirichlet-to-Neumann map based on boundary integral operators, namely the single layer operator and the double layer operator. The application of boundary elements for the discretization requires only a partition of the boundary. Therefore, we do not need a triangulation of the domain as for finite elements. In general, boundary element methods suffer from a major disadvantage: the corresponding system matrices are densely populated. Therefore, the complexity of solving such equations grows at least quadratically with the number of equations. This fact seriously restricts the maximal size of the linear systems. Modern methods for the fast solution of BEM reduce the complexity to a suboptimal or even an optimal, that is linear, rate. Prominent examples of such methods are the fast multipole method by Greengard and Rokhlin [18] and the panel clustering by Hackbusch and Nowak [20]. As first observed by Beylkin, Coifman and Rokhlin [4], the wavelet Galerkin scheme offers another tool for the fast solution of integral equations. In fact, a Galerkin discretization based on wavelet bases results in numerically sparse matrices, i.e., many matrix entries are negligible and can be treated as zero. Discarding these non-relevant matrix entries is called matrix compression. In accordance with Dahmen et al. [7, 10, 9, 28], this can be performed without compromising the accuracy of the underlying Galerkin scheme. As shown by Dahmen, Harbrecht and Schneider in [7, 23, 28], the wavelet Galerkin scheme has an optimal overall complexity. The paper is organized as follows. Section 1 is...
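The matrix-compression idea described above can be illustrated in a few lines (a toy kernel and threshold of our own, standing in for a wavelet-transformed boundary integral operator): entries below a tolerance are discarded, leaving a numerically sparse matrix at a small, controllable accuracy loss.

```python
import numpy as np

# Matrix-compression sketch: in a suitable basis a discretized integral
# operator has many numerically negligible entries; dropping them
# yields a sparse matrix. Kernel and threshold are illustrative only.
n = 200
i = np.arange(n)
K = 1.0 / (1.0 + (i[:, None] - i[None, :]) ** 2)  # smooth decaying kernel
tol = 1e-3
K_c = np.where(np.abs(K) >= tol, K, 0.0)           # discard small entries
density = np.count_nonzero(K_c) / K.size
err = np.linalg.norm(K - K_c) / np.linalg.norm(K)
print(f"kept {density:.1%} of entries, relative error {err:.1e}")
```

Most entries fall below the threshold, yet the relative Frobenius-norm error stays far smaller than the tolerance would suggest, which is the effect the wavelet compression estimates of Dahmen et al. make rigorous.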
Low Order-Value Optimization and Applications
, 2005
"... Given r real functions F1(x),..., Fr(x) and an integer p between 1 and r, the Low OrderValue Optimization problem (LOVO) consists of minimizing the sum of the functions that take the p smaller values. If (y1,..., yr) is a vector of data and T (x, ti) is the predicted value of the observation i with ..."
Abstract

Cited by 6 (4 self)
Given r real functions F1(x),..., Fr(x) and an integer p between 1 and r, the Low Order-Value Optimization problem (LOVO) consists of minimizing the sum of the functions that take the p smallest values. If (y1,..., yr) is a vector of data and T (x, ti) is the predicted value of the observation i with the parameters x ∈ IR^n, it is natural to define Fi(x) = (T (x, ti) − yi)^2 (the quadratic error at observation i under the parameters x). When p = r this LOVO problem coincides with the classical nonlinear least-squares problem. However, the interesting situation is when p is smaller than r. In that case, the solution of LOVO allows one to discard the influence of an estimated number of outliers. Thus, the LOVO problem is an interesting tool for robust estimation of parameters of nonlinear models. When p ≪ r the LOVO problem may be used to find hidden structures in data sets. One of the most successful applications is the Protein Alignment problem. Fully documented algorithms for this application are available at www.ime.unicamp.br/∼martinez/lovoalign. In this paper optimality conditions are discussed, algorithms for solving the LOVO problem are introduced and convergence theorems are proved. Finally, numerical experiments are presented.
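The LOVO objective itself is easy to state in code (toy data of our own, with one gross outlier, for a linear model T(x, t) = x0 + x1·t):

```python
import numpy as np

# LOVO objective: sum the p smallest of the r residual functions
# F_i(x) = (T(x, t_i) - y_i)^2, here for the linear model
# T(x, t) = x0 + x1*t. Data are illustrative; point i = 4 is an outlier.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 5.0, 7.0, 100.0])  # y = 1 + 2t except the outlier

def lovo(x, p):
    residuals = (x[0] + x[1] * t - y) ** 2  # F_1(x), ..., F_r(x)
    return np.sort(residuals)[:p].sum()     # keep only the p smallest

x_true = np.array([1.0, 2.0])
print(lovo(x_true, p=4))  # → 0.0: the four inliers fit exactly
print(lovo(x_true, p=5))  # → 8281.0: now includes the outlier's error
```

With p = 4 the true parameters attain objective value zero despite the outlier, which is exactly the robustness property the abstract describes; with p = r = 5 the problem reduces to ordinary least squares and the outlier dominates.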
Second-order negative-curvature methods for box-constrained and general constrained optimization
, 2009
"... A Nonlinear Programming algorithm that converges to secondorder stationary points is introduced in this paper. The main tool is a secondorder negativecurvature method for boxconstrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is ..."
Abstract

Cited by 6 (0 self)
A Nonlinear Programming algorithm that converges to second-order stationary points is introduced in this paper. The main tool is a second-order negative-curvature method for box-constrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is used to define an Augmented Lagrangian algorithm of PHR (Powell-Hestenes-Rockafellar) type. Convergence proofs under weak constraint qualifications are given. Numerical examples showing that the new method converges to second-order stationary points in situations in which first-order methods fail are exhibited.