Results 1–10 of 25
A Newton-CG augmented Lagrangian method for semidefinite programming
SIAM J. Optim.
Cited by 64 (14 self)
Abstract. We consider a Newton-CG augmented Lagrangian method for solving semidefinite programming (SDP) problems from the perspective of approximate semismooth Newton methods. In order to analyze the rate of convergence of our proposed method, we characterize the Lipschitz continuity of the corresponding solution mapping at the origin. For the inner problems, we show that the positive definiteness of the generalized Hessian of the objective function in these inner problems, a key property for ensuring the efficiency of using an inexact semismooth Newton-CG method to solve the inner problems, is equivalent to the constraint nondegeneracy of the corresponding dual problems. Numerical experiments on a variety of large-scale SDPs with the matrix dimension n up to 4,110 and the number of equality constraints m up to 2,156,544 show that the proposed method is very efficient. We are also able to solve the SDP problem fap36 (with n = 4,110 and m = 1,154,467) in the Seventh DIMACS Implementation Challenge much more accurately than previous attempts.
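The outer loop of an augmented Lagrangian method of this kind can be sketched on a toy problem. The quadratic objective, single linear constraint, and closed-form inner solve below are illustrative assumptions, not the paper's SDP setting, where the inner problems are solved by an inexact semismooth Newton-CG method.

```python
import numpy as np

# Augmented Lagrangian sketch for: minimize 0.5*||x||^2  subject to  a^T x = b,
# with L_c(x, lam) = 0.5*||x||^2 + lam*(a@x - b) + 0.5*c*(a@x - b)**2.
def alm(a, b, c=10.0, iters=50):
    lam = 0.0
    x = np.zeros_like(a)
    for _ in range(iters):
        # Inner problem: minimize L_c(., lam) exactly (closed form for this toy f).
        # Stationarity x + (lam + c*(a@x - b))*a = 0 lets us solve for s = a@x first.
        s = (c * b - lam) * (a @ a) / (1.0 + c * (a @ a))
        x = -(lam + c * (s - b)) * a
        # Standard multiplier update on the constraint residual.
        lam = lam + c * (a @ x - b)
    return x, lam

x, lam = alm(np.ones(4), 1.0)
# KKT point of the toy problem: x_i = 1/4 and lam = -1/4.
```

For this toy problem each outer iteration contracts the multiplier error by roughly 1/(1 + c‖a‖²); the kind of Q-linear behavior the paper's convergence analysis quantifies in far greater generality.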
Global minimization using an Augmented Lagrangian method with variable lower-level constraints
, 2007
Cited by 39 (1 self)
A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the εk-global minimization of the Augmented Lagrangian with simple constraints, where εk → ε. Global convergence to an ε-global minimizer of the original problem is proved. The subproblems are solved using the αBB method. Numerical experiments are presented.
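A minimal sketch of that outer loop, with a dense 1-D grid search standing in for the αBB inner solver (a deliberate simplification; the toy problem, bounds, and tolerances are illustrative assumptions):

```python
import numpy as np

def eps_global_min(f, lo, hi, eps):
    # Grid-search stand-in for the alpha-BB inner solver: the grid is fine
    # enough that the best grid point is within eps of the global minimum
    # for this well-behaved toy problem.
    xs = np.linspace(lo, hi, int((hi - lo) / eps) * 4 + 2)
    return xs[np.argmin(f(xs))]

def alm_global(f, h, lo, hi, eps=1e-3, c=10.0, iters=30):
    # Outer loop: eps_k-global minimization of the augmented Lagrangian,
    # with eps_k -> eps, followed by the usual multiplier update.
    lam = 0.0
    x = lo
    for k in range(iters):
        eps_k = max(eps, 0.5 ** k)
        L = lambda t: f(t) + lam * h(t) + 0.5 * c * h(t) ** 2
        x = eps_global_min(L, lo, hi, eps_k)
        lam += c * h(x)
    return x

# Toy instance: minimize x^2 subject to x = 1 over the box [-2, 2];
# the global solution is x = 1.
x_star = alm_global(f=lambda t: t ** 2, h=lambda t: t - 1.0, lo=-2.0, hi=2.0)
```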
Correlation stress testing for value-at-risk: an unconstrained convex optimization approach
, 2010
An augmented Lagrangian dual approach for the H-weighted nearest correlation matrix problem
, 2010
LOCAL CONVERGENCE OF EXACT AND INEXACT AUGMENTED LAGRANGIAN METHODS UNDER THE SECOND-ORDER SUFFICIENT OPTIMALITY CONDITION
, 2012
Cited by 15 (5 self)
We establish local convergence and rate of convergence of the classical augmented Lagrangian algorithm under the sole assumption that the dual starting point is close to a multiplier satisfying the second-order sufficient optimality condition. In particular, no constraint qualifications of any kind are needed. Previous literature on the subject required, in addition, the linear independence constraint qualification and either the strict complementarity assumption or a stronger version of the second-order sufficient condition. That said, the classical results allow the initial multiplier estimate to be far from the optimal one, at the expense of proportionally increasing the threshold value for the penalty parameters. Although our primary goal is to avoid constraint qualifications, if the stronger assumptions are introduced, then starting points far from the optimal multiplier are allowed within our analysis as well. Using only the second-order sufficient optimality condition, for penalty parameters large enough we prove primal-dual Q-linear convergence rate, which becomes superlinear if the parameters are allowed to go to infinity. Both exact and inexact solutions of subproblems are considered. In the exact case, we further show that the primal convergence rate is of the same Q-order as the primal-dual rate. Previous assertions for the primal sequence all had to do with the weaker R-rate of convergence and required the stronger assumptions cited above. Finally, we show that under our assumptions one of the popular rules of controlling the penalty parameters ensures their boundedness.
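The penalty-parameter control referred to in the last sentence can be illustrated by the common textbook variant below: increase the penalty only when the constraint violation fails to decrease by a fixed factor. The constants and the toy violation sequence are assumptions for illustration, not the paper's exact rule.

```python
# Increase the penalty c only when the constraint violation fails to shrink
# by the factor theta; if the violation converges fast enough, c is never
# increased again, so the penalties stay bounded.
def update_penalty(c, viol, prev_viol, theta=0.25, gamma=10.0):
    return c if viol <= theta * prev_viol else gamma * c

c = 1.0
violations = [1.0, 0.2, 0.1, 0.02]   # toy sequence of constraint violations
for prev, cur in zip(violations, violations[1:]):
    c = update_penalty(c, cur, prev)
# Only the 0.2 -> 0.1 step misses the factor-of-4 decrease, so the penalty
# is increased exactly once.
```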
A Modified Alternating Direction Method for Convex Quadratically Constrained Quadratic Semidefinite Programs
, 2009
Cited by 13 (0 self)
We propose a modified alternating direction method for solving convex quadratically constrained quadratic semidefinite optimization problems. The method is a first-order method and therefore requires much less computational effort per iteration than second-order approaches such as interior point methods or smoothing Newton methods. At each iteration only a single inexact metric projection onto the positive semidefinite cone is required. We prove the global convergence of this method.
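The per-iteration workhorse here, the metric projection onto the positive semidefinite cone, reduces to eigenvalue clipping under the standard (identity-weighted) metric; a minimal sketch, with an illustrative 2×2 example:

```python
import numpy as np

def proj_psd(M):
    # Nearest PSD matrix in Frobenius norm: symmetrize, then zero out
    # the negative eigenvalues in the spectral decomposition.
    M = 0.5 * (M + M.T)
    w, V = np.linalg.eigh(M)
    return (V * np.maximum(w, 0.0)) @ V.T

A = np.array([[1.0, 2.0], [2.0, 1.0]])   # eigenvalues 3 and -1
P = proj_psd(A)
# P keeps the eigenvalue-3 component and drops the negative one,
# giving the constant matrix with all entries 1.5.
```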
Glare Point
Applied Optics, 1991
Cited by 3 (0 self)

Solving log-determinant optimization problems by a Newton-CG primal proximal
A PROXIMAL POINT ALGORITHM FOR LOG-DETERMINANT OPTIMIZATION WITH GROUP LASSO REGULARIZATION
Cited by 2 (0 self)
We consider the covariance selection problem where variables are clustered into groups and the inverse covariance matrix is expected to have a blockwise sparse structure. This problem is realized via penalizing the maximum likelihood estimation of the inverse covariance matrix by group Lasso regularization. We propose to solve the resulting log-determinant optimization problem by the classical proximal point algorithm (PPA). At each iteration, as it is difficult to update the primal variables directly, we first solve the dual subproblem by a Newton-CG method and then update the primal variables by explicit formulas based on the computed dual variables. We also propose to accelerate the PPA by an inexact generalized Newton’s method when the iterate is close to the solution. Theoretically, we prove that, at the optimal solution, the negative definiteness of the generalized Hessian matrices of the dual objective function is equivalent to the constraint nondegeneracy condition for the primal problem. Global and local convergence results are also presented for the proposed PPA. Moreover, based on the augmented Lagrangian function of the dual problem we derive an alternating direction method (ADM), which is easily implementable, and demonstrated to be efficient for some random problems. Numerical results, including comparisons with the ADM, are presented to demonstrate that the proposed Newton-CG based PPA is stable, efficient and, in particular, outperforms the ADM, especially when higher accuracy is required.
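The group Lasso regularizer enters algorithms of this kind through its proximal map, which acts as block-wise soft-thresholding: each group is shrunk toward zero, and zeroed out entirely when its norm falls below the threshold. The sketch below uses a hypothetical flat-vector group structure rather than the paper's matrix blocks:

```python
import numpy as np

def prox_group_lasso(x, groups, tau):
    # Block-wise soft-thresholding: shrink each group's norm by tau,
    # mapping groups with norm <= tau exactly to zero.
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > tau:
            out[g] = (1.0 - tau / norm) * x[g]
    return out

x = np.array([3.0, 4.0, 0.3, 0.4])
z = prox_group_lasso(x, [[0, 1], [2, 3]], tau=1.0)
# First group (norm 5) shrinks to norm 4; second group (norm 0.5) is zeroed.
```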
Numerical Algorithms for a Class of Matrix Norm Approximation Problems
, 2012
Cited by 2 (2 self)
This thesis focuses on designing robust and efficient algorithms for a class of matrix norm approximation (MNA) problems that are to find an affine combination of given matrices having the minimal spectral norm subject to some prescribed linear equality and inequality constraints. These problems arise often in numerical algebra,
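The basic objective evaluation in an MNA problem, the spectral norm of an affine combination of given matrices, can be sketched directly; the matrices below are illustrative stand-ins:

```python
import numpy as np

def spectral_norm_affine(A0, As, y):
    # Evaluate ||A0 + sum_k y_k * A_k||_2, the largest singular value
    # of the affine combination (a dense, direct evaluation).
    M = A0 + sum(yk * Ak for yk, Ak in zip(y, As))
    return np.linalg.norm(M, 2)

A0 = np.eye(2)
As = [np.array([[0.0, 1.0], [1.0, 0.0]])]
val = spectral_norm_affine(A0, As, [1.0])
# The combination [[1, 1], [1, 1]] has singular values 2 and 0, so val == 2.
```

The optimization problem minimizes this nonsmooth function over y subject to linear constraints, which is what makes specialized algorithms worthwhile.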
Augmented Lagrangians with possible infeasibility and finite termination for global nonlinear programming
, 2012
Cited by 2 (0 self)
In a recent paper, Birgin, Floudas and Martínez introduced an augmented Lagrangian method for global optimization. In their approach, augmented Lagrangian subproblems are solved using the αBB method and convergence to global minimizers was obtained assuming feasibility of the original problem. In the present research, the algorithm mentioned above will be improved in several crucial aspects. On the one hand, feasibility of the problem will not be required. Possible infeasibility will be detected in finite time by the new algorithms and optimal infeasibility results will be proved. On the other hand, finite termination results that guarantee optimality and/or feasibility up to any required precision will be provided. An adaptive modification in which subproblem tolerances depend on current feasibility and complementarity will also be given. The adaptive algorithm allows the augmented Lagrangian subproblems to be solved without requiring unnecessary potentially high precisions in the intermediate steps of the method, which improves the overall efficiency. Experiments showing how the new algorithms and results are related to practical computations will be given.
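The adaptive idea can be illustrated with a one-line rule: demand of the next subproblem only an accuracy proportional to the current infeasibility and complementarity measures, floored at the final target tolerance. The specific proportionality factor below is an assumption for illustration, not the paper's rule.

```python
# Early subproblems are solved loosely; full precision is only requested
# once the iterates are nearly feasible and complementary.
def next_subproblem_tol(feas, compl_viol, target_tol, factor=0.1):
    return max(target_tol, factor * max(feas, compl_viol))

tol = next_subproblem_tol(feas=1e-2, compl_viol=1e-3, target_tol=1e-8)
# Yields about 1e-3: an order of magnitude tighter than the current
# infeasibility, but far looser than the final target of 1e-8.
```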