Results 1–10 of 11
An Interior-Point Algorithm for Large-Scale Nonlinear Optimization with Inexact Step Computations, SIAM Journal on Scientific Computing
Cited by 3 (0 self)
Abstract. We propose a sequential quadratic optimization method for solving nonlinear constrained optimization problems. The novel feature of the algorithm is that, during each iteration, the primal-dual search direction is allowed to be an inexact solution of a given quadratic optimization subproblem. We present a set of generic, loose conditions that the search direction (i.e., inexact subproblem solution) must satisfy so that global convergence of the algorithm for solving the nonlinear problem is guaranteed. The algorithm can be viewed as a globally convergent inexact Newton-based method. The results of numerical experiments are provided to illustrate the reliability and efficiency of the proposed numerical method.
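The abstract above frames the method as a globally convergent inexact Newton-based approach. As a rough, hypothetical sketch (not the paper's actual step-acceptance conditions, which are more elaborate), the classical inexact Newton framework for a system F(x) = 0 accepts a step d whenever the linear residual is a fixed fraction of the nonlinear residual, i.e. ||J d + F|| ≤ η ||F|| with 0 ≤ η < 1:

```python
# Sketch of the classical inexact Newton residual test. The names and the
# constant eta are illustrative; the paper imposes richer conditions on
# primal-dual directions.
def inexact_newton_ok(J, F, d, eta=0.5):
    """Accept step d if ||J d + F|| <= eta * ||F|| (Euclidean norms)."""
    n = len(F)
    residual = [sum(J[i][j] * d[j] for j in range(n)) + F[i]
                for i in range(n)]
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return norm(residual) <= eta * norm(F)
```

An exact Newton step makes the residual zero and always passes the test; a zero step leaves the residual at ||F|| and fails for any η < 1.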
Multilevel algorithms for large-scale interior point methods in bound constrained optimization, tech. report
, 2006
Cited by 2 (0 self)
Abstract. We develop and compare multilevel algorithms for solving bound constrained nonlinear variational problems via interior point methods. Several equivalent formulations of the linear systems arising at each iteration of the interior point method are compared from the point of view of conditioning and iterative solution. Furthermore, we show how a multilevel continuation strategy can be used to obtain good initial guesses (“hot starts”) for each nonlinear iteration. A minimal surface problem is used to illustrate the various approaches.
A SECOND DERIVATIVE SQP METHOD: THEORETICAL ISSUES ∗
, 2008
Sequential quadratic programming (SQP) methods form a class of highly efficient algorithms for solving nonlinearly constrained optimization problems. Although second derivative information may often be calculated, there is little practical theory that justifies exact-Hessian SQP methods. In particular, the resulting quadratic programming (QP) subproblems are often nonconvex, and thus finding their global solutions may be computationally nonviable. This paper presents a second-derivative SQP method based on quadratic subproblems that are either convex, and thus may be solved efficiently, or need not be solved globally. Additionally, an explicit descent constraint is imposed on certain QP subproblems, which “guides” the iterates through areas in which nonconvexity is a concern. Global convergence of the resulting algorithm is established. Key words. Nonlinear programming, nonlinear inequality constraints, sequential quadratic programming, ℓ1-penalty function, nonsmooth optimization
A SECOND-DERIVATIVE TRUST-REGION SQP METHOD WITH A “TRUST-REGION-FREE” PREDICTOR STEP ∗
, 2009
A SECOND DERIVATIVE SQP METHOD WITH IMPOSED DESCENT ∗
, 2008
Sequential quadratic programming (SQP) methods form a class of highly efficient algorithms for solving nonlinearly constrained optimization problems. Although second derivative information may often be calculated, there is little practical theory that justifies exact-Hessian SQP methods. In particular, the resulting quadratic programming (QP) subproblems are often nonconvex, and thus finding their global solutions may be computationally nonviable. This paper presents a second-derivative Sℓ1QP method based on quadratic subproblems that are either convex, and thus may be solved efficiently, or need not be solved globally. Additionally, an explicit descent constraint is imposed on certain QP subproblems, which “guides” the iterates through areas in which nonconvexity is a concern. Global convergence of the resulting algorithm is established. Key words. Nonlinear programming, nonlinear inequality constraints, sequential quadratic programming, ℓ1 penalty function, nonsmooth optimization
Problem (NLP):
The filter method is a technique for solving nonlinear programming problems. The filter algorithm has two phases in each iteration. The first one reduces a measure of infeasibility, while in the second the objective function value is reduced. In real optimization problems, usually the objective function is not differentiable or its derivatives are unknown. In these cases it becomes essential to use optimization methods where the calculation of the derivatives or the verification of their existence is not necessary: direct search methods or derivative-free methods are examples of such techniques. In this work we present a new direct search method, based on simplex methods, for general constrained optimization that combines the features of simplex and filter methods. This method neither computes nor approximates derivatives, penalty constants or Lagrange multipliers.
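The core mechanism shared by filter methods like the one described above is a dominance test: a trial point with infeasibility h and objective f is accepted only if no stored filter entry is better in both measures. A minimal sketch of that test (names, margin constant, and envelope rule are illustrative assumptions, not this paper's exact definitions):

```python
# Hypothetical sketch of a filter acceptance test. A filter is a list of
# (infeasibility, objective) pairs; a trial point must improve on at least
# one measure relative to every entry, with a small margin gamma enforcing
# sufficient decrease.

def is_acceptable(filter_entries, h, f, gamma=1e-5):
    """Return True if (h, f) is not dominated by any filter entry."""
    for h_i, f_i in filter_entries:
        if h >= (1 - gamma) * h_i and f >= f_i - gamma * h_i:
            return False  # dominated: improves neither measure enough
    return True

def add_to_filter(filter_entries, h, f):
    """Insert (h, f) and drop any entries it dominates."""
    kept = [(h_i, f_i) for h_i, f_i in filter_entries
            if not (h <= h_i and f <= f_i)]
    kept.append((h, f))
    return kept
```

Because acceptance only requires improving one of the two measures, no penalty constant weighing feasibility against optimality is needed, which is exactly the property the abstract exploits for derivative-free search.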
WITH RAPID INFEASIBILITY DETECTION
Abstract. We present a sequential quadratic optimization (SQO) algorithm for nonlinear constrained optimization. The method attains all of the strong global and fast local convergence guarantees of classical SQO methods, but has the important additional feature that fast local convergence is guaranteed when the algorithm is employed to solve infeasible instances. A two-phase strategy, carefully constructed parameter updates, and a line search are employed to promote such convergence. The first phase subproblem determines the highest level of improvement in linearized feasibility that can be attained locally. The second phase subproblem then seeks optimality in such a way that the resulting search direction attains a level of improvement in linearized feasibility that is proportional to that attained in the first phase. The subproblem formulations and parameter updates ensure that near an optimal solution, the algorithm reduces to a classical SQO method for optimization, and near an infeasible stationary point, the algorithm reduces to a (perturbed) SQO method for minimizing constraint violation. Global and local convergence guarantees for the algorithm are proved under common assumptions and numerical results are presented for a large set of test problems.
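The proportionality condition linking the two phases can be illustrated concretely. For a linearized feasibility measure l(d) = ||c + J d||_1, phase one yields the best locally attainable reduction; the phase-two direction is then acceptable only if it achieves a fixed fraction of that reduction. The sketch below is illustrative (the function names, the choice of the ℓ1 norm, and the fraction eps are assumptions, not the paper's subproblem definitions):

```python
# Hypothetical check of the two-phase proportional-reduction idea:
# phase two's direction must recover at least eps times the best
# reduction in linearized infeasibility found by phase one.

def linearized_infeasibility(c, J, d):
    """l(d) = ||c + J d||_1 for constraint values c and Jacobian J."""
    return sum(abs(ci + sum(Jij * dj for Jij, dj in zip(row, d)))
               for ci, row in zip(c, J))

def meets_proportional_reduction(c, J, d_phase2, best_reduction, eps=0.1):
    l0 = linearized_infeasibility(c, J, [0.0] * len(d_phase2))
    reduction = l0 - linearized_infeasibility(c, J, d_phase2)
    return reduction >= eps * best_reduction
```

Near a feasible optimum the condition is easy to satisfy and the method behaves like classical SQO; near an infeasible stationary point it forces the direction toward reducing constraint violation.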
A Penalty-Interior-Point Algorithm for Nonlinear Constrained Optimization
, 2011
Abstract Penalty and interior-point methods for nonlinear optimization problems have enjoyed great successes for decades. Penalty methods have proved to be effective for a variety of problem classes due to their regularization effects on the constraints. They have also been shown to allow for rapid infeasibility detection. Interior-point methods have become the workhorse in large-scale optimization due to their Newton-like qualities, both in terms of their scalability and convergence behavior. Each of these two strategies, however, has certain disadvantages that make its use either impractical or inefficient for certain classes of problems. The goal of this paper is to present a penalty-interior-point method that possesses the advantages of penalty and interior-point techniques, but does not suffer from their disadvantages. Numerous attempts have been made along these lines in recent years, each with varying degrees of success. The novel feature of the algorithm in this paper is that our focus is not only on the formulation of the penalty-interior-point subproblem itself, but on the design of updates for the penalty and interior-point parameters. The updates we propose are designed so that rapid convergence to a solution of the nonlinear optimization problem or an infeasible stationary point is attained. We motivate the convergence properties of our algorithm and illustrate its practical performance on a large set of problems, including sets of problems that exhibit degeneracy or are infeasible.
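To make the emphasis on parameter updates concrete, here is a deliberately simplified sketch in the spirit of penalty-interior-point methods: a penalty parameter rho weights the objective against constraint violation, and a barrier parameter mu controls proximity to the boundary. The tests and constants below are assumptions for illustration only, not the updates proposed in the paper:

```python
# Hypothetical penalty/barrier parameter update. Shrinking mu drives the
# barrier subproblems toward the true problem; shrinking rho shifts weight
# from the objective to feasibility, which is what enables rapid detection
# of infeasible instances.

def update_parameters(rho, mu, violation, prev_violation,
                      kkt_error, tol=1e-8):
    # Reduce the barrier parameter once the current barrier subproblem
    # is solved accurately enough.
    if kkt_error <= mu:
        mu = max(0.1 * mu, tol)
    # If constraint violation stagnates, put more weight on feasibility
    # by shrinking the penalty parameter on the objective.
    if violation > 0.9 * prev_violation and violation > tol:
        rho = 0.5 * rho
    return rho, mu
```

In a production method these updates would be coupled to the subproblem solves; the point of the sketch is only the division of labor between the two parameters.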
Adaptive Augmented Lagrangian Methods for Large-Scale Equality Constrained Optimization
, 2012
Abstract We propose an augmented Lagrangian algorithm for solving large-scale equality constrained optimization problems. The novel feature of the algorithm is an adaptive update for the penalty parameter motivated by recently proposed techniques for exact penalty methods. This adaptive updating scheme greatly improves the overall performance of the algorithm without sacrificing the strengths of the core augmented Lagrangian framework, such as its attractive local convergence behavior and ability to be implemented matrix-free. This latter strength is particularly important due to interests in employing augmented Lagrangian algorithms for solving large-scale optimization problems. We focus on a trust region algorithm, but also propose a line search algorithm that employs the same adaptive penalty parameter updating scheme. We provide theoretical results related to the global convergence behavior of our algorithms and illustrate by a set of numerical experiments that they outperform traditional augmented Lagrangian methods in terms of critical performance measures. Keywords nonlinear optimization · nonconvex optimization · large-scale
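As a minimal sketch of the augmented Lagrangian framework with an adaptive penalty parameter, consider minimizing f(x) subject to c(x) = 0 in one dimension: each outer iteration approximately minimizes L_A(x) = f(x) + λ c(x) + (ρ/2) c(x)², then either updates the multiplier (if feasibility improved sufficiently) or increases ρ. The gradient-descent inner solver, constants, and progress test are illustrative assumptions, not the trust-region machinery or the exact-penalty-motivated update of the paper:

```python
# Hypothetical adaptive-penalty augmented Lagrangian loop (1-D sketch).
def augmented_lagrangian(f_grad, c, c_grad, x, lam, rho=10.0,
                         outer_iters=20, eta=0.25, tol=1e-8):
    prev_violation = abs(c(x))
    for _ in range(outer_iters):
        # Approximately minimize L_A(x) = f(x) + lam*c(x) + (rho/2)*c(x)**2
        # by gradient descent, with a step sized to the penalty strength.
        step = 1.0 / (1.0 + rho)
        for _ in range(200):
            g = f_grad(x) + (lam + rho * c(x)) * c_grad(x)
            if abs(g) < tol:
                break
            x -= step * g
        violation = abs(c(x))
        if violation <= eta * prev_violation:
            lam += rho * c(x)      # sufficient progress: multiplier update
            prev_violation = violation
        else:
            rho *= 10.0            # stagnation: adaptively increase penalty
    return x, lam, rho
```

Because the multiplier update only needs gradients of f and products with the constraint Jacobian, this structure is what allows matrix-free implementations at scale.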