Results 1 - 9 of 9
On the implementation of an algorithm for large-scale equality constrained optimization
SIAM Journal on Optimization, 1998
Cited by 38 (11 self)
Abstract. This paper describes a software implementation of Byrd and Omojokun's trust region algorithm for solving nonlinear equality constrained optimization problems. The code is designed for the efficient solution of large problems and provides the user with a variety of linear algebra techniques for solving the subproblems occurring in the algorithm. Second derivative information can be used, but when it is not available, limited memory quasi-Newton approximations are made. The performance of the code is studied using a set of difficult test problems from the CUTE collection.
Trust Region Algorithms For Constrained Optimization
Math. Prog., 1990
Cited by 24 (6 self)
We review the main techniques used in trust region algorithms for nonlinear constrained optimization. 1. Trust Region Idea. Constrained optimization is to minimize a function subject to finitely many algebraic equality and inequality conditions. It has the following form: min_{x ∈ ℝ^n} f(x) (1.1) subject to c_i(x) = 0, i = 1, 2, ..., m_e, (1.2) and c_i(x) ≥ 0, i = m_e + 1, ..., m, (1.3) where f(x) and c_i(x) (i = 1, ..., m) are real functions defined on ℝ^n, and m ≥ m_e are two nonnegative integers. Numerical methods for nonlinear optimization problems can be grouped into two types: one is line search methods, the other is trust region algorithms. Line search algorithms at each iteration use a direction to carry out a line search. This direction, called the search direction, is normally computed by solving a subproblem that approximates the original problem near the current iterate. A line search means searching for a new point along the search direction. For example, ...
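The line-search idea described in this abstract can be illustrated with a minimal monotone backtracking Armijo search. This is a hedged sketch, not code from the paper; the test function and default constants are assumptions chosen for illustration.

```python
import numpy as np

def armijo_line_search(f, grad, x, d, c=1e-4, rho=0.5, max_iter=50):
    """Backtracking Armijo search: shrink the step until sufficient decrease holds."""
    fx = f(x)
    slope = grad(x) @ d          # directional derivative; negative for a descent direction
    alpha = 1.0
    for _ in range(max_iter):
        if f(x + alpha * d) <= fx + c * alpha * slope:
            break                # sufficient-decrease (Armijo) condition satisfied
        alpha *= rho             # otherwise backtrack
    return alpha

# Illustration on f(x) = x1^2 + 4*x2^2, stepping along the steepest-descent direction.
f = lambda x: x[0] ** 2 + 4 * x[1] ** 2
grad = lambda x: np.array([2 * x[0], 8 * x[1]])
x0 = np.array([2.0, 1.0])
d0 = -grad(x0)
alpha = armijo_line_search(f, grad, x0, d0)
```

The accepted step is guaranteed to reduce f, which is exactly the (monotone) property that the nonmonotone schemes reviewed elsewhere in these results deliberately relax.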
Nonmonotone Line Search for Minimax Problems
1993
Cited by 10 (2 self)
It was recently shown that, in the solution of smooth constrained optimization problems by sequential quadratic programming (SQP), the Maratos effect can be prevented by means of a certain nonmonotone (more precisely, three-step or four-step monotone) line search. Using a well-known transformation, this scheme can be readily extended to the case of minimax problems. It turns out, however, that, due to the structure of these problems, one can use a simpler scheme. Such a scheme is proposed and analyzed in this paper. Numerical experiments indicate a significant advantage of the proposed line search over the (monotone) Armijo search. Key words: minimax problems, SQP direction, Maratos effect, superlinear convergence. 1 This research was supported in part by NSF's Engineering Research Centers Program No. NSFD-CDR-88-03012, by NSF grant No. DMC-8815996, and by a grant from the Westinghouse Corporation. 2 To whom correspondence should be addressed. 1. Introduction. Consider the "m...
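The nonmonotone acceptance idea this abstract builds on can be sketched as follows. This is an illustrative GLL-style (Grippo-Lampariello-Lucidi) variant, not the paper's exact scheme: the trial value is compared against the worst of the last few function values rather than the current one, so occasional increases are tolerated.

```python
import numpy as np
from collections import deque

def nonmonotone_armijo(f, grad, x, d, history, c=1e-4, rho=0.5, max_iter=50):
    """Accept a step if f decreases relative to the WORST recent iterate.
    Names and default constants here are illustrative assumptions."""
    ref = max(history)           # reference value over the memory window
    slope = grad(x) @ d
    alpha = 1.0
    for _ in range(max_iter):
        if f(x + alpha * d) <= ref + c * alpha * slope:
            break
        alpha *= rho
    return alpha

# Usage: keep a short window of past f-values in a bounded deque.
f = lambda x: x[0] ** 2 + 4 * x[1] ** 2
grad = lambda x: np.array([2 * x[0], 8 * x[1]])
x = np.array([2.0, 1.0])
history = deque([f(x)], maxlen=4)
for _ in range(10):
    d = -grad(x)
    alpha = nonmonotone_armijo(f, grad, x, d, history)
    x = x + alpha * d
    history.append(f(x))
```

Because the reference value is a running maximum, full steps are accepted more often near the solution, which is what defeats the Maratos effect in the SQP setting.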
A New Technique For Inconsistent QP Problems In The SQP Method
University at Darmstadt, Department of Mathematics, preprint 1561, Darmstadt, 1993
Cited by 7 (2 self)
Successful treatment of inconsistent QP problems is of major importance in the SQP method, since such problems occur quite often even for well-behaved nonlinear programming problems. This paper presents a new technique for regularizing inconsistent QP problems, which in its properties compromises between the simple technique of Pantoja and Mayne [34] and the highly successful, but expensive, one of Tone [44]. Global convergence of a corresponding algorithm is shown under reasonably weak conditions. Numerical results are reported which show that this technique, combined with a special method for the case of regular subproblems, is quite competitive with highly regarded established ones. Key words: sequential quadratic programming, SQP method, nonlinear programming. AMS(MOS) subject classification: primary 90C30, secondary 65K05. 1. NOTATION. Superscripts on a vector denote elements of sequences. All vectors are column vectors. For a vector-valued function g, ∇g(x) denotes the transposed Jacobian eval...
A Gauss-Newton Method for Convex Composite Optimization
1993
Cited by 6 (0 self)
An extension of the Gauss-Newton method for nonlinear equations to convex composite optimization is described and analyzed. Local quadratic convergence is established for the minimization of h ∘ F under two conditions, namely that h has a set of weak sharp minima, C, and that there is a regular point of the inclusion F(x) ∈ C. This result extends a similar convergence result due to Womersley, which employs the assumption of a strongly unique solution of the composite function h ∘ F. A backtracking line search is proposed as a globalization strategy. For this algorithm, a global convergence result is established, with a quadratic rate under the regularity assumption. This material is based on research supported by National Science Foundation Grants CCR-9157632 and DMS-9102059 and Air Force Office of Scientific Research Grant AFOSR-89-0410. † Department of Mathematics, GN-50, University of Washington, Seattle, Washington 98195. ‡ Computer Sciences Department, University of Wisconsin, ...
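For the classical nonlinear-equations case that this abstract generalizes, the basic Gauss-Newton iteration can be sketched as below. The iteration and the Rosenbrock-type test system are illustrative assumptions, not material from the paper.

```python
import numpy as np

def gauss_newton(F, J, x, tol=1e-10, max_iter=50):
    """Gauss-Newton for min ||F(x)||^2: each step solves the linearized
    least-squares problem J(x) d ≈ -F(x) and updates x <- x + d."""
    for _ in range(max_iter):
        d, *_ = np.linalg.lstsq(J(x), -F(x), rcond=None)
        x = x + d
        if np.linalg.norm(d) < tol:
            break
    return x

# Assumed example: a square system with zero residual at (1, 1).
F = lambda x: np.array([x[0] - 1.0, 10.0 * (x[1] - x[0] ** 2)])
J = lambda x: np.array([[1.0, 0.0], [-20.0 * x[0], 10.0]])
sol = gauss_newton(F, J, np.array([-1.2, 1.0]))
```

In the composite setting of the paper, the linearized subproblem plays the same role: F is linearized and the outer convex function is minimized over that linearization.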
A Review of Trust Region Algorithms for Optimization
Cited by 4 (0 self)
Iterative methods for optimization can be classified into two categories: line search methods and trust region methods. In this paper we give a review of trust region algorithms for nonlinear optimization. Trust region methods are robust, and can be applied to ill-conditioned problems. A model trust region algorithm is presented to demonstrate the trust region approach. Various trust region subproblems and their properties are presented. Convergence properties of trust region algorithms are given. Techniques such as backtracking, nonmonotone schemes and second-order correction are also briefly discussed. 1. Introduction. Nonlinear optimization problems have the form min_{x ∈ ℝ^n} f(x) (1.1) subject to c_i(x) = 0, i = 1, 2, ..., m_e, (1.2) and c_i(x) ≥ 0, i = m_e + 1, ..., m, (1.3) where f(x) and c_i(x) (i = 1, ..., m) are real functions defined on ℝ^n, at least one of these functions is nonlinear, and m ≥ m_e are two nonnegative integers. If m = m_e = 0, problem (1.1) is an unconstrained ...
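A "model trust region algorithm" of the kind this review presents can be sketched as follows. This is a hedged illustration using a simple Cauchy-point step for the subproblem and conventional constants (0.25, 0.75, etc.), not the specific algorithm from the paper.

```python
import numpy as np

def trust_region(f, grad, hess, x, delta=1.0, delta_max=100.0, eta=0.1,
                 tol=1e-6, max_iter=200):
    """Model trust region loop: trial step from a quadratic model,
    radius updated by the ratio of actual to predicted reduction."""
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # Cauchy point: minimize the quadratic model along -g inside ||d|| <= delta.
        gBg = g @ B @ g
        tau = 1.0 if gBg <= 0 else min(1.0, gnorm ** 3 / (delta * gBg))
        d = -(tau * delta / gnorm) * g
        pred = -(g @ d + 0.5 * d @ B @ d)        # model-predicted reduction (> 0)
        ared = f(x) - f(x + d)                   # actual reduction
        rho = ared / pred
        if rho < 0.25:
            delta *= 0.25                        # model untrustworthy: shrink region
        elif rho > 0.75 and np.isclose(np.linalg.norm(d), delta):
            delta = min(2.0 * delta, delta_max)  # good boundary step: expand
        if rho > eta:
            x = x + d                            # accept the trial step
    return x

# Illustration on the convex quadratic f(x) = x1^2 + 4*x2^2 with minimizer 0.
f = lambda x: x[0] ** 2 + 4 * x[1] ** 2
grad = lambda x: np.array([2 * x[0], 8 * x[1]])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 8.0]])
sol = trust_region(f, grad, hess, np.array([2.0, 1.0]))
```

The ratio test is the distinctive feature relative to line search methods: the step length is never searched directly, only the region radius is adapted.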
Trust Region Algorithms for Nonlinear Equations
1994
Cited by 3 (1 self)
In this paper, we consider the problem of solving nonlinear equations F(x) = 0, where F: ℝ^n → ℝ^m is continuously differentiable. We study a class of general trust region algorithms for solving nonlinear equations by minimizing a given norm ‖F(x)‖. The trust region algorithm for nonlinear equations can be viewed as an extension of the Levenberg-Marquardt algorithm for nonlinear least squares. Global convergence of trust region algorithms for nonlinear equations is studied, and local convergence analyses are also given. Key words: nonlinear equation, trust region, convergence. 1. Introduction. We consider the problem of solving nonlinear equations: f_i(x) = 0, i = 1, ..., m (1.1) where the f_i(x) are nonlinear functions defined on ℝ^n. The system is called an overdetermined system if m > n, and an underdetermined system if m < n. Even if m = n, due to the nonlinearity of the f_i(x), system (1.1) may have no solutions. Hence, it is usual to minimize the residual: min_{x ∈ ℝ^n} ...
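The Levenberg-Marquardt connection drawn in this abstract can be sketched like this. The damping-update schedule (×10 / ×0.1) is an assumed textbook choice, and the test system is an illustrative assumption.

```python
import numpy as np

def levenberg_marquardt(F, J, x, lam=1e-3, tol=1e-10, max_iter=100):
    """Minimize ||F(x)|| via damped Gauss-Newton steps
    (J^T J + lam*I) d = -J^T F; lam acts as an implicit trust region."""
    n = x.size
    for _ in range(max_iter):
        Fx, Jx = F(x), J(x)
        d = np.linalg.solve(Jx.T @ Jx + lam * np.eye(n), -Jx.T @ Fx)
        if np.linalg.norm(F(x + d)) < np.linalg.norm(Fx):
            x, lam = x + d, lam * 0.1   # residual dropped: accept, loosen damping
        else:
            lam *= 10.0                 # residual grew: reject, tighten damping
        if np.linalg.norm(d) < tol:
            break
    return x

# Assumed square example: intersection of the unit circle with the line x1 = x2.
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
sol = levenberg_marquardt(F, J, np.array([1.0, 0.5]))
```

Increasing the damping parameter shortens and rotates the step toward steepest descent, which is exactly the behavior a shrinking trust region radius produces in the algorithms studied here.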
Matrix Computation Problems in Trust Region Algorithms for Optimization
1998
Cited by 1 (1 self)
Trust region algorithms are a class of recently developed algorithms for solving optimization problems. The subproblems appearing in trust region algorithms usually involve minimizing a quadratic function subject to one or two quadratic constraints. In this paper we review some of the widely used trust region subproblems and some matrix computation problems related to these trust region subproblems. Key words: optimization, trust region subproblem, matrix computation. 1. Introduction. Trust region algorithms are a class of recently developed algorithms for solving optimization problems. At each iteration of a trust region algorithm, a trial step is computed by solving a trust region subproblem, which is normally an approximation to the original optimization problem with a trust region constraint that prevents the trial step from being too large. Usually, the trust region constraint has the form ‖d‖ ≤ Δ (1.1) where Δ > 0 is the trust region bound. For unconstrained optimization, the ...
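One common way to approximately solve the single-constraint subproblem min g·d + ½ d·B·d subject to ‖d‖ ≤ Δ is the dogleg path. The sketch below assumes a symmetric positive definite B; it is an illustration of the subproblem, not one of the matrix computation methods surveyed in the paper.

```python
import numpy as np

def dogleg_step(g, B, delta):
    """Approximate argmin of g·d + 0.5*d·B·d subject to ||d|| <= delta,
    assuming B is symmetric positive definite."""
    d_newton = np.linalg.solve(B, -g)
    if np.linalg.norm(d_newton) <= delta:
        return d_newton                            # full step fits inside the region
    d_cauchy = -((g @ g) / (g @ B @ g)) * g        # unconstrained minimizer along -g
    if np.linalg.norm(d_cauchy) >= delta:
        return -(delta / np.linalg.norm(g)) * g    # clipped steepest-descent step
    # Walk from the Cauchy point toward the Newton point until hitting the boundary.
    p = d_newton - d_cauchy
    a, b, c = p @ p, 2.0 * (d_cauchy @ p), d_cauchy @ d_cauchy - delta ** 2
    t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return d_cauchy + t * p

# Illustration: for B = I the exact solution is -g scaled into the region.
g = np.array([3.0, 4.0])                           # ||g|| = 5
step = dogleg_step(g, np.eye(2), delta=2.0)
```

The two linear solves (Newton point) and inner products here are precisely the kind of matrix computations whose large-scale variants the paper reviews.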
The Sequential Quadratic Programming Method
2007
Sequential (or Successive) Quadratic Programming (SQP) is a technique for the solution of Nonlinear Programming (NLP) problems. It is, as we shall see, an idealized concept, permitting and indeed necessitating many variations and modifications before becoming available as part of a reliable and efficient production computer code. In this monograph we ...