Results 1 - 7 of 7
On the convergence of the Newton/log-barrier method
Preprint ANL/MCS-P681-0897, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Ill., 1997
Cited by 17 (2 self)
Abstract. In the Newton/log-barrier method, Newton steps are taken for the log-barrier function for a fixed value of the barrier parameter until a certain convergence criterion is satisfied. The barrier parameter is then decreased and the Newton process is repeated. A naive analysis indicates that Newton's method does not exhibit superlinear convergence to the minimizer of each instance of the log-barrier function until it reaches a very small neighborhood of the minimizer. By partitioning according to the subspace of active constraint gradients, however, we show that this neighborhood is actually quite large, thus explaining why reasonably fast local convergence can be attained in practice. Moreover, we show that the overall convergence rate of the Newton/log-barrier algorithm is superlinear in the number of function/derivative evaluations, provided that the nonlinear program is formulated with a linear objective and that the schedule for decreasing the barrier parameter is related in a certain way to the convergence criterion for each Newton process.
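The two-level iteration the abstract describes — an inner Newton loop at a fixed barrier parameter, then an outer loop that shrinks the parameter — can be sketched on a toy problem. The test problem (minimize x subject to x >= 1, with a linear objective as the abstract requires), the starting point, and the loose inner tolerance are illustrative assumptions, not details from the paper:

```python
def newton_log_barrier(mu0=1.0, sigma=0.1, mu_min=1e-8):
    """Toy Newton/log-barrier loop for: minimize f(x) = x subject to x >= 1.

    The barrier function B(x; mu) = x - mu*log(x - 1) has exact minimizer
    x = 1 + mu, so the iterates should approach the solution x* = 1.
    """
    x, mu = 3.0, mu0                 # strictly feasible start (assumed)
    while mu > mu_min:
        for _ in range(50):          # inner Newton loop at fixed mu
            s = x - 1.0
            g = 1.0 - mu / s         # B'(x; mu)
            if abs(g) < 0.1 * mu:    # loose inner criterion (assumed form)
                break
            h = mu / s**2            # B''(x; mu)
            x -= g / h               # Newton step on the barrier function
            if x <= 1.0:             # safeguard: stay strictly feasible
                x = 1.0 + 0.5 * mu
        mu *= sigma                  # shrink the barrier parameter
    return x
```

Running this shows the behaviour the abstract analyses: after each decrease of mu the first Newton step overshoots (the safeguard fires), but once inside the neighbourhood of the barrier minimizer the inner loop converges in a handful of steps.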
Evolutionary computation techniques for nonlinear programming problems
 International Transactions of Operational Research
, 1994
Cited by 15 (0 self)
zbyszek@mosaic.uncc.edu The paper presents several evolutionary computation techniques and discusses their applicability to nonlinear programming problems. On the basis of this presentation we also discuss the construction of a new hybrid optimization system, Genocop II, and present its experimental results on a few test cases (nonlinear programming problems). Keywords: evolutionary computation, genetic algorithm, random algorithm, optimization technique, nonlinear programming, constrained optimization.
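One common way evolutionary methods handle nonlinear constraints is to fold constraint violation into the fitness via a penalty term. The sketch below is a generic penalty-based (1+1) evolution strategy on an assumed test problem; it is not Genocop II's actual mechanism, which is considerably more elaborate:

```python
import random

def evolve(f, penalty, x0, sigma=0.3, iters=2000, seed=0):
    """Minimal (1+1) evolution strategy for constrained minimization:
    fitness = objective + penalty for constraint violation."""
    rng = random.Random(seed)
    best = list(x0)
    best_fit = f(best) + penalty(best)
    for _ in range(iters):
        # Mutate every coordinate with Gaussian noise
        trial = [xi + rng.gauss(0.0, sigma) for xi in best]
        fit = f(trial) + penalty(trial)
        if fit < best_fit:           # keep the better of parent/offspring
            best, best_fit = trial, fit
    return best

# Hypothetical test problem: minimize x^2 + y^2 subject to x + y >= 1;
# the optimum is at (0.5, 0.5) with objective value 0.5.
obj = lambda x: x[0]**2 + x[1]**2
pen = lambda x: 1e3 * max(0.0, 1.0 - (x[0] + x[1]))**2
sol = evolve(obj, pen, [0.0, 0.0])
```

The fixed penalty weight and mutation step are deliberately crude; hybrid systems of the kind the paper discusses adapt both over the run.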
Methods for nonlinear constraints in optimization calculations
 THE STATE OF THE ART IN NUMERICAL ANALYSIS
, 1996
Iterative Methods for Ill-Conditioned Linear Systems from Optimization
, 1998
Cited by 6 (2 self)
Preconditioned conjugate-gradient methods are proposed for solving the ill-conditioned linear systems which arise in penalty and barrier methods for nonlinear minimization. The preconditioners are chosen so as to isolate the dominant cause of ill conditioning. The methods are stabilized using a restricted form of iterative refinement. Numerical results illustrate the approaches considered. Let A and H be, respectively, full-rank m by n (m <= n) and symmetric n by n real matrices, and suppose that any nonzero coefficients in this data are modest, that is, O(1). We consider the iterative solution of the linear system (H + A^T D^{-1} A) x = b, where b is modest an...
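A minimal sketch of the kind of preconditioned conjugate-gradient iteration the abstract refers to. The tiny 2-by-2 system of the form (H + A^T D^{-1} A) x = b and the simple Jacobi (diagonal) preconditioner are illustrative assumptions; the paper's preconditioners are chosen specifically to isolate the ill-conditioned subspace, which Jacobi scaling does not do:

```python
def pcg(matvec, b, precond, tol=1e-10, maxit=100):
    """Preconditioned conjugate gradients for A x = b with A symmetric
    positive definite; `precond(r)` applies the inverse preconditioner."""
    x = [0.0] * len(b)
    r = list(b)                       # residual of the zero initial guess
    z = precond(r)
    p = list(z)
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rz / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol * max(abs(bi) for bi in b):
            break                     # relative residual small enough
        z = precond(r)
        rz_next = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_next / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_next
    return x

# Barrier-style system (H + A^T D^{-1} A) x = b with a tiny D, which makes
# the matrix badly ill-conditioned: for H = 2I, A = [1 1], D = [1e-6] the
# matrix is [[2 + 1e6, 1e6], [1e6, 2 + 1e6]].
M = [[2.0 + 1e6, 1e6], [1e6, 2.0 + 1e6]]
matvec = lambda v: [sum(mij * vj for mij, vj in zip(row, v)) for row in M]
jacobi = lambda r: [r[i] / M[i][i] for i in range(len(r))]  # diagonal scaling
x = pcg(matvec, [1.0, 1.0], jacobi)   # exact solution: x_i = 1/(2 + 2e6)
```

On this 2-by-2 example CG terminates in at most two iterations regardless of conditioning; on realistic problems the choice of preconditioner dominates the iteration count, which is the point of the paper.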
In this paper we discuss the construction of Genocop II, a hybrid optimization system for general nonlinear programming problems. We present the first experimental results of the system on five test cases. These include a variety of objective functions with nonlinear constraints. The results are encouraging.
On the Accurate Determination of Search Directions for Simple Differentiable Penalty Functions
We present numerically reliable methods for the calculation of a search direction for use in sequential methods for solving nonlinear programming problems. The methods presented are easy to adapt to such problems as locating directions of negative curvature and linear infinite descent. Encouraging numerical results are included.
ON THE CONVERGENCE OF A SEQUENTIAL PENALTY FUNCTION METHOD FOR CONSTRAINED MINIMIZATION
Abstract. The convergence behaviour of a class of iterative methods for solving the constrained minimization problem is analysed. The methods are based on the sequential minimization of a simple differentiable penalty function. They are sufficiently general to ensure global convergence of the iterates to the solution of the problem at an asymptotic (two-step Q-) superlinear rate. Key words: constrained optimization, quadratic penalty function, augmented Lagrangian function, convergence analysis. AMS(MOS) subject classification: 65K05.
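The sequential quadratic-penalty scheme that this class of methods is built on can be illustrated on a one-dimensional equality-constrained problem. The test problem and the parameter schedule below are assumptions chosen for illustration, not taken from the paper:

```python
def sequential_penalty(rho0=1.0, shrink=0.1, rho_min=1e-10):
    """Minimize f(x) = x^2 subject to c(x) = x - 1 = 0 via the quadratic
    penalty P(x; rho) = x^2 + (x - 1)^2 / (2*rho).

    The exact penalty minimizers x(rho) = 1/(1 + 2*rho) approach the
    constrained solution x* = 1 as rho -> 0.
    """
    x, rho = 0.0, rho0
    while rho > rho_min:
        grad = 2.0 * x + (x - 1.0) / rho   # P'(x; rho)
        hess = 2.0 + 1.0 / rho             # P''(x; rho)
        x -= grad / hess                   # P is quadratic: one Newton step is exact
        rho *= shrink                      # tighten the penalty
    return x
```

Here each subproblem is solved exactly in one step because P is quadratic; for general nonlinear problems each penalty minimization is itself iterative, and the rate at which rho is decreased relative to the subproblem accuracy is what governs the overall convergence rate analysed in the paper.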