Results 1 – 10 of 11
A globally convergent linearly constrained Lagrangian method for nonlinear optimization
 SIAM J. Optim.
, 2002
"... Abstract. For optimization problems with nonlinear constraints, linearly constrained Lagrangian (LCL) methods solve a sequence of subproblems of the form “minimize an augmented Lagrangian function subject to linearized constraints. ” Such methods converge rapidly near a solution but may not be relia ..."
Abstract

Cited by 22 (5 self)
Abstract. For optimization problems with nonlinear constraints, linearly constrained Lagrangian (LCL) methods solve a sequence of subproblems of the form “minimize an augmented Lagrangian function subject to linearized constraints.” Such methods converge rapidly near a solution but may not be reliable from arbitrary starting points. Nevertheless, the well-known software package MINOS has proved effective on many large problems. Its success motivates us to derive a related LCL algorithm that possesses three important properties: it is globally convergent, the subproblem constraints are always feasible, and the subproblems may be solved inexactly. The new algorithm has been implemented in Matlab, with an option to use either MINOS or SNOPT (Fortran codes) to solve the linearly constrained subproblems. Only first derivatives are required. We present numerical results on a subset of the COPS, HS, and CUTE test problems, which include many large examples. The results demonstrate the robustness and efficiency of the stabilized LCL procedure.
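As a hedged illustration of the subproblem form described above (standard LCL notation assumed here, not taken verbatim from the paper): for minimizing f(x) subject to c(x) = 0, the k-th subproblem might be written

\[
\min_{x}\;\; f(x) \;-\; \lambda_k^{T} c(x) \;+\; \tfrac{\rho_k}{2}\,\|c(x)\|_2^{2}
\qquad \text{subject to} \qquad
c(x_k) + J(x_k)\,(x - x_k) = 0,
\]

where \lambda_k is the current multiplier estimate, \rho_k a penalty parameter, and J(x_k) the constraint Jacobian; the augmented Lagrangian objective is minimized only over points satisfying the linearized constraints.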
Convergence Properties of an Augmented Lagrangian Algorithm for Optimization with a Combination of General Equality and Linear Constraints
 SIAM Journal on Optimization
, 1996
"... We consider the global and local convergence properties of a class of augmented Lagrangian methods for solving nonlinear programming problems. In these methods, linear and more general constraints are handled in different ways. The general constraints are combined with the objective function in an a ..."
Abstract

Cited by 17 (0 self)
We consider the global and local convergence properties of a class of augmented Lagrangian methods for solving nonlinear programming problems. In these methods, linear and more general constraints are handled in different ways. The general constraints are combined with the objective function in an augmented Lagrangian. The iteration consists of solving a sequence of subproblems; in each subproblem the augmented Lagrangian is approximately minimized in the region defined by the linear constraints. A subproblem is terminated as soon as a stopping condition is satisfied. The stopping rules that we consider here encompass practical tests used in several existing packages for linearly constrained optimization. Our algorithm also allows different penalty parameters to be associated with disjoint subsets of the general constraints. In this paper, we analyze the convergence of the sequence of iterates generated by such an algorithm and prove global and fast linear convergence as well as showing ...
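A hedged sketch of the augmented Lagrangian referred to here (standard form, with notation assumed rather than taken from the paper): for general equality constraints c_i(x) = 0 with multiplier estimates \lambda_i and penalty parameters \mu_i (possibly shared within disjoint subsets of constraints),

\[
\Phi(x, \lambda, \mu) \;=\; f(x) \;+\; \sum_{i} \lambda_i\, c_i(x) \;+\; \sum_{i} \frac{1}{2\mu_i}\, c_i(x)^2 ,
\]

and each subproblem approximately minimizes \Phi over the region defined by the linear constraints alone, stopping as soon as the chosen inexactness test is met.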
Superlinear Convergence of Primal-Dual Interior Point Algorithms for Nonlinear Programming
, 2000
"... The local convergence properties of a class of primaldual interior point methods are analyzed. These methods are designed to minimize a nonlinear, nonconvex, objective function subject to linear equality constraints and general inequalities. They involve an inner iteration in which the logbarrier ..."
Abstract

Cited by 11 (3 self)
The local convergence properties of a class of primal-dual interior point methods are analyzed. These methods are designed to minimize a nonlinear, nonconvex, objective function subject to linear equality constraints and general inequalities. They involve an inner iteration in which the log-barrier merit function is approximately minimized subject to satisfying the linear equality constraints, and an outer iteration that specifies both the decrease in the barrier parameter and the level of accuracy for the inner minimization. It is shown that, asymptotically, for each value of the barrier parameter, solving a single primal-dual linear system is enough to produce an iterate that already matches the barrier subproblem accuracy requirements. The asymptotic rate of convergence of the resulting algorithm is Q-superlinear and may be chosen arbitrarily close to quadratic. Furthermore, this rate applies componentwise. These results hold in particular for the method described by Conn, Gould, Orb...
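A hedged sketch of the inner subproblem just described (classical log-barrier form; the notation is assumed, not quoted from the paper): for inequalities c_i(x) \ge 0, linear equalities A x = b, and barrier parameter \mu > 0,

\[
\min_{x}\;\; \varphi_{\mu}(x) \;=\; f(x) \;-\; \mu \sum_{i} \ln c_i(x)
\qquad \text{subject to} \qquad A x = b,
\]

with the outer iteration prescribing how fast \mu is decreased and how accurately each such barrier subproblem must be solved before \mu is reduced again.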
Methods for nonlinear constraints in optimization calculations
 THE STATE OF THE ART IN NUMERICAL ANALYSIS
, 1996
"... ..."
Optimization of Custom MOS Circuits by Transistor Sizing
 IEEE INTERNATIONAL CONFERENCE ON COMPUTER-AIDED DESIGN
, 1996
"... Optimization of a circuit by transistor sizing is often a slow, tedious and iterative manual process which relies on designer intuition. Circuit simulation is carried out in the inner loop of this tuning procedure. Automating the transistor sizing process is an important step towards being able to r ..."
Abstract

Cited by 9 (4 self)
Optimization of a circuit by transistor sizing is often a slow, tedious and iterative manual process which relies on designer intuition. Circuit simulation is carried out in the inner loop of this tuning procedure. Automating the transistor sizing process is an important step towards being able to rapidly design high-performance, custom circuits. JiffyTune is a new circuit optimization tool that automates the tuning task. Delay, rise/fall time, area and power targets are accommodated. Each (weighted) target can be either a constraint or an objective function. Minimax optimization is supported. Transistors can be ratioed and similar structures grouped to ensure regular layouts. Bounds on transistor widths are supported. JiffyTune uses ...
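As a hedged illustration of the kind of formulation this implies (the symbols below are assumptions for exposition, not taken from the paper): with transistor widths w kept between layout bounds, weighted timing targets T_j(w) treated as objectives, and, say, an area target held as a constraint, a minimax tuning problem could look like

\[
\min_{\,w_{\min} \le w \le w_{\max}} \;\; \max_{j} \;\alpha_j\, T_j(w)
\qquad \text{subject to} \qquad \mathrm{Area}(w) \le A_{\max},
\]

with each target movable between the objective and the constraint set depending on the designer's intent.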
Large-Scale Nonlinear Constrained Optimization: A Current Survey
, 1994
"... . Much progress has been made in constrained nonlinear optimization in the past ten years, but most largescale problems still represent a considerable obstacle. In this survey paper we will attempt to give an overview of the current approaches, including interior and exterior methods and algorithm ..."
Abstract

Cited by 9 (0 self)
Much progress has been made in constrained nonlinear optimization in the past ten years, but most large-scale problems still represent a considerable obstacle. In this survey paper we will attempt to give an overview of the current approaches, including interior and exterior methods and algorithms based upon trust regions and line searches. In addition, the importance of software, numerical linear algebra and testing will be addressed. We will try to explain why the difficulties arise, how attempts are being made to overcome them and some of the problems that still remain. Although there will be some emphasis on the LANCELOT and CUTE projects, the intention is to give a broad picture of the state-of-the-art.
A Note on Using Alternative Second-Order Models for the Subproblems Arising in Barrier Function Methods for Minimization
, 1993
"... . Inequality constrained minimization problems are often solved by considering a sequence of parameterized barrier functions. Each barrier function is approximately minimized and the relevant parameters subsequently adjusted. It is common for the estimated solution to one barrier function problem to ..."
Abstract

Cited by 7 (0 self)
Inequality constrained minimization problems are often solved by considering a sequence of parameterized barrier functions. Each barrier function is approximately minimized and the relevant parameters subsequently adjusted. It is common for the estimated solution to one barrier function problem to be used as a starting estimate for the next. However, this has unfortunate repercussions for the standard Newton-like methods applied to the barrier subproblem. In this note, we consider a class of alternative Newton methods which attempt to avoid such difficulties. Such schemes have already proved of use in the Harwell Subroutine Library quadratic programming codes VE14 and VE19.
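A hedged note on why warm-starting the Newton iteration is delicate here (classical log-barrier setting; the formula is standard and the notation is assumed, not quoted from the note): for

\[
B(x, \mu) \;=\; f(x) \;-\; \mu \sum_{i} \ln c_i(x),
\qquad
\nabla^2_{xx} B \;=\; \nabla^2 f \;-\; \sum_{i} \frac{\mu}{c_i}\,\nabla^2 c_i \;+\; \sum_{i} \frac{\mu}{c_i^{\,2}}\,\nabla c_i \nabla c_i^{T},
\]

the last term grows without bound on constraints active at the solution as \mu \to 0, so the barrier Hessian becomes severely ill-conditioned and the full Newton step taken from the previous subproblem's near-minimizer can be very poor; this is one standard explanation for the repercussions mentioned above.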
A Numerical Comparison Between the LANCELOT and MINOS packages for large-scale nonlinear optimization: the complete results
, 1997
"... This report complements another paper by the same authors, "A numerical comparison between the LANCELOT and MINOS packages for largescale nonlinear optimization ". It presents the complete numerical results on which the discussion of the MINOS/LANCELOT comparison is based. It is intended mostly ..."
Abstract

Cited by 6 (0 self)
This report complements another paper by the same authors, "A numerical comparison between the LANCELOT and MINOS packages for large-scale nonlinear optimization". It presents the complete numerical results on which the discussion of the MINOS/LANCELOT comparison is based. It is intended mostly for reference. One set of tables lists the dimensions of 913 test problems from the CUTE collection, and a second set reports the performance of both packages problem by problem. In Bongartz, Conn, Gould, Saunders and Toint (1997), the authors have presented a comparison between the default versions of the LANCELOT and MINOS packages on a set of 913 test problems extracted from the CUTE collection (Bongartz, Conn, Gould and Toint, 1995). That contribution describes the algorithms used by the packages and discusses statistical summaries of the results. Since we believe that complete data should be accessible to interested readers, the present companion report provides ...
A globally convergent Lagrangian barrier algorithm for optimization with general inequality constraints and simple bounds
 Math. of Computation
, 1997
"... Abstract. We consider the global and local convergence properties of a class of Lagrangian barrier methods for solving nonlinear programming problems. In such methods, simple bound constraints may be treated separately from more general constraints. The objective and general constraint functions are ..."
Abstract

Cited by 5 (1 self)
Abstract. We consider the global and local convergence properties of a class of Lagrangian barrier methods for solving nonlinear programming problems. In such methods, simple bound constraints may be treated separately from more general constraints. The objective and general constraint functions are combined in a Lagrangian barrier function. A sequence of such functions is approximately minimized within the domain defined by the simple bounds. Global convergence of the sequence of generated iterates to a first-order stationary point for the original problem is established. Furthermore, possible numerical difficulties associated with barrier function methods are avoided as it is shown that a potentially troublesome penalty parameter is bounded away from zero. This paper is a companion to previous work of ours on augmented Lagrangian methods.
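One common form of shifted Lagrangian barrier function, given here as a hedged sketch (the notation is assumed and need not match the paper exactly): for general inequalities c_i(x) \ge 0, multiplier estimates \lambda_i > 0, and positive shifts s_i,

\[
\Psi(x, \lambda, s) \;=\; f(x) \;-\; \sum_{i} \lambda_i\, s_i \,\ln\bigl(c_i(x) + s_i\bigr),
\]

and each outer iteration approximately minimizes \Psi over the box defined by the simple bounds, after which the multipliers and shifts are updated.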
On The Number Of Inner Iterations Per Outer Iteration Of A Globally Convergent Algorithm For Optimization With General Nonlinear Inequality Constraints And Simple Bounds
, 1992
"... . This paper considers the number of inner iterations required per outer iteration for the algorithm proposed by Conn et al. (1992b). We show that asymptotically, under suitable reasonable assumptions, a single inner iteration suffices. 1 IBM T.J. Watson Research Center Yorktown Heights, USA 2 Ru ..."
Abstract

Cited by 1 (0 self)
This paper considers the number of inner iterations required per outer iteration for the algorithm proposed by Conn et al. (1992b). We show that asymptotically, under suitable reasonable assumptions, a single inner iteration suffices. Keywords: nonlinear optimization, inequality constraints, barrier methods, complexity. In this paper, we consider the nonlinear programming problem

\[
\min_{x \in \mathbb{R}^n} \; f(x) \qquad (1.1)
\]

subject to the general constraints

\[
c_i(x) \ge 0, \qquad i = 1, \ldots, m, \qquad (1.2)
\]

and the specific simple bounds

\[
l \le x \le u. \qquad (1.3)
\]

We assume that the region B = \{x \in \mathbb{R}^n \mid l \le x \le u\} is nonempty and may be infinite. We do not rule out the possibility that further simple bounds on the variables are included amongst the general constraints (1.2) if that is deemed appropriate ...