Results 21–30 of 106
More AD of Nonlinear AMPL Models: Computing Hessian Information and Exploiting Partial Separability
In Computational Differentiation: Applications, Techniques, and …, 1996
Cited by 20 (12 self)
Abstract:
We describe computational experience with automatic differentiation of mathematical programming problems expressed in the modeling language AMPL. Nonlinear expressions are translated to loop-free code, which makes it easy to compute gradients and Jacobians by backward automatic differentiation. The nonlinear expressions may be interpreted or, to gain some evaluation speed at the cost of increased preparation time, converted to Fortran or C. We have extended the interpretive scheme to evaluate Hessian (of Lagrangian) times vector. Detecting partially separable structure (sums of terms, each depending, perhaps after a linear transformation, on only a few variables) is of independent interest, as some solvers exploit this structure. It can be detected automatically by suitable "tree walks". Exploiting this structure permits an AD computation of the entire Hessian matrix by accumulating Hessian times vector computations for each term, and can lead to a much faster computation...
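The accumulation idea in the abstract can be sketched generically (in Python rather than the paper's AMPL/C setting, with invented term lists standing in for what the "tree walks" would detect): if f(x) = sum_i f_i(x[idx_i]) with each term touching only a few variables, the full Hessian is assembled from per-term Hessian-times-vector products, here approximated by differencing each term's gradient.

```python
import numpy as np

def hessvec(grad, x, v, eps=1e-6):
    # Hessian-times-vector by forward-differencing the gradient:
    # H v ~ (grad(x + eps*v) - grad(x)) / eps
    return (grad(x + eps * v) - grad(x)) / eps

def hessian_partially_separable(terms, x):
    """Assemble the Hessian of f(x) = sum_i f_i(x[idx_i]).

    `terms` is a list of (grad_i, idx_i) pairs: the gradient of one
    elemental term and the indices of the few variables it touches.
    Each term's small Hessian is built column-by-column from
    Hessian-times-vector products and accumulated into H.
    """
    n = len(x)
    H = np.zeros((n, n))
    for grad_i, idx in terms:
        m = len(idx)
        xi = x[idx]
        for j in range(m):
            e = np.zeros(m)
            e[j] = 1.0
            # cheap because m << n for partially separable problems
            H[np.ix_(idx, [idx[j]])] += hessvec(grad_i, xi, e).reshape(m, 1)
    return H

# f(x) = x0*x1 + x2**2: two elemental terms on small index sets
terms = [
    (lambda z: np.array([z[1], z[0]]), [0, 1]),  # grad of x0*x1
    (lambda z: np.array([2.0 * z[0]]), [2]),     # grad of x2**2
]
H = hessian_partially_separable(terms, np.array([1.0, 2.0, 3.0]))
```

The cost is one Hessian-vector product per elemental variable rather than one per problem variable, which is the speedup the abstract refers to.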
Convergence Properties of an Augmented Lagrangian Algorithm for Optimization with a Combination of General Equality and Linear Constraints
SIAM Journal on Optimization, 1996
Cited by 19 (0 self)
Abstract:
We consider the global and local convergence properties of a class of augmented Lagrangian methods for solving nonlinear programming problems. In these methods, linear and more general constraints are handled in different ways. The general constraints are combined with the objective function in an augmented Lagrangian. The iteration consists of solving a sequence of subproblems; in each subproblem the augmented Lagrangian is approximately minimized in the region defined by the linear constraints. A subproblem is terminated as soon as a stopping condition is satisfied. The stopping rules that we consider here encompass practical tests used in several existing packages for linearly constrained optimization. Our algorithm also allows different penalty parameters to be associated with disjoint subsets of the general constraints. In this paper, we analyze the convergence of the sequence of iterates generated by such an algorithm and prove global and fast linear convergence as well as showin...
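The subproblem/multiplier structure described above can be caricatured on a toy problem (an illustrative sketch with invented data, not the paper's algorithm): the general nonlinear constraint joins the objective in an augmented Lagrangian, while the linear constraint is enforced exactly inside each subproblem, here by parametrizing the feasible line.

```python
def auglag_demo(mu=10.0, outer=20, inner=2000, step=1e-3):
    """Toy augmented Lagrangian iteration. Minimize f(x) = x0^2 + x1^2
    subject to the linear constraint x0 = x1, kept inside the
    subproblem via the parametrization x = (t, t), and the general
    constraint c(x) = x0^2 + x1^2 - 2 = 0, which is combined with f
    in the augmented Lagrangian phi."""
    c = lambda t: 2.0 * t * t - 2.0
    # phi(t) = f(t) + lam*c(t) + (mu/2)*c(t)^2; its t-derivative:
    dphi = lambda t, lam: 4.0 * t * (1.0 + lam + mu * c(t))
    lam, t = 0.0, 2.0
    for _ in range(outer):
        # subproblem: approximately minimize phi over the linearly
        # constrained set (a line here), stopping after a fixed budget
        for _ in range(inner):
            t -= step * dphi(t, lam)
        lam += mu * c(t)  # first-order multiplier update
    return t, lam

t_star, lam_star = auglag_demo()  # converges toward t = 1, lam = -1
```

The early-terminated inner loop mimics the paper's point that each subproblem is only solved to a stopping tolerance before the multipliers are updated.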
Inexact-Restoration Algorithm for Constrained Optimization
Journal of Optimization Theory and Applications, 1999
Cited by 19 (6 self)
Abstract:
We introduce a new model algorithm for solving nonlinear programming problems. No slack variables are introduced for dealing with inequality constraints. Each iteration of the method proceeds in two phases. In the first phase, feasibility of the current iterate is improved; in the second phase, the objective function value is reduced in an approximate feasible set. The point that results from the second phase is compared with the current point using a nonsmooth merit function that combines feasibility and optimality. This merit function includes a penalty parameter that changes between different iterations. A suitable updating procedure for this penalty parameter is included, by means of which it can be increased or decreased along different iterations. The conditions for feasibility improvement in the first phase and for optimality improvement in the second phase are mild, and large-scale implementations of the resulting method are possible. We prove, under suitable conditions, that ...
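The two-phase structure can be sketched on a toy problem (invented data, not the paper's exact method): minimize x0 + x1 on the unit circle, restoring feasibility by projection, then reducing the objective with a tangent-space step, with a simple merit function deciding acceptance.

```python
import numpy as np

def inexact_restoration_demo(steps=200, alpha=0.1, eta=0.5):
    """Two-phase caricature: minimize f(x) = x0 + x1 subject to
    c(x) = x0^2 + x1^2 - 1 = 0. Phase 1 restores feasibility by
    projecting onto the circle; phase 2 reduces f with a step in the
    tangent space; a nonsmooth merit function blending f and |c|
    decides whether to accept the phase-2 point."""
    f = lambda x: x[0] + x[1]
    c = lambda x: x @ x - 1.0
    merit = lambda x: eta * f(x) + (1.0 - eta) * abs(c(x))
    x = np.array([2.0, 0.5])
    for _ in range(steps):
        y = x / np.linalg.norm(x)        # phase 1: restoration
        g = np.array([1.0, 1.0])         # gradient of f
        tangent = g - (g @ y) * y        # project grad onto tangent space
        z = y - alpha * tangent          # phase 2: optimality step
        x = z if merit(z) <= merit(x) else y
    return x

x_star = inexact_restoration_demo()      # near (-1/sqrt(2), -1/sqrt(2))
```

Note how no slack variables appear and the phase-2 point is allowed to be slightly infeasible; the next restoration phase pulls it back.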
Degenerate Nonlinear Programming with a Quadratic Growth Condition
 Preprint ANL/MCSP7610699, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Ill
Cited by 17 (5 self)
Abstract:
We show that the quadratic growth condition and the Mangasarian-Fromovitz constraint qualification imply that local minima of nonlinear programs are isolated stationary points. As a result, when started sufficiently close to such points, an L1 exact penalty sequential quadratic programming algorithm will induce at least R-linear convergence of the iterates to such a local minimum. We construct an example of a degenerate nonlinear program with a unique local minimum satisfying the quadratic growth and the Mangasarian-Fromovitz constraint qualification but for which no positive semidefinite augmented Lagrangian exists. We present numerical results obtained using several nonlinear programming packages on this example, and discuss its implications for some algorithms. 1. Introduction. Recently, there has been renewed interest in analyzing and modifying sequential quadratic programming (SQP) algorithms for constrained nonlinear optimization for cases where the traditional regularity cond...
A Primal-Dual Algorithm for Minimizing a Non-Convex Function Subject to Bound and Linear Equality Constraints
1996
Cited by 16 (0 self)
Abstract:
A new primal-dual algorithm is proposed for the minimization of non-convex objective functions subject to simple bounds and linear equality constraints. The method alternates between a classical primal-dual step and a Newton-like step in order to ensure descent on a suitable merit function. Convergence of a well-defined subsequence of iterates is proved from arbitrary starting points. Algorithmic variants are discussed and preliminary numerical results presented. 1 IBM T.J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598, USA. Email: arconn@watson.ibm.com 2 Department for Computation and Information, Rutherford Appleton Laboratory, Chilton, Oxfordshire, OX11 0QX, England, EU. Email: nimg@letterbox.rl.ac.uk 3 Current reports available by anonymous ftp from joyousgard.cc.rl.ac.uk (internet 130.246.9.91) in the directory "pub/reports". 4 Department of Mathematics, Facultés Universitaires ND de la Paix, 61, rue de Bruxelles, B-5000 Namur, Belgium, EU. Email: pht@ma...
The gas transmission problem solved by an extension of the simplex algorithm
Management Science, 2000
Cited by 15 (1 self)
Abstract:
The problem of distributing gas through a network of pipelines is formulated as a cost minimization subject to nonlinear flow-pressure relations, material balances and pressure bounds. The solution method is based on piecewise linear approximations of the nonlinear flow-pressure relations. The approximated problem is solved by an extension of the Simplex method. The solution method is tested on real-world data and compared with alternative solution methods.
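The linearization step can be sketched generically (the uniform breakpoints, the constant K, and the quadratic pressure-drop law q*|q| below are illustrative stand-ins; the paper works with real network data inside an extended Simplex method):

```python
import numpy as np

def piecewise_linear(fn, lo, hi, nseg):
    """Build a piecewise linear approximation of `fn` on [lo, hi] with
    `nseg` uniform segments, so that the approximated problem can be
    handled by Simplex-type machinery (a generic sketch; the paper's
    breakpoints come from the actual network data)."""
    xs = np.linspace(lo, hi, nseg + 1)
    ys = fn(xs)
    return lambda x: np.interp(x, xs, ys)

# Weymouth-style pressure-drop term K*q*|q| (K is a made-up constant)
K = 2.0
drop = lambda q: K * q * np.abs(q)
pl_drop = piecewise_linear(drop, -10.0, 10.0, 40)
```

At the breakpoints the approximation is exact; between them the error shrinks quadratically with the segment length, which is what makes a modest number of segments workable.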
A New Hessian Preconditioning Method Applied to Variational Data Assimilation Experiments Using NASA General Circulation Models
1996
Cited by 14 (6 self)
Abstract:
An analysis is provided to show that the method of Courtier et al. for estimating the Hessian preconditioning is not applicable to important categories of cases involving nonlinearity. An extension of the method to cases with higher nonlinearity is proposed in the present paper by designing an algorithm that reduces errors in Hessian estimation induced by lack of validity of the tangent linear approximation. The new preconditioning method was numerically tested in the framework of variational data assimilation experiments using both the National Aeronautics and Space Administration (NASA) semi-Lagrangian semi-implicit global shallow-water equations model and the adiabatic version of the NASA/Data Assimilation Office (DAO) Goddard Observing System Version 1 (GEOS-1) general circulation model. The authors' results show that the new preconditioning method speeds up the convergence of the minimization when applied to variational data assimilation cases characterized by strong nonlinearity.
Noise Considerations in Circuit Optimization
In Proc. International Conference on Computer-Aided Design, 1998
Cited by 13 (0 self)
Abstract:
Noise can cause digital circuits to switch incorrectly and thus produce spurious results. Noise can also have adverse power, timing and reliability effects. Dynamic logic is particularly susceptible to charge-sharing and coupling noise. Thus the design and optimization of a circuit should take noise considerations into account. Such considerations are typically stated as semi-infinite constraints. In addition, the number of signals to be checked and the number of subintervals of time during which the checking must be performed can potentially be very large. Thus, the practical incorporation of noise constraints during circuit optimization is a hitherto unsolved problem. This paper describes a novel method for incorporating noise considerations during automatic circuit optimization. Semi-infinite constraints representing noise considerations are first converted to ordinary equality constraints involving time integrals, which are readily computed in the context of circuit optimization based on time-domain simulation. Next, the gradients of these integrals are computed by the adjoint method. By using an augmented Lagrangian optimization merit function, the adjoint method is applied to compute all the necessary gradients required for optimization in a single adjoint analysis, no matter how many noise measurements are considered and irrespective of the dimensionality of the problem. Numerical results are presented.
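The reformulation step, turning a semi-infinite constraint into a single integral equality, can be illustrated generically (the ramp waveform, threshold, and trapezoidal quadrature below are invented stand-ins for the simulator-computed quantities in the paper):

```python
import numpy as np

def noise_violation_integral(v, V, t):
    """Replace the semi-infinite constraint v(t) <= V for all t in a
    window by a single equality constraint h = 0, where h integrates
    the squared violation over time (h = 0 exactly when the original
    constraint holds everywhere on the grid)."""
    excess = np.maximum(v - V, 0.0) ** 2
    # trapezoidal rule over the (possibly nonuniform) time grid
    return float(np.sum(0.5 * (excess[:-1] + excess[1:]) * np.diff(t)))

t = np.linspace(0.0, 1.0, 1001)
v = t                                         # toy ramp "waveform"
h_bad = noise_violation_integral(v, 0.5, t)   # violated on (0.5, 1]
h_ok = noise_violation_integral(v, 2.0, t)    # never violated
```

Because each noise check collapses to one scalar h, an adjoint analysis can supply its gradient regardless of how many time points contributed to the integral.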
Augmented Lagrangian algorithms based on the spectral projected gradient method for solving nonlinear programming problems
Cited by 13 (3 self)
Abstract:
The Spectral Projected Gradient method (SPG) is an algorithm for large-scale bound-constrained optimization introduced recently by Birgin, Martínez and Raydan. It is based on Raydan's unconstrained generalization of the Barzilai-Borwein method for quadratics. The SPG algorithm turned out to be surprisingly effective for solving many large-scale minimization problems with box constraints. Therefore, it is natural to test its performance for solving the subproblems that appear in nonlinear programming methods based on augmented Lagrangians. In this work, augmented Lagrangian methods which use SPG as the underlying convex-constraint solver are introduced (ALSPG), and the methods are tested on two sets of problems. First, a meaningful subset of large-scale nonlinearly constrained problems from the CUTE collection is solved and compared with the performance of LANCELOT. Second, a family of location problems in the minimax formulation is solved against the package FFSQP.
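The core SPG iteration is compact enough to sketch (a bare caricature: the actual method adds a nonmonotone line search and other safeguards, and the box example below is made up):

```python
import numpy as np

def spg(grad, proj, x, iters=100, lam=1.0, lam_min=1e-10, lam_max=1e10):
    """Minimal spectral projected gradient loop. lam is the
    Barzilai-Borwein spectral steplength s's / s'y, safeguarded to
    [lam_min, lam_max]; each iterate is projected back onto the
    feasible box by `proj`."""
    g = grad(x)
    for _ in range(iters):
        x_new = proj(x - lam * g)     # projected gradient step
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        lam = min(lam_max, max(lam_min, (s @ s) / sy)) if sy > 0 else lam_max
        x, g = x_new, g_new
    return x

# Example: minimize ||x - c||^2 over the box [0, 1]^3 (made-up data)
c = np.array([0.3, -0.5, 1.7])
x_opt = spg(lambda x: 2.0 * (x - c), lambda x: np.clip(x, 0.0, 1.0),
            np.zeros(3))              # solution: c clipped to the box
```

Only gradients and projections are needed, which is why SPG slots in naturally as the subproblem solver inside an augmented Lagrangian loop.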
GALAHAD, a library of thread-safe Fortran 90 packages for large-scale nonlinear optimization
2002
Cited by 13 (2 self)
Abstract:
In this paper, we describe the design of version 1.0 of GALAHAD, a library of Fortran 90 packages for large-scale nonlinear optimization. The library particularly addresses quadratic programming problems, containing both interior-point and active-set variants, as well as tools for preprocessing such problems prior to solution. It also contains an updated version of the venerable nonlinear programming package, LANCELOT.