Results 21-30 of 82
Inexact-Restoration Algorithm for Constrained Optimization
 Journal of Optimization Theory and Applications
, 1999
Abstract

Cited by 19 (6 self)
We introduce a new model algorithm for solving nonlinear programming problems. No slack variables are introduced for dealing with inequality constraints. Each iteration of the method proceeds in two phases. In the first phase, feasibility of the current iterate is improved; in the second phase, the objective function value is reduced in an approximate feasible set. The point that results from the second phase is compared with the current point using a nonsmooth merit function that combines feasibility and optimality. This merit function includes a penalty parameter that changes between different iterations. A suitable updating procedure for this penalty parameter is included, by means of which it can be increased or decreased along different iterations. The conditions for feasibility improvement at the first phase and for optimality improvement at the second phase are mild, and large-scale implementations of the resulting method are possible. We prove that, under suitable conditions, ...
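As a rough illustration of the two-phase scheme the abstract describes, the sketch below alternates a feasibility-restoration step with a tangent-space descent step, accepting points via a merit function that combines optimality and infeasibility. The toy problem, step sizes, and penalty parameter are our own illustrative choices, not the authors':

```python
import numpy as np

# Toy instance (our choice, not from the paper):
#   minimize f(x) = x0^2 + x1^2   subject to   h(x) = x0 + x1 - 1 = 0
f = lambda x: x[0]**2 + x[1]**2
grad_f = lambda x: 2 * x
h = lambda x: x[0] + x[1] - 1.0
a = np.array([1.0, 1.0])          # gradient of the linear constraint h

def inexact_restoration(x, theta=1.0, iters=50, alpha=0.1):
    """Two-phase sketch: restore feasibility, then reduce f along the
    constraint's tangent space; accept via a nonsmooth merit function."""
    for _ in range(iters):
        # Phase 1: feasibility improvement (exact for a linear constraint)
        y = x - (h(x) / a.dot(a)) * a
        # Phase 2: descent step in the (approximately) feasible set
        g = grad_f(y)
        g_tan = g - (g.dot(a) / a.dot(a)) * a   # project out the normal part
        trial = y - alpha * g_tan
        # Merit function combining optimality and infeasibility
        merit = lambda z: f(z) + theta * abs(h(z))
        x = trial if merit(trial) <= merit(x) else y
    return x

x_star = inexact_restoration(np.array([2.0, -3.0]))
print(x_star)   # close to [0.5, 0.5], the constrained minimizer
```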
Parametric yield maximization using gate sizing based on efficient statistical power and delay gradient computation
 In ICCAD
, 2005
Abstract

Cited by 17 (2 self)
With the increased significance of leakage power and performance variability, the yield of a design is becoming constrained both by power and performance limits, thereby significantly complicating circuit optimization. In this paper, we propose a new optimization method for yield optimization under simultaneous leakage power and performance limits. The optimization approach uses a novel leakage power and performance analysis that is statistical in nature and considers the correlation between leakage power and performance to enable accurate computation of circuit yield under power and delay limits. We then propose a new heuristic approach to incrementally compute the gradient of yield with respect to gate sizes in the circuit with high efficiency and accuracy. We then show how this gradient information can be effectively used by a nonlinear optimizer to perform yield optimization. We consider both inter-die and intra-die variations with correlated and random components. The proposed approach is implemented and tested, and we demonstrate up to 40% yield improvement compared to a deterministically optimized circuit.
Convergence Properties of an Augmented Lagrangian Algorithm for Optimization with a Combination of General Equality and Linear Constraints
 SIAM Journal on Optimization
, 1996
Abstract

Cited by 17 (0 self)
We consider the global and local convergence properties of a class of augmented Lagrangian methods for solving nonlinear programming problems. In these methods, linear and more general constraints are handled in different ways. The general constraints are combined with the objective function in an augmented Lagrangian. The iteration consists of solving a sequence of subproblems; in each subproblem the augmented Lagrangian is approximately minimized in the region defined by the linear constraints. A subproblem is terminated as soon as a stopping condition is satisfied. The stopping rules that we consider here encompass practical tests used in several existing packages for linearly constrained optimization. Our algorithm also allows different penalty parameters to be associated with disjoint subsets of the general constraints. In this paper, we analyze the convergence of the sequence of iterates generated by such an algorithm and prove global and fast linear convergence as well as showing ...
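The outer/inner structure described above can be sketched on a toy problem. The objective, constraints, and parameter values here are illustrative assumptions, and the inner solver is a plain projected-gradient loop rather than the paper's subproblem machinery:

```python
import numpy as np

# Toy instance (our choice): minimize f(x) = -(x0 + x1)
# subject to the general constraint c(x) = x0^2 + x1^2 - 1 = 0,
# with the simple bounds x >= 0 kept explicitly in the subproblem.
f = lambda x: -(x[0] + x[1])
grad_f = lambda x: np.array([-1.0, -1.0])
c = lambda x: x[0]**2 + x[1]**2 - 1.0
grad_c = lambda x: 2 * x

def aug_lag_solve(x, lam=0.0, mu=10.0, outer=30, inner=200, alpha=0.01):
    for _ in range(outer):
        # Subproblem: approximately minimize the augmented Lagrangian over
        # the "easy" region (here, the bounds) by projected gradient steps.
        for _ in range(inner):
            g = grad_f(x) + (lam + mu * c(x)) * grad_c(x)
            x = np.maximum(x - alpha * g, 0.0)   # projection onto x >= 0
        lam += mu * c(x)        # first-order multiplier update
    return x

x_star = aug_lag_solve(np.array([0.5, 0.5]))
print(x_star)   # close to [0.7071, 0.7071]
```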
A Primal-Dual Algorithm for Minimizing a Non-Convex Function Subject to Bound and Linear Equality Constraints
, 1996
Abstract

Cited by 16 (0 self)
A new primal-dual algorithm is proposed for the minimization of non-convex objective functions subject to simple bounds and linear equality constraints. The method alternates between a classical primal-dual step and a Newton-like step in order to ensure descent on a suitable merit function. Convergence of a well-defined subsequence of iterates is proved from arbitrary starting points. Algorithmic variants are discussed and preliminary numerical results presented. 1 IBM T.J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598, USA. Email: arconn@watson.ibm.com 2 Department for Computation and Information, Rutherford Appleton Laboratory, Chilton, Oxfordshire, OX11 0QX, England, EU. Email: nimg@letterbox.rl.ac.uk 3 Current reports available by anonymous ftp from joyousgard.cc.rl.ac.uk (internet 130.246.9.91) in the directory "pub/reports". 4 Department of Mathematics, Facultés Universitaires N.D. de la Paix, 61, rue de Bruxelles, B-5000 Namur, Belgium, EU. Email: pht@ma...
More AD of Nonlinear AMPL Models: Computing Hessian Information and Exploiting Partial Separability
 in Computational Differentiation: Applications, Techniques, and
, 1996
Abstract

Cited by 16 (10 self)
We describe computational experience with automatic differentiation of mathematical programming problems expressed in the modeling language AMPL. Nonlinear expressions are translated to loop-free code, which makes it easy to compute gradients and Jacobians by backward automatic differentiation. The nonlinear expressions may be interpreted or, to gain some evaluation speed at the cost of increased preparation time, converted to Fortran or C. We have extended the interpretive scheme to evaluate Hessian (of Lagrangian) times vector. Detecting partially separable structure (sums of terms, each depending, perhaps after a linear transformation, on only a few variables) is of independent interest, as some solvers exploit this structure. It can be detected automatically by suitable "tree walks". Exploiting this structure permits an AD computation of the entire Hessian matrix by accumulating Hessian times vector computations for each term, and can lead to a much faster computation...
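The "Hessian times vector" evaluation can be illustrated independently of AMPL: forward-mode AD applied to a gradient routine yields H·v as the directional derivative of the gradient. Everything below (the Dual class and the toy partially separable objective) is our own minimal sketch, not the paper's implementation:

```python
# Minimal forward-mode AD with dual numbers a + b*eps, eps^2 = 0.
class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

# Partially separable toy objective f(x) = x0*x1 + x1*x2 (each term touches
# only two variables) and its hand-coded gradient.
def grad_f(x):
    return [x[1], x[0] + x[2], x[1]]

def hess_vec(x, v):
    """H v = directional derivative of grad_f at x along v."""
    xd = [Dual(xi, vi) for xi, vi in zip(x, v)]
    return [g.b for g in grad_f(xd)]

print(hess_vec([1.0, 2.0, 3.0], [1.0, 0.0, 0.0]))  # [0.0, 1.0, 0.0], column 0 of H
```

Accumulating such products for each term of a partially separable sum recovers the full Hessian, which is the structure-exploiting computation the abstract mentions.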
Noise Considerations in Circuit Optimization
 In Proc. International Conference on Computer-Aided Design
, 1998
Abstract

Cited by 13 (0 self)
Noise can cause digital circuits to switch incorrectly and thus produce spurious results. Noise can also have adverse power, timing and reliability effects. Dynamic logic is particularly susceptible to charge-sharing and coupling noise. Thus the design and optimization of a circuit should take noise considerations into account. Such considerations are typically stated as semi-infinite constraints. In addition, the number of signals to be checked and the number of subintervals of time during which the checking must be performed can potentially be very large. Thus, the practical incorporation of noise constraints during circuit optimization is a hitherto unsolved problem. This paper describes a novel method for incorporating noise considerations during automatic circuit optimization. Semi-infinite constraints representing noise considerations are first converted to ordinary equality constraints involving time integrals, which are readily computed in the context of circuit optimization based on time-domain simulation. Next, the gradients of these integrals are computed by the adjoint method. By using an augmented Lagrangian optimization merit function, the adjoint method is applied to compute all the necessary gradients required for optimization in a single adjoint analysis, no matter how many noise measurements are considered and irrespective of the dimensionality of the problem. Numerical results are presented.
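The key reformulation, replacing a semi-infinite constraint v(t) <= v_max for all t by one ordinary constraint on a time integral of the violation, can be sketched as follows; the waveforms, threshold, and quadrature are illustrative choices, not the paper's circuit-simulation setting:

```python
import numpy as np

# Sketch of the reformulation (our own toy waveforms): the semi-infinite
# constraint v(t) <= v_max for all t in [0, T] holds iff the time integral
# of the squared violation is zero.
def violation_integral(v, t, v_max):
    """Trapezoidal integral of max(0, v(t) - v_max)^2 over the samples."""
    excess = np.maximum(v - v_max, 0.0) ** 2
    return float(np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(t)))

t = np.linspace(0.0, 1.0, 1001)
clean = 0.3 * np.sin(2 * np.pi * t)   # stays below the assumed v_max = 0.5
noisy = 0.8 * np.sin(2 * np.pi * t)   # exceeds v_max on part of the period
print(violation_integral(clean, t, 0.5))      # 0.0: constraint satisfied
print(violation_integral(noisy, t, 0.5) > 0)  # True: constraint violated
```

In the paper's setting the integrand comes from time-domain circuit simulation and the gradient of the integral is obtained by a single adjoint analysis.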
Augmented Lagrangian algorithms based on the spectral projected gradient method for solving nonlinear programming problems
Abstract

Cited by 12 (3 self)
The Spectral Projected Gradient method (SPG) is an algorithm for large-scale bound-constrained optimization introduced recently by Birgin, Martínez and Raydan. It is based on Raydan's unconstrained generalization of the Barzilai-Borwein method for quadratics. The SPG algorithm turned out to be surprisingly effective for solving many large-scale minimization problems with box constraints. Therefore, it is natural to test its performance for solving the subproblems that appear in nonlinear programming methods based on augmented Lagrangians. In this work, augmented Lagrangian methods which use SPG as the underlying convex-constraint solver are introduced (ALSPG), and the methods are tested in two sets of problems. First, a meaningful subset of large-scale nonlinearly constrained problems of the CUTE collection is solved and compared with the performance of LANCELOT. Second, a family of location problems in the minimax formulation is solved against the package FFSQP.
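The basic SPG iteration, a projected gradient step with the Barzilai-Borwein (spectral) step length, can be sketched as below; this omits the nonmonotone line search of the full method, and the toy box-constrained quadratic is our own example:

```python
import numpy as np

def spg(f_grad, x, l, u, iters=100):
    """Simplified SPG: projected gradient with a safeguarded BB1 step."""
    proj = lambda z: np.clip(z, l, u)
    g = f_grad(x)
    lam = 1.0                                  # initial spectral step length
    for _ in range(iters):
        x_new = proj(x - lam * g)
        g_new = f_grad(x_new)
        s, y = x_new - x, g_new - g
        if s.dot(y) > 0:                       # BB1 step, kept in safe bounds
            lam = min(max(s.dot(s) / s.dot(y), 1e-10), 1e10)
        x, g = x_new, g_new
    return x

# Toy problem: f(x) = 0.5*||x - c||^2 over the box [0, 1]^2, c outside the box
c = np.array([2.0, -1.0])
x_star = spg(lambda x: x - c, np.array([0.5, 0.5]), 0.0, 1.0)
print(x_star)   # the projection of c onto the box, [1, 0]
```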
A heuristic for optimizing stochastic activity networks with applications to statistical digital circuit sizing
 IEEE Transactions on Circuits and Systems-I
, 2004
Abstract

Cited by 12 (4 self)
A deterministic activity network (DAN) is a collection of activities, each with some duration, along with a set of precedence constraints, which specify that activities begin only when certain others have finished. One critical performance measure for an activity network is its makespan, which is the minimum time required to complete all activities. In a stochastic activity network (SAN), the durations of the activities and the makespan are random variables. The analysis of SANs is quite involved, but can be carried out numerically by Monte Carlo analysis. This paper concerns the optimization of a SAN, i.e., the choice of some design variables that affect the probability distributions of the activity durations. We concentrate on the problem of minimizing a quantile (e.g., 95%) of the makespan, subject to constraints on the variables. This problem has many applications, ranging from project management to digital integrated circuit (IC) sizing (the latter being our motivation). While there are effective methods for optimizing DANs, the SAN optimization problem is much more difficult; the few existing methods cannot handle large-scale problems.
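The Monte Carlo analysis mentioned above can be sketched on a three-activity toy network; the topology and the exponential duration distributions are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SAN (our choice): activities A and B run in parallel, C starts after
# both finish, so the makespan is max(A, B) + C with random durations.
def makespan_samples(n):
    a = rng.exponential(1.0, n)      # duration of activity A
    b = rng.exponential(1.5, n)      # duration of activity B
    c = rng.exponential(0.5, n)      # duration of activity C
    return np.maximum(a, b) + c

# Estimate the 95% quantile of the makespan, the quantity the paper minimizes
# over design variables that shift these distributions.
q95 = np.quantile(makespan_samples(100_000), 0.95)
print(q95)
```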
GALAHAD, a Library of Thread-Safe Fortran 90 Packages for Large-Scale Nonlinear Optimization
, 2002
Abstract

Cited by 12 (2 self)
In this paper, we describe the design of version 1.0 of GALAHAD, a library of Fortran 90 packages for large-scale nonlinear optimization. The library particularly addresses quadratic programming problems, containing both interior-point and active-set variants, as well as tools for preprocessing such problems prior to solution. It also contains an updated version of the venerable nonlinear programming package, LANCELOT.
A New Hessian Preconditioning Method Applied to Variational Data Assimilation Experiments Using NASA General Circulation Models
, 1996
Abstract

Cited by 11 (6 self)
An analysis is provided to show that the method of Courtier et al. for estimating the Hessian preconditioning is not applicable to important categories of cases involving nonlinearity. An extension of the method to cases with higher nonlinearity is proposed in the present paper by designing an algorithm that reduces errors in Hessian estimation induced by lack of validity of the tangent linear approximation. The new preconditioning method was numerically tested in the framework of variational data assimilation experiments using both the National Aeronautics and Space Administration (NASA) semi-Lagrangian semi-implicit global shallow-water equations model and the adiabatic version of the NASA/Data Assimilation Office (DAO) Goddard Earth Observing System Version 1 (GEOS-1) general circulation model. The authors' results show that the new preconditioning method speeds up the convergence rate of minimization when applied to variational data assimilation cases characterized by strong nonlinearity.