Results 11-20 of 65
Interior-point algorithms, penalty methods and equilibrium problems
, 2003
Cited by 19 (4 self)
In this paper we consider the question of solving equilibrium problems—formulated as complementarity problems and, more generally, mathematical programs with equilibrium constraints (MPECs)—as nonlinear programs, using an interior-point approach. These problems pose theoretical difficulties for nonlinear solvers, including interior-point methods. We examine the use of penalty methods to get around these difficulties and provide substantial numerical results. We go on to show that penalty methods can resolve some problems that interior-point algorithms encounter in general.
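The penalty idea referred to here can be illustrated on a toy complementarity problem. The sketch below is a minimal example of my own construction (the problem instance, the weight rho, and the use of scipy's L-BFGS-B are illustrative assumptions, not the paper's setup): the complementarity condition x*y = 0 is moved into the objective as a penalty term, leaving only bound constraints for a standard solver.

```python
import numpy as np
from scipy.optimize import minimize

# Toy MPEC-style problem (illustrative, not the paper's test set):
#   min (x - 1)^2 + (y - 1)^2   s.t.   x >= 0, y >= 0, x * y = 0,
# whose solutions are (1, 0) and (0, 1). The complementarity condition
# x * y = 0 is penalized in the objective with weight rho, leaving a
# plain bound-constrained problem for a standard solver.
def solve_penalized(rho, start=(0.6, 0.3)):
    obj = lambda z: (z[0] - 1.0) ** 2 + (z[1] - 1.0) ** 2 + rho * z[0] * z[1]
    res = minimize(obj, start, method="L-BFGS-B",
                   bounds=[(0.0, None), (0.0, None)])
    return res.x

x, y = solve_penalized(rho=10.0)   # for rho > 2 the penalized and true
                                   # minimizers coincide on this example
```

On this instance the penalty is exact: for any rho > 2, the points (1, 0) and (0, 1) are stationary for the penalized problem, so no driving of rho to infinity is needed.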
Advances in Simultaneous Strategies for Dynamic Process Optimization
 Chemical Engineering Science
, 2001
Cited by 15 (2 self)
Introduction. Over the past decade, applications in dynamic simulation have increased significantly in the process industries. These are driven by strong competitive markets faced by operating companies, along with tighter specifications on process performance and regulatory limits. Moreover, the development of powerful commercial modeling tools for dynamic simulation, such as ASPEN Custom Modeler and gProms, has led to their introduction in industry alongside their widely used steady-state counterparts. Dynamic optimization is the natural extension of these dynamic simulation tools, because it automates many of the decisions required for engineering studies. Applications of dynamic simulation can be classified into offline and online tasks. Offline tasks include design to avoid undesirable transients for chemical process
Feasible Interior Methods Using Slacks for Nonlinear Optimization
 Computational Optimization and Applications
, 2002
Cited by 15 (2 self)
A slack-based feasible interior point method is described which can be derived as a modification of infeasible methods. The modification is minor for most line search methods, but trust region methods require special attention. It is shown how the Cauchy point, which is often computed in trust region methods, must be modified so that the feasible method is effective for problems containing both equality and inequality constraints. The relationship between slack-based methods and traditional feasible methods is discussed. Numerical results showing the relative performance of feasible versus infeasible interior point methods are presented.
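As a hedged illustration of the slack mechanics (not this paper's actual algorithm): an inequality c(x) >= 0 is rewritten as c(x) - s = 0 with s > 0, and an interior method must cut back any step that would drive a slack nonpositive. The helper below, with a hypothetical name of my choosing, implements the standard fraction-to-boundary step-length rule.

```python
def fraction_to_boundary(s, ds, tau=0.995):
    """Largest alpha in (0, 1] such that s + alpha * ds >= (1 - tau) * s
    componentwise: the standard rule keeping slacks strictly positive."""
    alpha = 1.0
    for si, dsi in zip(s, ds):
        if dsi < 0.0:                  # only shrinking slacks constrain alpha
            alpha = min(alpha, tau * si / (-dsi))
    return alpha

# Example: the first slack would hit zero at alpha = 0.5, so the rule
# returns 0.995 * 0.5 = 0.4975; the growing second slack imposes no limit.
alpha = fraction_to_boundary([1.0, 2.0], [-2.0, 1.0])
```

The factor tau < 1 is what keeps the iterates strictly interior, which is the property the feasible methods discussed above must preserve at every iteration.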
Mathematical Programs with Equilibrium Constraints: Automatic Reformulation and Solution via Constrained Optimization
, 2002
Cited by 13 (3 self)
Constrained optimization has been extensively used to... This paper briefly reviews some methods available to solve these problems and describes a new suite of tools for working with MPEC models. Computational results demonstrating...
Global Optimization For Constrained Nonlinear Programming
, 2001
Cited by 12 (2 self)
In this thesis, we develop constrained simulated annealing (CSA), a global optimization algorithm that asymptotically converges to constrained global minima (CGM_dn) with probability one, for solving discrete constrained nonlinear programming problems (NLPs). The algorithm is based on the necessary and sufficient condition for constrained local minima (CLM_dn) in the theory of discrete constrained optimization using Lagrange multipliers developed in our group. The theory proves the equivalence between the set of discrete saddle points and the set of CLM_dn, leading to the first-order necessary and sufficient condition for CLM_dn. To find
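CSA alternates Metropolis moves in the variable space that descend a discrete Lagrangian with ascent steps in the multipliers while the constraint is violated. The sketch below illustrates this saddle-point search on a toy discrete problem; the problem instance, cooling schedule, and final greedy polish are all illustrative assumptions, not the thesis's algorithm or its parameters.

```python
import math
import random

# Toy discrete problem: minimize f(x) = x^2 over x in {-5, ..., 5}
# subject to h(x) = |x - 3| = 0; constrained global minimum: x = 3.
f = lambda x: x * x
h = lambda x: abs(x - 3)
L = lambda x, lam: f(x) + lam * h(x)        # discrete-space Lagrangian

random.seed(0)
x, lam, T = -5, 0.0, 10.0
for _ in range(2000):
    xn = max(-5, min(5, x + random.choice([-1, 1])))    # random neighbor
    dL = L(xn, lam) - L(x, lam)
    if dL <= 0 or random.random() < math.exp(-dL / T):  # Metropolis accept
        x = xn
    if h(x) > 0:
        lam += 0.01        # ascend in the multiplier while infeasible
    T = max(T * 0.999, 1e-3)                            # geometric cooling

# Deterministic polish: greedy descent on L with a multiplier large enough
# (lam >= 10 suffices for this instance) that x = 3 is its unique minimizer.
lam = max(lam, 10.0)
improved = True
while improved:
    improved = False
    for xn in (x - 1, x + 1):
        if -5 <= xn <= 5 and L(xn, lam) < L(x, lam):
            x, improved = xn, True
            break
```

The descent-in-x / ascent-in-lambda structure is what drives the chain toward a discrete saddle point, which by the equivalence stated above is a constrained local minimum.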
A Comparison of Optimization Software for Mesh Shape-Quality Improvement Problems
, 2002
Cited by 12 (6 self)
Simplicial mesh shape quality can be improved by optimizing an objective function based on tetrahedral shape measures. If the objective function is formulated in terms of all elements in a given mesh rather than a local patch, one is confronted with a large-scale, nonlinear, constrained numerical optimization problem. We investigate the use of six general-purpose state-of-the-art solvers and two custom-developed methods to solve the resulting large-scale problem. The performance of each method is evaluated in terms of robustness, time to solution, convergence properties, and scalability on several two- and three-dimensional test cases.
GALAHAD, a library of thread-safe Fortran 90 Packages for Large-Scale Nonlinear Optimization
, 2002
Cited by 12 (2 self)
In this paper, we describe the design of version 1.0 of GALAHAD, a library of Fortran 90 packages for large-scale nonlinear optimization. The library particularly addresses quadratic programming problems, containing both interior-point and active-set variants, as well as tools for preprocessing such problems prior to solution. It also contains an updated version of the venerable nonlinear programming package, LANCELOT.
On the convergence of the Newton/log-barrier method
 Preprint ANL/MCS-P681-0897, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Ill.
, 1997
Cited by 12 (2 self)
In the Newton/log-barrier method, Newton steps are taken for the log-barrier function for a fixed value of the barrier parameter until a certain convergence criterion is satisfied. The barrier parameter is then decreased and the Newton process is repeated. A naive analysis indicates that Newton's method does not exhibit superlinear convergence to the minimizer of each instance of the log-barrier function until it reaches a very small neighborhood of the minimizer. By partitioning according to the subspace of active constraint gradients, however, we show that this neighborhood is actually quite large, thus explaining why reasonably fast local convergence can be attained in practice. Moreover, we show that the overall convergence rate of the Newton/log-barrier algorithm is superlinear in the number of function/derivative evaluations, provided that the nonlinear program is formulated with a linear objective and that the schedule for decreasing the barrier parameter is related in a certain way to the convergence criterion for each Newton process.
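The outer/inner structure of the method is easy to state concretely. The following sketch is a one-dimensional toy of my own choosing, not the paper's setting: it minimizes f(x) = x subject to x >= 1. For each fixed barrier parameter mu, damped Newton steps minimize the barrier function B(x; mu) = x - mu*log(x - 1), whose exact minimizer is x = 1 + mu; then mu is decreased and the process repeats.

```python
def newton_log_barrier(mu0=1.0, shrink=0.1, mu_min=1e-4, x=2.0):
    """Minimize f(x) = x subject to x >= 1 with the Newton/log-barrier method.
    For fixed mu, the barrier function is B(x; mu) = x - mu*log(x - 1),
    whose exact minimizer is x = 1 + mu; mu is then decreased and the
    Newton iteration restarted from the previous point."""
    mu = mu0
    while mu >= mu_min:
        for _ in range(100):                   # inner Newton loop, mu fixed
            g = 1.0 - mu / (x - 1.0)           # B'(x; mu)
            if abs(g) < 1e-8:                  # inner convergence criterion
                break
            h = mu / (x - 1.0) ** 2            # B''(x; mu) > 0
            d = -g / h                         # Newton step
            if x + d <= 1.0:                   # damp steps that leave the domain
                d = -0.99 * (x - 1.0)
            x += d
        mu *= shrink                           # decrease the barrier parameter
    return x

x_final = newton_log_barrier()                 # approaches the solution x* = 1
```

Notice the damping: after mu is reduced, the full Newton step from the previous minimizer leaves the domain x > 1, and several damped iterations pass before the fast local phase begins; this is exactly the slow-then-superlinear behavior the analysis above examines.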
Steering Exact Penalty Methods for Nonlinear Programming
, 2007
Cited by 11 (0 self)
This paper reviews, extends and analyzes a new class of penalty methods for nonlinear optimization. These methods adjust the penalty parameter dynamically; by controlling the degree of linear feasibility achieved at every iteration, they promote balanced progress toward optimality and feasibility. In contrast with classical approaches, the choice of the penalty parameter ceases to be a heuristic and is determined, instead, by a subproblem with clearly defined objectives. The new penalty update strategy is presented in the context of sequential quadratic programming (SQP) and sequential linear-quadratic programming (SLQP) methods that use trust regions to promote convergence. The paper concludes with a discussion of penalty parameters for merit functions used in line search methods.
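The steering idea, raising the penalty parameter only when the current subproblem fails to deliver enough feasibility, can be mimicked in a few lines. The sketch below is an illustrative stand-in (a toy problem, Nelder-Mead as the unconstrained solver, and a simple multiply-by-ten update), not the paper's SQP/SLQP machinery.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: min x0 + x1  s.t.  x0^2 + x1^2 = 2; solution (-1, -1).
f = lambda z: z[0] + z[1]
c = lambda z: z[0] ** 2 + z[1] ** 2 - 2.0
phi = lambda z, rho: f(z) + rho * abs(c(z))     # exact (l1) penalty function

rho = 0.1                                       # deliberately too small at first
z = np.array([0.0, 0.0])
for _ in range(10):
    z = minimize(lambda v: phi(v, rho), z, method="Nelder-Mead",
                 options={"xatol": 1e-10, "fatol": 1e-10, "maxiter": 2000}).x
    if abs(c(z)) < 1e-8:                        # feasible enough: stop steering
        break
    rho *= 10.0                                 # otherwise raise the penalty weight
```

With rho = 0.1 the penalized minimizer is far from feasible (here near (-5, -5)), so the update fires; once rho exceeds the multiplier norm (1/2 on this problem), the l1 penalty is exact and the loop terminates at the constrained solution.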
MA57 - a new code for the solution of sparse symmetric definite and indefinite systems
, 2002
Cited by 10 (1 self)
We introduce a new code for the direct solution of sparse symmetric linear equations that solves indefinite systems with 2 × 2 pivoting for stability. This code, called MA57, is in HSL 2002 and supersedes the well-used HSL code MA27. We describe the user interface in some detail and emphasize some of the novel features of MA57. These include restart facilities, matrix modification, partial solution for matrix factors, solution of multiple right-hand sides, and iterative refinement and error analysis. There are additional facilities within a Fortran 90 implementation that include the ability to identify and change pivots. Several of these facilities have been developed particularly to support optimization applications, and the performance of the code on problems arising therefrom will be presented.
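MA57's core task is a symmetric factorization A = L D L^T in which D carries 1 × 1 and 2 × 2 pivot blocks so that indefinite systems remain numerically stable. As a hedged illustration, scipy's dense `ldl` routine (a Bunch-Kaufman factorization, standing in for MA57's sparse multifrontal algorithm) shows the same idea on a small matrix:

```python
import numpy as np
from scipy.linalg import ldl   # dense Bunch-Kaufman LDL^T factorization

# A small symmetric indefinite matrix; the zero leading diagonal entry
# makes an unpivoted LDL^T break down immediately, so the factorization
# must choose 1x1 / 2x2 pivots for stability.
A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, -1.0]])

L, D, perm = ldl(A, lower=True)   # A = L @ D @ L.T, D block diagonal
assert np.allclose(L @ D @ L.T, A)

# Solving A x = b through the factors (MA57 performs the sparse analogue,
# with its own ordering, pivoting, and iterative-refinement options):
b = np.array([1.0, 2.0, 3.0])
x = np.linalg.solve(L.T, np.linalg.solve(D, np.linalg.solve(L, b)))
assert np.allclose(A @ x, b)
```

Saddle-point (KKT) matrices from optimization have exactly this indefinite structure, which is why the pivoting and refinement facilities described above matter for optimization applications.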