Results 1–10 of 32
A Trust Region Framework For Managing The Use Of Approximation Models In Optimization
 STRUCTURAL OPTIMIZATION
, 1998
Abstract

Cited by 131 (10 self)
This paper presents an analytically robust, globally convergent approach to managing the use of approximation models of various fidelity in optimization. By robust global behavior we mean the mathematical assurance that the iterates produced by the optimization algorithm, started at an arbitrary initial iterate, will converge to a stationary point or local optimizer for the original problem. The approach we present is based on the trust region idea from nonlinear programming and is shown to be provably convergent to a solution of the original high-fidelity problem. The proposed method for managing approximations in engineering optimization suggests ways to decide when the fidelity, and thus the cost, of the approximations might be fruitfully increased or decreased in the course of the optimization iterations. The approach is quite general. We make no assumptions on the structure of the original problem, in particular, no assumptions of convexity and separability, and place only mild ...
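The trust-region mechanism this abstract builds on can be sketched generically: a cheap model predicts the decrease, and the ratio of actual to predicted decrease governs how far the model is trusted. This is a minimal illustration of the trust-region idea only, not the paper's model-management framework; the Cauchy-style trial step and all constants are our choices.

```python
import numpy as np

def trust_region_minimize(f, grad, x0, delta0=1.0, max_iter=100, tol=1e-8):
    """The agreement ratio rho between actual and model-predicted decrease
    decides whether the cheap model is trusted more (larger region) or
    less (smaller region). The trial step is a plain Cauchy step here."""
    x, delta = np.asarray(x0, float), delta0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        s = -delta * g / np.linalg.norm(g)   # steepest-descent step to the boundary
        pred = -g @ s                        # decrease predicted by the linear model
        ared = f(x) - f(x + s)               # actual decrease of the true objective
        rho = ared / pred
        if rho > 0.1:                        # model agreed well enough: accept step
            x = x + s
        if rho > 0.75:
            delta *= 2.0                     # strong agreement: widen the region
        elif rho < 0.25:
            delta *= 0.5                     # poor agreement: shrink the region
    return x
```

The same loop works unchanged if `f` is expensive and the model cheap, which is the setting the paper makes rigorous.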
Trust-Region Interior-Point Algorithms For Minimization Problems With Simple Bounds
 SIAM J. CONTROL AND OPTIMIZATION
, 1995
Abstract

Cited by 56 (18 self)
Two trust-region interior-point algorithms for the solution of minimization problems with simple bounds are analyzed and tested. The algorithms scale the local model in a way similar to Coleman and Li [1]. The first algorithm is more usual in that the trust region and the local quadratic model are consistently scaled. The second algorithm proposed here uses an unscaled trust region. A global convergence result for these algorithms is given and dogleg and conjugate-gradient algorithms to compute trial steps are introduced. Some numerical examples that show the advantages of the second algorithm are presented.
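The dogleg step mentioned above can be sketched for the plain (unscaled, unconstrained) subproblem min gᵀs + ½ sᵀBs subject to ‖s‖ ≤ δ. This assumes B positive definite and omits the bound-constraint scaling that is the point of the paper; names are ours.

```python
import numpy as np

def dogleg_step(g, B, delta):
    """Combine the Cauchy (steepest-descent) point and the Newton point to
    approximately solve the trust-region subproblem (B positive definite)."""
    p_newton = -np.linalg.solve(B, g)
    if np.linalg.norm(p_newton) <= delta:
        return p_newton                        # full Newton step fits: take it
    p_cauchy = -(g @ g) / (g @ B @ g) * g      # model minimizer along -g
    if np.linalg.norm(p_cauchy) >= delta:
        return -delta * g / np.linalg.norm(g)  # even the Cauchy point leaves the region
    # Otherwise walk from the Cauchy point toward the Newton point until the
    # path crosses the boundary: solve ||p_cauchy + t d||^2 = delta^2 for t > 0.
    d = p_newton - p_cauchy
    a, b, c = d @ d, 2 * p_cauchy @ d, p_cauchy @ p_cauchy - delta**2
    t = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p_cauchy + t * d
```

On the boundary segment the returned step has norm exactly δ; when the Newton step fits, it is returned unmodified.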
On the implementation of an algorithm for large-scale equality constrained optimization
 SIAM Journal on Optimization
, 1998
Abstract

Cited by 47 (12 self)
Abstract. This paper describes a software implementation of Byrd and Omojokun’s trust region algorithm for solving nonlinear equality constrained optimization problems. The code is designed for the efficient solution of large problems and provides the user with a variety of linear algebra techniques for solving the subproblems occurring in the algorithm. Second derivative information can be used, but when it is not available, limited memory quasi-Newton approximations are made. The performance of the code is studied using a set of difficult test problems from the CUTE collection.
Trust-Region Proper Orthogonal Decomposition for Flow Control
 Institute for Computer
, 2000
Abstract

Cited by 42 (2 self)
The proper orthogonal decomposition (POD) is a model reduction technique for the simulation of physical processes governed by partial differential equations, e.g. fluid flows. It can also be used to develop reduced order control models. The essential step is the computation of POD basis functions that represent the influence of the control action on the system in order to get a suitable control model. We present an approach where the suitable reduced order model is derived successively and give global convergence results.

Keywords: proper orthogonal decomposition, flow control, reduced order modeling, trust region methods, global convergence

1. Introduction. We present a robust reduced order method for the control of complex time-dependent physical processes governed by partial differential equations (PDE). Such a control problem often is hard to solve because of the high order system that describes the state (a large number of (finite element) basis elements for every point in the time d...
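The POD basis computation the abstract refers to can be sketched via the SVD of a snapshot matrix. The energy-based truncation criterion and all names below are our choices for illustration, not details from the paper.

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Return the leading left singular vectors of the snapshot matrix:
    the smallest basis capturing the requested fraction of snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)       # cumulative energy fractions
    r = int(np.searchsorted(frac, energy)) + 1  # smallest r reaching the target
    return U[:, :r]

# usage: snapshots mixing two spatial modes compress to a rank-2 basis
n = 50
t = np.linspace(0.0, 2.0 * np.pi, 200)
mode1, mode2 = np.eye(n)[0], np.eye(n)[1]
X = np.outer(mode1, np.sin(t)) + 0.5 * np.outer(mode2, np.cos(t))
Phi = pod_basis(X)
```

The reduced-order model then evolves coefficients in `span(Phi)` instead of the full state; the paper's contribution is managing when that reduced model must be rebuilt, via a trust region.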
Surrogate-Assisted Evolutionary Optimization Frameworks for High-Fidelity Engineering Design Problems
 In Knowledge Incorporation in Evolutionary Computation
, 2004
Abstract

Cited by 26 (6 self)
Over the last decade, Evolutionary Algorithms (EAs) have emerged as a powerful paradigm for global optimization of multimodal functions. More recently, there has been significant interest in applying EAs to engineering design problems. However, in many complex engineering design problems where high-fidelity analysis models are used, each function evaluation may require a Computational Structural Mechanics (CSM), Computational Fluid Dynamics (CFD) or Computational Electromagnetics (CEM) simulation costing minutes to hours of supercomputer time. Since EAs typically require thousands of function evaluations to locate a near optimal solution, the use of EAs often becomes computationally prohibitive for this class of problems. In this paper, we present frameworks that employ surrogate models for solving computationally expensive optimization problems on a limited computational budget. In particular, the key factors responsible for the success of these frameworks are discussed. Experimental results obtained on benchmark test functions and real-world complex design problems are presented.
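The budget argument above can be made concrete with a toy sketch: a cheap surrogate screens random offspring so that only one exact (expensive) evaluation is spent per generation. The crude nearest-neighbour predictor and every parameter here are our simplifications; the chapter's frameworks use proper surrogate models and full EAs.

```python
import numpy as np

def surrogate_assisted_search(expensive_f, x0, pop=20, gens=30, sigma=0.3, seed=1):
    """One expensive evaluation per generation: a 1-nearest-neighbour
    surrogate built on the archive of exact evaluations screens a pool of
    random offspring, and only its favourite is evaluated exactly."""
    rng = np.random.default_rng(seed)
    x_best = np.asarray(x0, float)
    f_best = expensive_f(x_best)
    archive_x, archive_f = [x_best], [f_best]
    for _ in range(gens):
        cands = x_best + sigma * rng.standard_normal((pop, x_best.size))
        A, F = np.array(archive_x), np.array(archive_f)
        pred = [F[np.argmin(np.linalg.norm(A - c, axis=1))] for c in cands]
        chosen = cands[int(np.argmin(pred))]   # surrogate's favourite offspring
        f_chosen = expensive_f(chosen)         # the only exact evaluation
        archive_x.append(chosen)
        archive_f.append(f_chosen)
        if f_chosen < f_best:
            x_best, f_best = chosen, f_chosen
    return x_best, f_best
```

With `gens=30` this spends 31 exact evaluations total, versus the thousands a plain EA would need, which is the trade-off the chapter studies.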
Analysis of Inexact Trust-Region SQP Algorithms
 RICE UNIVERSITY, DEPARTMENT OF
, 2000
Abstract

Cited by 26 (2 self)
In this paper we extend the design of a class of composite-step trust-region SQP methods and their global convergence analysis to allow inexact problem information. The inexact problem information can result from iterative linear system solves within the trust-region SQP method or from approximations of first-order derivatives. Accuracy requirements in our trust-region SQP methods are adjusted based on feasibility and optimality of the iterates. Our accuracy requirements are stated in general terms, but we show how they can be enforced using information that is already available in matrix-free implementations of SQP methods. In the absence of inexactness our global convergence theory is equal to that of Dennis, El-Alem, Maciel (SIAM J. Optim., 7 (1997), pp. 177–207). If all iterates are feasible, i.e., if all iterates satisfy the equality constraints, then our results are related to the known convergence analyses for trust-region methods with inexact gradient information fo...
Superlinear Convergence And Implicit Filtering
, 1999
Abstract

Cited by 25 (3 self)
In this note we show how the implicit filtering algorithm can be coupled with the BFGS quasi-Newton update to obtain a superlinearly convergent iteration if the noise in the objective function decays sufficiently rapidly as the optimal point is approached. We show how known theory for the noise-free case can be extended and thereby provide a partial explanation for the good performance of quasi-Newton methods when coupled with implicit filtering.

Key words: noisy optimization, implicit filtering, BFGS algorithm, superlinear convergence

AMS subject classifications: 65K05, 65K10, 90C30

1. Introduction. In this paper we examine the local and global convergence behavior of the combination of the BFGS [4], [20], [17], [23] quasi-Newton method with the implicit filtering algorithm. The resulting method is intended to minimize smooth functions that are perturbed with low-amplitude noise. Our results, which extend those of [5], [15], and [6], show that if the amplitude of the noise decays ...
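A bare-bones implicit-filtering iteration can be sketched as difference-gradient descent on a shrinking stencil. The BFGS coupling that the note actually analyzes is omitted here (plain steepest descent keeps the sketch short), and the control constants are our guesses.

```python
import numpy as np

def implicit_filtering(f, x0, h0=0.5, h_min=1e-6, shrink=0.5, max_iter=200):
    """Difference-gradient descent on a shrinking stencil: when the stencil
    gradient is too small to be trusted at scale h, or no step succeeds,
    the stencil width is refined."""
    x = np.asarray(x0, float)
    h = h0
    for _ in range(max_iter):
        if h <= h_min:
            break
        # central-difference gradient on the current stencil of width h
        g = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                      for e in np.eye(x.size)])
        if np.linalg.norm(g) < h:        # "stencil failure": refine the scale
            h *= shrink
            continue
        # Armijo backtracking along the negative difference gradient
        step, fx = 1.0, f(x)
        while f(x - step * g) > fx - 1e-4 * step * (g @ g) and step > 1e-12:
            step *= 0.5
        if step <= 1e-12:
            h *= shrink                  # line search failed: refine instead
        else:
            x = x - step * g
    return x
```

The wide stencil averages out low-amplitude noise early on; shrinking `h` as the iterate improves is what the superlinear analysis in the note makes precise (with BFGS in place of steepest descent).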
Combining Trust Region and Line Search Techniques
, 1998
Abstract

Cited by 25 (3 self)
We propose an algorithm for nonlinear optimization that employs both trust region techniques and line searches. Unlike traditional trust region methods, our algorithm does not re-solve the subproblem if the trial step results in an increase in the objective function, but instead performs a backtracking line search from the failed point. Backtracking can be done along a straight line or along a curved path. We show that the new algorithm preserves the strong convergence properties of trust region methods. Numerical results are also presented.
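The straight-line variant of this idea can be sketched as follows: when the trial step fails, halve along it rather than re-solving the subproblem. A gradient-based trial step stands in for the real subproblem solve, and all constants are illustrative.

```python
import numpy as np

def tr_with_backtracking(f, grad, x0, delta=1.0, max_iter=100, tol=1e-8):
    """If the trial step fails, backtrack along it (straight-line variant)
    instead of re-solving the trust-region subproblem from scratch."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        g = grad(x)
        gn = np.linalg.norm(g)
        if gn < tol:
            break
        s = -min(delta, gn) * g / gn          # gradient-based trial step
        if f(x + s) < f(x):
            x = x + s
            delta *= 2.0                      # success: widen the region
        else:
            t = 0.5                           # failure: backtrack along s
            while f(x + t * s) >= f(x) and t > 1e-10:
                t *= 0.5
            if t > 1e-10:
                x = x + t * s
            delta = max(t * np.linalg.norm(s), 1e-12)  # shrink toward accepted length
    return x
```

The saving over a pure trust-region method is that a rejected step costs only cheap function evaluations along the old direction, not a fresh subproblem solve.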
On the Convergence Theory of Trust-Region-Based Algorithms for Equality-Constrained Optimization
, 1995
Abstract

Cited by 12 (0 self)
In this paper we analyze inexact trust region interior point (TRIP) sequential quadratic programming (SQP) algorithms for the solution of optimization problems with nonlinear equality constraints and simple bound constraints on some of the variables. Such problems arise in many engineering applications, in particular in optimal control problems with bounds on the control. The nonlinear constraints often come from the discretization of partial differential equations. In such cases the calculation of derivative information and the solution of linearized equations is expensive. Often, the solution of linear systems and derivatives are computed inexactly, yielding nonzero residuals. This paper ...
Method of Moments Using Monte Carlo Simulation
 Journal of Computational and Graphical Statistics
, 1995
Abstract

Cited by 10 (1 self)
We present a computational approach to the method of moments using Monte Carlo simulation. Simple algebraic identities are used so that all computations can be performed directly using simulation draws and computation of the derivative of the log-likelihood. We present a simple implementation using the Newton-Raphson algorithm, with the understanding that other optimization methods may be used in more complicated problems. The method can be applied to families of distributions with unknown normalizing constants and can be extended to least-squares fitting in the case that the number of moments observed exceeds the number of parameters in the model. The method can be further generalized to allow "moments" that are any function of data and parameters, including as a special case maximum likelihood for models with unknown normalizing constants or missing data. In addition to being used for estimation, our method may be useful for setting the parameters of a Bayes prior distribution by spe...
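The core scheme, matching a simulated moment to an observed one with Newton-Raphson, can be sketched for a one-parameter model. The use of common random numbers and an inverse-CDF sampler are our assumptions for the sketch, not details taken from the paper.

```python
import numpy as np

def mc_method_of_moments(sample_mean, simulate, theta0, n_sim=100_000,
                         tol=1e-8, max_iter=50, seed=0):
    """Match the simulated first moment to the observed one by Newton-Raphson.
    Common random numbers (the fixed draws u) make the simulated moment a
    smooth, deterministic function of theta, so numerical differentiation works."""
    u = 1.0 - np.random.default_rng(seed).random(n_sim)   # draws in (0, 1]
    theta = float(theta0)
    for _ in range(max_iter):
        m = simulate(theta, u).mean() - sample_mean       # moment condition
        eps = 1e-5
        dm = (simulate(theta + eps, u).mean()
              - simulate(theta - eps, u).mean()) / (2 * eps)
        step = m / dm
        theta -= step
        if abs(step) < tol:
            break
    return theta

# usage: recover the rate of an exponential distribution from its mean;
# simulate() maps uniform draws to the model via the inverse-CDF transform
simulate_exp = lambda rate, u: -np.log(u) / rate
theta_hat = mc_method_of_moments(sample_mean=0.5, simulate=simulate_exp, theta0=1.0)
```

Because `u` is fixed across Newton iterations, the only error in the estimate is the Monte Carlo error of the simulated moment itself, which is the property that makes the approach practical.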