Results 1–10 of 167
A rigorous framework for optimization of expensive functions by surrogates
 STRUCTURAL OPTIMIZATION
, 1999
Abstract

Cited by 200 (16 self)
The goal of the research reported here is to develop rigorous optimization algorithms to apply to some engineering design problems for which direct application of traditional optimization approaches is not practical. This paper presents and analyzes a framework for generating a sequence of approximations to the objective function and managing the use of these approximations as surrogates for optimization. The result is to obtain convergence to a minimizer of an expensive objective function subject to simple constraints. The approach is widely applicable because it does not require, or even explicitly approximate, derivatives of the objective. Numerical results are presented for a 31-variable helicopter rotor blade design example and for a standard optimization test example.
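A minimal sketch of the surrogate-management idea the abstract describes: fit a cheap model to a few evaluations of an "expensive" objective, minimize the surrogate, evaluate the true function at the candidate, and refit. The 1-D toy objective and the quadratic-fit surrogate are illustrative assumptions, not the paper's framework (which handles general approximation models and proves convergence).

```python
import numpy as np

# Illustrative "expensive" 1-D objective (an assumption, not from the paper).
def expensive(x):
    return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

# A few initial true evaluations.
xs = list(np.linspace(0.0, 1.0, 5))
ys = [expensive(x) for x in xs]

for _ in range(10):
    coeffs = np.polyfit(xs, ys, deg=2)          # cheap quadratic surrogate
    grid = np.linspace(0.0, 1.0, 2001)
    candidate = grid[np.argmin(np.polyval(coeffs, grid))]
    xs.append(candidate)                        # one new true evaluation
    ys.append(expensive(candidate))             # ...then refit next pass

best = xs[int(np.argmin(ys))]
print(best, min(ys))
```

The key point mirrored from the abstract: the loop never needs derivatives of `expensive`, only function values.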
High-Order Collocation Methods for Differential Equations with Random Inputs
 SIAM Journal on Scientific Computing
Abstract

Cited by 180 (9 self)
Abstract. Recently there has been a growing interest in designing efficient methods for the solution of ordinary/partial differential equations with random inputs. To this end, stochastic Galerkin methods appear to be superior to other non-sampling methods and, in many cases, to several sampling methods. However, when the governing equations take complicated forms, numerical implementations of stochastic Galerkin methods can become nontrivial and care is needed to design robust and efficient solvers for the resulting equations. On the other hand, the traditional sampling methods, e.g., Monte Carlo methods, are straightforward to implement, but they do not offer convergence as fast as stochastic Galerkin methods. In this paper, a high-order stochastic collocation approach is proposed. Similar to stochastic Galerkin methods, the collocation methods take advantage of an assumption of smoothness of the solution in random space to achieve fast convergence. However, the numerical implementation of stochastic collocation is trivial, as it requires only repetitive runs of an existing deterministic solver, similar to Monte Carlo methods. The computational cost of the collocation methods depends on the choice of the collocation points, and we present several feasible constructions. One particular choice, based on sparse grids, depends weakly on the dimensionality of the random space and is more suitable for highly accurate computations of practical applications with large dimensional random inputs. Numerical examples are presented to demonstrate the accuracy and efficiency of the stochastic collocation methods. Key words. collocation methods, stochastic inputs, differential equations, uncertainty quantification
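The "repetitive runs of an existing deterministic solver" idea can be sketched in one dimension. Here the random-input problem du/dt = -Z u, u(0) = 1, with Z ~ Uniform(-1, 1) is a stand-in example (an assumption, not from the paper), and the "deterministic solver" is simply the exact solution evaluated at Gauss-Legendre collocation points; a real application would run an ODE/PDE code once per point.

```python
import numpy as np

# Stand-in deterministic solver: exact solution u(t; z) = exp(-z t).
def deterministic_solver(z, t=1.0):
    return np.exp(-z * t)

# Gauss-Legendre collocation points on [-1, 1]; the weights sum to 2,
# so divide by 2 to average against the Uniform(-1, 1) density.
nodes, weights = np.polynomial.legendre.leggauss(8)
mean_u = np.sum(weights * deterministic_solver(nodes)) / 2.0

exact = (np.e - 1.0 / np.e) / 2.0   # analytic E[exp(-Z)] = sinh(1)
print(mean_u, exact)
```

Because the solution is smooth in Z, eight collocation points already reproduce the exact mean to near machine precision, illustrating the fast convergence the abstract contrasts with Monte Carlo.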
Computer Experiments
, 1996
Abstract

Cited by 119 (6 self)
Introduction Deterministic computer simulations of physical phenomena are becoming widely used in science and engineering. Computers are used to describe the flow of air over an airplane wing, combustion of gases in a flame, behavior of a metal structure under stress, safety of a nuclear reactor, and so on. Some of the most widely used computer models, and the ones that lead us to work in this area, arise in the design of the semiconductors used in the computers themselves. A process simulator starts with a data structure representing an unprocessed piece of silicon and simulates the steps such as oxidation, etching and ion injection that produce a semiconductor device such as a transistor. A device simulator takes a description of such a device and simulates the flow of current through it under varying conditions to determine properties of the device such as its switching speed and the critical voltage at which it switches. A circuit simulator takes a list of devices and the
Latin Hypercube Sampling and the propagation of uncertainty in analyses of complex systems
 Reliability Engineering and System Safety
, 2003
The Empirical Behavior of Sampling Methods for Stochastic Programming
 Annals of Operations Research
, 2002
Abstract

Cited by 111 (18 self)
We investigate the quality of solutions obtained from sample-average approximations to two-stage stochastic linear programs with recourse. We use a recently developed software tool executing on a computational grid to solve many large instances of these problems, allowing us to obtain high-quality solutions and to verify optimality and near-optimality of the computed solutions in various ways.
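The sample-average approximation (SAA) principle can be illustrated on a toy problem. The newsvendor instance below, minimizing E[c·x - p·min(x, D)] over order quantity x with random demand D, is a hypothetical stand-in (the paper studies large two-stage linear programs): the expectation is replaced by an average over sampled scenarios, and the resulting deterministic problem is solved.

```python
import numpy as np

# Toy newsvendor SAA (illustrative numbers, not from the paper):
# minimize E[c*x - p*min(x, D)] with demand D ~ Uniform(0, 100).
rng = np.random.default_rng(0)
c, p = 1.0, 3.0
demand = rng.uniform(0, 100, size=5000)      # one batch of sampled scenarios

def saa_objective(x):
    # Sample average replacing the true expectation.
    return np.mean(c * x - p * np.minimum(x, demand))

grid = np.linspace(0, 100, 1001)
x_star = grid[np.argmin([saa_objective(x) for x in grid])]
# The true optimum is the (1 - c/p) quantile of D: 100 * 2/3 ≈ 66.7.
print(x_star)
```

The gap between the SAA minimizer and the true quantile shrinks as the sample grows, which is exactly the solution-quality question the paper investigates empirically at much larger scale.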
Valuation of Mortgage Backed Securities Using Brownian Bridges to Reduce Effective Dimension
, 1997
Abstract

Cited by 100 (15 self)
The quasi-Monte Carlo method for financial valuation and other integration problems has error bounds of size O((log N)^k N^(-1)), or even O((log N)^k N^(-3/2)), which suggests significantly better performance than the error size O(N^(-1/2)) for standard Monte Carlo. But in high dimensional problems this benefit might not appear at feasible sample sizes. Substantial improvements from quasi-Monte Carlo integration have, however, been reported for problems such as the valuation of mortgage-backed securities, in dimensions as high as 360. We believe that this is due to a lower effective dimension of the integrand in those cases. This paper defines the effective dimension and shows in examples how the effective dimension may be reduced by using a Brownian bridge representation.
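A minimal sketch of the Brownian bridge construction behind the dimension reduction: the first normal variate fixes the terminal value of the path, and later variates fill in midpoints with geometrically shrinking variance, so most of the path's variability is carried by the first few coordinates. The dyadic-grid function below is an illustrative implementation, not the paper's code.

```python
import numpy as np

# Brownian bridge construction of a path on a dyadic grid of m steps
# (m a power of two) from m standard normals z.
def brownian_bridge_path(z, T=1.0):
    m = len(z)
    dt = T / m
    w = np.zeros(m + 1)
    w[m] = np.sqrt(T) * z[0]            # terminal value uses z[0] alone
    k, used = m, 1
    while k > 1:
        h = k // 2
        for left in range(0, m, k):
            mid, right = left + h, left + k
            # Conditional midpoint: mean of endpoints plus noise with
            # variance (interval length)/4 = h*dt/2.
            w[mid] = 0.5 * (w[left] + w[right]) \
                     + np.sqrt(h * dt / 2.0) * z[used]
            used += 1
        k = h
    return w

z = np.random.default_rng(1).standard_normal(8)
path = brownian_bridge_path(z)
print(path[-1], z[0])   # terminal value depends only on z[0]
```

In a quasi-Monte Carlo integration, the low-discrepancy coordinates with the best uniformity are then assigned to the early, high-variance bridge levels, which is how the effective dimension of the integrand is reduced.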
A review of techniques for parameter sensitivity analysis of environmental models
 ENVIRONMENTAL MONITORING AND ASSESSMENT
, 1994
Abstract

Cited by 91 (1 self)
Mathematical models are utilized to approximate various highly complex engineering, physical, environmental, social, and economic phenomena. Model parameters exerting the most influence on model results are identified through a 'sensitivity analysis'. A comprehensive review is presented of more than a dozen sensitivity analysis methods. This review is intended for those not intimately familiar with statistics or the techniques utilized for sensitivity analysis of computer models. The most fundamental of sensitivity techniques utilizes partial differentiation whereas the simplest approach requires varying parameter values one-at-a-time. Correlation analysis is used to determine relationships between independent and dependent variables. Regression analysis provides the most comprehensive sensitivity measure and is commonly utilized to build response surfaces that approximate complex models.
Latin Supercube Sampling for Very High Dimensional Simulations
, 1997
Abstract

Cited by 84 (8 self)
This paper introduces Latin supercube sampling (LSS) for very high dimensional simulations, such as arise in particle transport, finance and queuing. LSS is developed as a combination of two widely used methods: Latin hypercube sampling (LHS), and quasi-Monte Carlo (QMC). In LSS, the input variables are grouped into subsets, and a lower dimensional QMC method is used within each subset. The QMC points are presented in random order within subsets. QMC methods have been observed to lose effectiveness in high dimensional problems. This paper shows that LSS can extend the benefits of QMC to much higher dimensions, when one can make a good grouping of input variables. Some suggestions for grouping variables are given for the motivating examples. Even a poor grouping can still be expected to do as well as LHS. The paper also extends LHS and LSS to infinite dimensional problems. The paper includes a survey of QMC methods, randomized versions of them (RQMC) and previous methods for extending Q...
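A sketch of plain Latin hypercube sampling, the first ingredient of LSS: each of the d dimensions is cut into n equal strata, one point is drawn per stratum, and the strata are independently permuted per dimension so that every one-dimensional projection is evenly covered. This is a textbook implementation, not code from the paper.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n LHS points in [0, 1)^d: one point per stratum in each coordinate."""
    u = rng.random((n, d))                 # jitter within each stratum
    samples = np.empty((n, d))
    for j in range(d):
        perm = rng.permutation(n)          # random stratum order per dim
        samples[:, j] = (perm + u[:, j]) / n
    return samples

rng = np.random.default_rng(42)
x = latin_hypercube(10, 3, rng)
# Each column has exactly one point in each interval [k/10, (k+1)/10).
print(np.sort((x[:, 0] * 10).astype(int)))
```

LSS replaces the independent jitter within each group of coordinates by a randomized QMC point set, while the random permutations play the same role as in LHS: hiding the ordering between groups.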
Fast Numerical Methods for Stochastic Computations: A Review
, 2009
Abstract

Cited by 65 (2 self)
This paper presents a review of the current state-of-the-art of numerical methods for stochastic computations. The focus is on efficient high-order methods suitable for practical applications, with a particular emphasis on those based on generalized polynomial chaos (gPC) methodology. The framework of gPC is reviewed, along with its Galerkin and collocation approaches for solving stochastic equations. Properties of these methods are summarized by using results from literature. This paper also attempts to present the gPC-based methods in a unified framework based on an extension of the classical spectral methods into multidimensional random spaces.
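A one-dimensional sketch of a gPC expansion: a function of a standard normal input is projected onto probabilists' Hermite polynomials, with the projection integrals computed by Gauss-Hermite quadrature (a collocation-style projection). The target f(Z) = exp(Z) is chosen only because its coefficients are known in closed form, c_k = e^{1/2}/k!; it is not an example from the review.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Gauss-Hermite_e nodes/weights; weights sum to sqrt(2*pi), so normalize
# to average against the standard normal density.
nodes, weights = hermegauss(20)
weights = weights / np.sqrt(2 * np.pi)

degree = 6
coeffs = []
for k in range(degree + 1):
    basis = np.zeros(k + 1)
    basis[k] = 1.0
    He_k = hermeval(nodes, basis)          # He_k at the quadrature nodes
    norm_sq = float(math.factorial(k))     # E[He_k(Z)^2] = k!
    coeffs.append(np.sum(weights * np.exp(nodes) * He_k) / norm_sq)

print(coeffs[0], math.sqrt(math.e))        # c_0 = E[exp(Z)] = e^{1/2}
```

For smooth functions like this one, the coefficients decay factorially, which is the spectral convergence in random space that motivates the gPC framework.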
Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems
 SIAM Journal on Optimization
, 2004
Abstract

Cited by 55 (8 self)
A new class of algorithms for solving nonlinearly constrained mixed variable optimization problems is presented. This class combines and extends the Audet-Dennis Generalized Pattern Search (GPS) algorithms for bound constrained mixed variable optimization, and their GPS-filter algorithms for general nonlinear constraints. In generalizing existing algorithms, new theoretical convergence results are presented that reduce seamlessly to existing results for more specific classes of problems. While no local continuity or smoothness assumptions are required to apply the algorithm, a hierarchy of theoretical convergence results based on the Clarke calculus is given, in which local smoothness dictates what can be proved about certain limit points generated by the algorithm. To demonstrate its usefulness, the algorithm is applied to the design of a load-bearing thermal insulation system. We believe this is the first algorithm with provable convergence results to directly target this class of problems.
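The GPS family that this paper extends can be sketched in its simplest form: poll the 2n coordinate directions around the current point, move on any improvement, and halve the mesh after an unsuccessful poll. The code below is that basic unconstrained, continuous-variable version only; the filter for nonlinear constraints and the mixed-variable machinery of the paper are omitted.

```python
# Basic coordinate pattern search: derivative-free, poll +/- each axis.
def pattern_search(f, x, step=1.0, tol=1e-6, max_iter=10_000):
    n = len(x)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(n):
            for s in (step, -step):
                trial = list(x)
                trial[i] += s
                ft = f(trial)
                if ft < fx:                 # successful poll: move
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                     # unsuccessful poll: refine mesh
    return x, fx

# Smooth test problem with minimum at (1, -2); no derivatives are used.
xmin, fmin = pattern_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                            [5.0, 5.0])
print(xmin, fmin)
```

The convergence theory in the paper concerns exactly such sequences of mesh points: under Clarke-calculus assumptions, refining subsequences of unsuccessful polls yield stationarity results whose strength scales with the local smoothness of f.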