Results 1–10 of 20
Probabilistic Sensitivity Analysis of Complex Models: A Bayesian Approach
Journal of the Royal Statistical Society, Series B, 2002
Abstract

Cited by 45 (2 self)
...this paper, we use the weak form of this prior, p( ; ). This implies an infinite prior variance of (x), whereas in practice we expect there to be cases when the model developer can provide some proper prior knowledge about the function (·). We would not expect them to propose values for a, d, z and V in (14) directly, but suitable values can be found by asking the developer to estimate various percentiles of (x), and then finding a, d, z and V such that the implied percentiles through the Gaussian process model are similar. This process is described in detail in Oakley (2002).
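The percentile-matching step can be shown in miniature. The sketch below uses illustrative percentile values and a plain normal distribution in place of the Gaussian-process prior; it makes no claim to reproduce the actual a, d, z, V calibration of Oakley (2002), only the idea of recovering hyperparameters from elicited percentiles:

```python
# Elicitation in miniature (hypothetical numbers): suppose the developer
# states a 5th and a 95th percentile for the model output at one input.
q05, q95 = 2.0, 9.0

# For a normal prior, match the elicited percentiles to recover mean and sd:
#   q95 = mu + z95 * sigma,  q05 = mu - z95 * sigma,  z95 = Phi^{-1}(0.95).
z95 = 1.6448536269514722
mu = 0.5 * (q05 + q95)
sigma = (q95 - q05) / (2.0 * z95)

# Sanity check: the implied percentiles reproduce the elicited ones.
implied_q05 = mu - z95 * sigma
implied_q95 = mu + z95 * sigma
```

In the paper's setting the same matching is done numerically, since the percentiles implied by the Gaussian-process hyperparameters are not available in closed form.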
Numerical optimization using computer experiments
Institute for Computer ..., 1997
Abstract

Cited by 27 (9 self)
Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
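A minimal version of this surrogate-guided search can be sketched as follows. The objective, the design points, and the Gaussian correlation length are illustrative stand-ins, and a simple kriging interpolator replaces the paper's full method:

```python
import numpy as np

def kriging_predict(X, y, Xnew, length_scale=0.3, nugget=1e-10):
    """Simple kriging (GP interpolation) with a Gaussian correlation."""
    def corr(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return np.exp(-d2 / (2 * length_scale ** 2))
    K = corr(X, X) + nugget * np.eye(len(X))  # correlations among runs
    k = corr(Xnew, X)                          # correlations to new points
    return k @ np.linalg.solve(K, y)           # kriging mean prediction

# Expensive objective (stand-in): only a few evaluations are affordable.
f = lambda x: (x - 0.3) ** 2
X = np.array([0.0, 0.5, 1.0])
y = f(X)

# Grid search on the cheap surrogate picks the next point to evaluate.
grid = np.linspace(0.0, 1.0, 201)
x_next = grid[np.argmin(kriging_predict(X, y, grid))]
```

The surrogate interpolates the known function values exactly, so each new true evaluation refines it, and the grid search costs only surrogate evaluations rather than runs of the expensive objective.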
Bayesian Inference for the Uncertainty Distribution
2000
Abstract

Cited by 26 (4 self)
...this paper we are interested in the case where the computer model is computationally expensive, to the extent that the Monte Carlo approach is not practical. This is because the sample of outputs will usually need to be large, to be certain of obtaining accurate inferences about the distribution of Y. Thus we need to find a way of learning about the uncertainty distribution without having to run the algorithm a large number of times. We consider a Bayesian approach, which uses the information from each single evaluation to learn about the algorithm as a whole, and so reduces the total number of evaluations needed. One immediate question is whether deriving the distribution of Y should be of interest when the model is unlikely to predict reality correctly. Firstly, we note that even a good model can be rendered ineffective by an unknown input, if the resulting uncertainty in Y is high. In general, our goal is simply to quantify the information that is lost by not knowing the exact value of an input in a model. A decision to invest more resources in learning the true value of an input could follow from an uncertainty analysis. However, we have made the simplification here that the user of the model would only evaluate the model at X given the value of X. Even if X is known, the user may choose to run the model at a range of inputs, dependent on X, ...
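The core move, learning the distribution of Y from a handful of model runs, can be sketched as follows. The simulator, the design, and the input distribution are illustrative, and a crude polynomial surrogate stands in for the Gaussian-process emulator of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(x):                 # stand-in for an expensive computer model
    return np.sin(2.0 * x) + x

# Only a few affordable runs, at designed inputs.
design = np.linspace(-1.0, 1.0, 8)
outputs = simulator(design)

# Cheap surrogate fitted to those runs (a cubic here stands in for the
# Gaussian-process emulator; it ignores emulator uncertainty).
emulator = np.poly1d(np.polyfit(design, outputs, deg=3))

# Uncertainty analysis: propagate the input distribution through the
# surrogate instead of re-running the simulator thousands of times.
x_samples = rng.normal(0.0, 0.4, size=20_000)
y_samples = emulator(x_samples)
mean_y, sd_y = y_samples.mean(), y_samples.std()
```

Eight simulator runs replace twenty thousand; the Bayesian treatment in the paper additionally quantifies the error introduced by using the surrogate in place of the model.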
Application-driven sequential designs for simulation experiments: Kriging metamodeling
Journal of the Operational Research Society, 2004
Abstract

Cited by 20 (5 self)
This paper proposes a novel method to select an experimental design for interpolation in simulation. Although the paper focuses on Kriging in deterministic simulation, the method also applies to other types of metamodels (besides Kriging), and to stochastic simulation. The paper focuses on simulations that require much computer time, so it is important to select a design with a small number of observations. The proposed method is therefore sequential. The novelty of the method is that it accounts for the specific input/output function of the particular simulation model at hand; that is, the method is application-driven or customized. This customization is achieved through cross-validation and jackknifing. The new method is tested through two academic applications, which demonstrate that the method indeed gives better results than either sequential designs based on an approximate Kriging prediction variance formula or designs with prefixed sample sizes.
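The cross-validation/jackknife selection step can be sketched on a toy problem. Piecewise-linear interpolation via `np.interp` stands in for refitting the Kriging metamodel, and the test function, pilot design, and candidate grid are all illustrative:

```python
import numpy as np

def simulate(x):                    # stand-in for a time-consuming simulation
    return np.sin(3.0 * x)

X = list(np.linspace(0.0, 2.0, 4)) # small pilot design
Y = [simulate(x) for x in X]
candidates = np.linspace(0.0, 2.0, 41)

for _ in range(3):                  # sequential, application-driven steps
    # Leave-one-out refits: drop each run in turn and predict the candidates
    # (np.interp stands in for refitting the Kriging metamodel).
    loo_preds = []
    for i in range(len(X)):
        Xi = np.array([x for j, x in enumerate(X) if j != i])
        Yi = np.array([y for j, y in enumerate(Y) if j != i])
        order = np.argsort(Xi)
        loo_preds.append(np.interp(candidates, Xi[order], Yi[order]))
    # Jackknife spread: where the refits disagree most, the metamodel is
    # least trustworthy for THIS input/output function -- sample there next.
    spread = np.var(np.stack(loo_preds), axis=0)
    x_new = float(candidates[np.argmax(spread)])
    X.append(x_new)
    Y.append(simulate(x_new))
```

Because the spread is computed from the model's own observed outputs, the design concentrates runs where this particular function is hard to interpolate, rather than following a geometry-only space-filling rule.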
Bayesian analysis of computer code outputs
Quantitative Methods for Current Environmental Issues, 2002
Abstract

Cited by 8 (2 self)
...real-world phenomena. They are typically used to predict the corresponding real-world phenomenon, as in the following examples. Modern weather forecasting is done using enormously complex models of the atmosphere (and its interactions with land and sea). The primary intention is to predict future weather, given information about current conditions. Manufacturers of motor car engines build models to predict their behaviour. They are used to explore possible variations in engine design, and thereby to avoid the time and expense of actually building many unsuccessful variants in the search for an improved design. Water engineers build network flow models of sewer systems, in order to predict where problems of surcharging and flooding will arise under rainstorm conditions. They are then used to explore changes to the network to solve those problems. Models of atmospheric dispersion are used to predict the spread and deposition ...
Design of a Low-Boom Supersonic Business Jet Using Cokriging Approximation Models
9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, AIAA, 2002
Abstract

Cited by 8 (2 self)
...this paper we study the ability of the Cokriging method to represent functions with multiple local minima and sharp discontinuities for use in the multidimensional design of a low-boom supersonic business jet wing-body-canard configuration. Cokriging approximation models are an extension of the original Kriging method which incorporate secondary information such as the values of the gradients of the function being approximated. Provided that gradient information is available through inexpensive algorithms such as the adjoint method, this approach greatly improves on the accuracy and efficiency of the original Kriging method for high-dimensional design problems. In order to construct Cokriging approximation models, an automated Euler and Navier-Stokes based method, QSP107, has been developed to provide accurate performance and boom data with very rapid turnaround. The resulting approximations are used with a simple gradient-based optimizer to improve a multiobjective cost function with large variations in the design space. Results of sample two-dimensional test problems, together with a 15-dimensional test case, are presented and discussed. The Cokriging method is a viable alternative to quadratic response surface methods for preliminary design using a moderate number of design variables, particularly when the cost function being optimized is very nonlinear.
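The value-plus-gradient idea can be illustrated in one dimension. A Hermite-type polynomial fit stands in here for the cokriging predictor, and the objective and its "adjoint" gradient are cheap stand-ins; the point is only that each run contributes its derivative as well as its value:

```python
import numpy as np

def f(x):  return np.sin(2.0 * x)           # stand-in objective
def df(x): return 2.0 * np.cos(2.0 * x)     # cheap gradient (e.g. an adjoint)

X = np.array([0.0, 0.8, 1.6])               # only three expensive runs

# Gradient-enhanced surrogate: 3 values + 3 gradients give 6 conditions,
# enough to pin down a quintic exactly (a polynomial stands in here for
# the cokriging predictor, which consumes the same value+gradient data).
P = np.array([[x**k for k in range(6)] for x in X])                    # values
D = np.array([[k * x**(k - 1) if k > 0 else 0.0 for k in range(6)]
              for x in X])                                             # derivatives
coef = np.linalg.solve(np.vstack([P, D]), np.concatenate([f(X), df(X)]))
surrogate = np.poly1d(coef[::-1])            # poly1d wants highest degree first
```

Three runs thus buy six pieces of information, which is the efficiency argument for cokriging in high-dimensional design, where an adjoint solve returns a full gradient vector for roughly the cost of one extra function evaluation.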
Alonso, "Multiobjective Optimization using Approximation Model-Based Genetic Algorithms"
10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, AIAA, 2004
Abstract

Cited by 7 (1 self)
Realistic high-dimensional MDO problems are more likely to have multimodal search spaces, and they are also multiobjective in nature. Genetic Algorithms (GAs) are becoming popular choices for better global and multiobjective optimization frameworks to realize the full benefits of conducting MDO. One of the biggest drawbacks of GAs, however, is that they require many function evaluations to achieve a reasonable improvement within the design space. Therefore, the efficiency of GAs has to be improved in some way before they can be truly used in high-fidelity MDO. In this work, a multiobjective design optimization framework is developed by combining GAs with the Kriging approximation technique, which can produce fairly accurate global approximations to the actual design space and so provide the function evaluations efficiently. It is applied to a low-boom supersonic business jet design problem, and its results demonstrate the efficiency and applicability of the proposed design framework. Furthermore, the possibility of using the Kriging approximation models as computationally inexpensive gradient estimators to accelerate the GA process is investigated.
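The surrogate-prescreening idea can be sketched as follows. This is a bare-bones single-objective loop with inverse-distance weighting standing in for the Kriging approximation; the objective, population sizes, and mutation scale are all illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(4)

def expensive(x):                     # stand-in objective to minimize
    return (x[0] - 0.2) ** 2 + (x[1] + 0.4) ** 2

# Archive of true evaluations; the surrogate is rebuilt from it each step.
X = rng.uniform(-1.0, 1.0, size=(8, 2))
y = np.array([expensive(x) for x in X])

def surrogate(points):
    # Inverse-distance weighting stands in for the Kriging approximation.
    d = np.linalg.norm(points[:, None, :] - X[None, :, :], axis=2) + 1e-9
    w = 1.0 / d ** 2
    return (w * y).sum(axis=1) / w.sum(axis=1)

for _ in range(20):                   # GA-style loop with prescreening
    parents = X[np.argsort(y)[:4]]    # selection: current best designs
    picks = rng.integers(0, 4, size=(16, 2))
    alpha = rng.random((16, 1))
    offspring = alpha * parents[picks[:, 0]] + (1 - alpha) * parents[picks[:, 1]]
    offspring += rng.normal(0.0, 0.05, offspring.shape)   # mutation
    # Prescreen with the cheap surrogate; truly evaluate only the best one.
    best = offspring[np.argmin(surrogate(offspring))]
    X = np.vstack([X, best])
    y = np.append(y, expensive(best))
```

Sixteen offspring per generation are ranked on the surrogate, but only one true evaluation is spent per generation, which is the mechanism by which the approximation models make GAs affordable for expensive analyses.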
Parameter estimation for computationally intensive nonlinear regression with an application to climate modeling
Ann. Appl. Statist., 2008
Abstract

Cited by 6 (0 self)
Nonlinear regression is a useful statistical tool, relating observed data and a nonlinear function of unknown parameters. When the parameter-dependent nonlinear function is computationally intensive, a straightforward regression analysis by maximum likelihood is not feasible. The method presented in this paper proposes to construct a faster-running surrogate for such a computationally intensive nonlinear function, and to use it in a related nonlinear statistical model that accounts for the uncertainty associated with this surrogate. A pivotal quantity in the Earth's climate system is the climate sensitivity: the change in global temperature due to doubling of atmospheric CO2 concentrations. This, along with other climate parameters, is estimated by applying the statistical method developed in this paper, where the computationally intensive nonlinear function is the MIT 2D climate model.
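The idea can be sketched on a toy problem. A cheap closed-form function stands in for the expensive model, and linear interpolation over a coarse grid of runs stands in for the Gaussian-process surrogate; unlike the paper, this sketch ignores the surrogate's own uncertainty:

```python
import numpy as np

rng = np.random.default_rng(1)

def g(theta):                         # stand-in for the expensive model
    return np.exp(-theta)

theta_true = 1.3
data = g(theta_true) + rng.normal(0.0, 0.005, size=25)   # noisy observations

# Affordable runs of the "expensive" model on a coarse parameter grid.
grid = np.linspace(0.5, 2.0, 6)
runs = g(grid)

# Fast surrogate for g (linear interpolation in place of a GP surrogate).
surrogate = lambda t: np.interp(t, grid, runs)

# Least-squares estimate of theta computed entirely on the surrogate.
fine = np.linspace(0.5, 2.0, 1501)
sse = [((data - surrogate(t)) ** 2).sum() for t in fine]
theta_hat = fine[np.argmin(sse)]
```

Only six model runs are needed; the price is a small surrogate-induced bias in the estimate, which the paper's statistical model quantifies rather than ignores.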
Detecting Near Linearity in High Dimensions
1998
Abstract

Cited by 3 (2 self)
This paper presents a quasi-regression method for determining the degree of linearity in a function. Quasi-regression estimates regression coefficients without matrix inversion. For a given number n of observations, quasi-regression is usually less efficient than ordinary regression. But for functions of d variables, the cost of linear regression grows as O(nd ...
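A minimal sketch of the estimator, under assumptions made for illustration: a hypothetical mostly-linear test function, and inputs uniform on [-1, 1]^d so the coordinates are orthogonal with variance 1/3 and each coefficient reduces to a plain average:

```python
import numpy as np

rng = np.random.default_rng(2)

d, n = 5, 40_000
X = rng.uniform(-1.0, 1.0, size=(n, d))

def f(x):                                # mostly linear test function
    return 2.0 * x[:, 0] - 1.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 1]

y = f(X)

# Quasi-regression: with inputs uniform on [-1, 1] the coordinates are
# orthogonal, so each coefficient is an average -- no d x d matrix to
# form or invert.  Var(x_j) = 1/3, hence the factor 3.
beta0 = y.mean()
beta = 3.0 * (X * y[:, None]).mean(axis=0)

# Degree of linearity: share of variance captured by the linear part.
var_total = y.var()
var_linear = (beta ** 2).sum() / 3.0     # sum_j beta_j^2 * Var(x_j)
linearity = var_linear / var_total
```

The per-coefficient cost is O(n) regardless of how the other coordinates enter, which is the advantage over ordinary regression when d is large.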
Value of information for complex costeffectiveness models
2002
Abstract

Cited by 1 (1 self)
We show how to perform a sensitivity analysis on an economic model, within the framework of expected value of perfect information (EVPI), based on a relatively small number of model runs. The method is considerably more efficient than Monte Carlo methods, and will be ideally suited to computationally expensive economic models: any model that requires a non-trivial computing time for one run at a single set of input parameters. The basis of the approach involves the use of Gaussian processes to obtain fast approximations to the model itself and to the expectations required for calculating EVPIs. We demonstrate the power of our method with a model for evaluating different treatment strategies for gastroesophageal reflux disease.
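The EVPI quantity itself can be illustrated with a toy decision problem. The two strategies and their net-benefit functions below are hypothetical closed-form stand-ins; in the paper each net-benefit evaluation is an expensive model run, and a Gaussian-process approximation supplies the required expectations:

```python
import numpy as np

rng = np.random.default_rng(3)

# Net benefit of two treatment strategies as a function of an uncertain
# input theta (cheap hypothetical stand-ins for the economic model).
def nb(theta):
    return np.column_stack([10.0 + 0.0 * theta,     # strategy A: insensitive
                            8.0 + 4.0 * theta])     # strategy B: input-sensitive

theta = rng.normal(0.5, 0.5, size=100_000)
benefits = nb(theta)

# Expected value of perfect information about theta:
#   EVPI = E[max_d NB(d, theta)] - max_d E[NB(d, theta)]
evpi = benefits.max(axis=1).mean() - benefits.mean(axis=0).max()
```

A positive EVPI bounds what it is worth paying to resolve the uncertainty in theta before choosing a strategy; the point of the Gaussian-process approach is to compute both expectations without the large Monte Carlo sample used here.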