Results 1–10 of 110
Factorial Sampling Plans for Preliminary Computational Experiments
 Technometrics
, 1991
Abstract
Cited by 150 (0 self)
A computational model is a representation of some physical or other system of interest, first expressed mathematically and then implemented in the form of a computer program; it may be viewed as a function of inputs that, when evaluated, produces outputs. Motivation for this article comes from computational models that are deterministic, complicated enough to make classical mathematical analysis impractical, and that have a moderate-to-large number of inputs. The problem of designing computational experiments to determine which inputs have important effects on an output is considered. The proposed experimental plans are composed of individually randomized one-factor-at-a-time designs, and data analysis is based on the resulting random sample of observed elementary effects, those changes in an output due solely to changes in a particular input. Advantages of this approach include a lack of reliance on assumptions of relative sparsity of important inputs, monotonicity of outputs with respect to inputs, or adequacy of a low-order polynomial as an approximation to the computational model.
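The elementary-effects screening described in this abstract can be sketched in a few lines. The function name, the grid spacing, and the summary statistics below are illustrative choices for a minimal sketch, not the paper's exact construction:

```python
import numpy as np

def morris_elementary_effects(f, k, r, delta=0.25, levels=4, rng=None):
    """Estimate elementary effects of f on [0, 1]^k using r randomized
    one-factor-at-a-time trajectories (a simplified Morris-style screen)."""
    rng = np.random.default_rng(rng)
    # base grid for starting points, leaving room for the +delta step
    grid = np.arange(levels - 1) / (levels - 1)
    effects = [[] for _ in range(k)]
    for _ in range(r):
        x = grid[rng.integers(0, len(grid), size=k)].astype(float)
        fx = f(x)
        for i in rng.permutation(k):  # perturb one input at a time, random order
            x_new = x.copy()
            x_new[i] += delta
            f_new = f(x_new)
            effects[i].append((f_new - fx) / delta)  # one elementary effect
            x, fx = x_new, f_new
    ee = [np.asarray(e) for e in effects]
    # mean |EE| flags important inputs; the spread flags nonlinearity/interaction
    return (np.array([np.abs(e).mean() for e in ee]),
            np.array([e.std() for e in ee]))
```

For a purely linear function the elementary effect of each input is constant, so the spread statistic is zero and the mean recovers the slope.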
Computer Experiments
, 1996
Abstract
Cited by 119 (6 self)
Introduction: Deterministic computer simulations of physical phenomena are becoming widely used in science and engineering. Computers are used to describe the flow of air over an airplane wing, combustion of gases in a flame, behavior of a metal structure under stress, safety of a nuclear reactor, and so on. Some of the most widely used computer models, and the ones that lead us to work in this area, arise in the design of the semiconductors used in the computers themselves. A process simulator starts with a data structure representing an unprocessed piece of silicon and simulates steps such as oxidation, etching, and ion injection that produce a semiconductor device such as a transistor. A device simulator takes a description of such a device and simulates the flow of current through it under varying conditions to determine properties of the device such as its switching speed and the critical voltage at which it switches. A circuit simulator takes a list of devices and the ...
Latin Hypercube Sampling and the propagation of uncertainty in analyses of complex systems
 Reliability Engineering and System Safety
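Latin hypercube sampling itself is simple to implement: each of the n equal-width strata of every input dimension receives exactly one point. A minimal sketch (the function name and stratum convention are illustrative):

```python
import numpy as np

def latin_hypercube(n, d, rng=None):
    """Draw n points in [0, 1]^d with exactly one point in each of the
    n equal-width strata of every coordinate."""
    rng = np.random.default_rng(rng)
    u = rng.random((n, d))  # one uniform draw per (stratum, dimension)
    # an independent random permutation of strata for each dimension
    perms = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (perms + u) / n
```

Projecting the sample onto any single axis recovers one point per stratum, which is what gives LHS its variance-reduction property for propagating input uncertainty.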
Flexibility and Efficiency Enhancements for Constrained Global Design Optimization with Kriging Approximations
, 2002
Uncertainty Analysis and other Inference Tools for Complex Computer Codes
, 1998
Abstract
Cited by 36 (8 self)
This paper builds on work by Haylock and O'Hagan which developed a Bayesian approach to uncertainty analysis. The generic problem is to make posterior inference about the output of a complex computer code, and the specific problem of uncertainty analysis is to make inference when the "true" values of the input parameters are unknown. Given the distribution of the input parameters (which is often a subjective distribution derived from expert opinion), we wish to make inference about the implied distribution of the output. The computer code is sufficiently complex that the time to compute the output for any input configuration is substantial. The Bayesian approach was shown to improve dramatically on the classical approach, which is based on drawing a sample of values of the input parameters and thereby obtaining a sample from the output distribution. We review the basic Bayesian approach to the generic problem of inference for complex computer codes, and present some recent advances: inference about the distribution and quantile functions of the uncertainty distribution, calibration of models, and the use of runs of the computer code at different levels of complexity to make efficient use of the quicker, cruder versions of the code. The emphasis is on practical applications. Keywords: COMPUTATIONAL EXPERIMENT; SIMULATION; GAUSSIAN PROCESS; SENSITIVITY ANALYSIS; UNCERTAINTY DISTRIBUTION; CALIBRATION; MULTILEVEL CODES; MODEL INADEQUACY.
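The setting is easy to make concrete with a cheap emulator standing in for the expensive code: fit a Gaussian-process interpolator to a few code runs, then push the input distribution through the emulator. Everything below (the kernel, the length-scale, the toy "code") is an illustrative sketch, not the paper's construction:

```python
import numpy as np

def rbf_kernel(A, B, length=0.3):
    # squared-exponential covariance between two point sets
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def gp_emulator(X, y, jitter=1e-8):
    """Fit a zero-mean GP interpolator to code runs (X, y);
    return the posterior-mean predictor."""
    K = rbf_kernel(X, X) + jitter * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    return lambda Xnew: rbf_kernel(Xnew, X) @ alpha

# stand-in for an expensive deterministic code (real codes take hours per run)
code = lambda x: np.sin(2 * np.pi * x[:, 0]) + x[:, 1]

rng = np.random.default_rng(0)
X_train = rng.random((30, 2))                    # a handful of affordable runs
emulator = gp_emulator(X_train, code(X_train))

# uncertainty analysis: propagate the (here uniform) input distribution
# through the cheap emulator instead of the code itself
X_mc = rng.random((10_000, 2))
output_sample = emulator(X_mc)
print(output_sample.mean(), output_sample.std())
```

The Bayesian treatment in the paper goes further, using the GP's posterior uncertainty rather than just its mean, but the mean-based Monte Carlo above shows why an emulator makes the classical sampling approach affordable.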
Choosing the Sample Size of a Computer Experiment: A Practical Guide
, 2008
Abstract
Cited by 31 (1 self)
We produce reasons and evidence supporting the informal rule that the number of runs for an effective initial computer experiment should be about 10 times the input dimension. Our arguments quantify two key characteristics of computer codes that affect the sample size required for a desired level of accuracy when approximating the code via a Gaussian process (GP). The first characteristic is the total sensitivity of a code output variable to all input variables. The second corresponds to the way this total sensitivity is distributed across the input variables, specifically the possible presence of a few prominent input factors and many impotent ones (effect sparsity). Both measures relate directly to the correlation structure in the GP approximation of the code. In this way, the article moves towards a more formal treatment of sample size for a computer experiment. The evidence supporting these arguments stems primarily from a simulation study and from specific codes modeling climate and ligand activation of G-protein.
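The informal n = 10d rule is easy to operationalize. A minimal sketch that pairs it with a best-of-several maximin Latin hypercube, a common choice for initial designs (the candidate count, distance criterion, and function name are illustrative assumptions, not from the paper):

```python
import numpy as np

def initial_design(d, runs_per_dim=10, candidates=50, rng=None):
    """Initial computer-experiment design following the informal n = 10*d
    rule: best of several random Latin hypercubes by maximin distance."""
    rng = np.random.default_rng(rng)
    n = runs_per_dim * d
    best, best_score = None, -np.inf
    for _ in range(candidates):
        # random Latin hypercube: one point per stratum in each dimension
        X = np.column_stack([(rng.permutation(n) + rng.random(n)) / n
                             for _ in range(d)])
        # smallest intersite distance (diagonal masked out)
        dmin = np.min(np.linalg.norm(X[:, None] - X[None, :], axis=-1)
                      + np.eye(n) * 1e9)
        if dmin > best_score:
            best, best_score = X, dmin
    return best
```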
Algorithmic construction of optimal symmetric Latin hypercube designs
 Journal of Statistical Planning and Inference
, 2000
Abstract
Cited by 29 (1 self)
We propose symmetric Latin hypercubes for the design of computer experiments. The goal is to offer a compromise between computing effort and design optimality. The proposed class of designs has some advantages over the regular Latin hypercube design with respect to criteria such as entropy and the minimum intersite distance. An exchange algorithm is proposed for constructing optimal symmetric Latin hypercube designs. This algorithm is compared with two existing algorithms by Park (1994) and Morris and Mitchell (1995). Some examples, including a real case study in the automotive industry, are used to illustrate the performance of the new designs and the algorithms.
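The symmetry in question is reflection symmetry: for every design row x, the reflected row (n-1) - x is also in the design. Such a design can be generated directly; the construction below is an illustrative sketch of the design class, not the paper's exchange algorithm:

```python
import numpy as np

def symmetric_lhd(n, d, rng=None):
    """Random symmetric Latin hypercube: each column is a permutation of
    0..n-1, and for every design row x the reflected row (n-1)-x also
    appears (n must be even)."""
    assert n % 2 == 0
    rng = np.random.default_rng(rng)
    cols = []
    for _ in range(d):
        half = rng.permutation(n // 2)       # assign a level pair {l, n-1-l} per run
        use_low = rng.random(n // 2) < 0.5   # use the low or high member first
        top = np.where(use_low, half, n - 1 - half)
        # mirror the first n/2 runs to produce the second half
        cols.append(np.concatenate([top, (n - 1 - top)[::-1]]))
    return np.column_stack(cols)
```

Because each level pair {l, n-1-l} is used exactly once per column, the result is still a Latin hypercube; the reflection pairing is what buys the entropy and maximin advantages the abstract mentions.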
Design optimization of hierarchically decomposed multilevel system under uncertainty
 Proceedings of the ASME 2004 Design Engineering Technical Conferences, Salt Lake City, Utah, 28 September–2 October, DETC2004/DAC57357
, 2004
Abstract
Cited by 25 (12 self)
This paper presents a methodology for design optimization of decomposed systems in the presence of uncertainties. We extend the analytical target cascading (ATC) formulation to probabilistic design by treating stochastic quantities as random variables and parameters and posing reliability-based design constraints. We model the propagation of uncertainty throughout the multilevel hierarchy of elements that comprise the decomposed system by using the advanced mean value (AMV) method to generate the required probability distributions of nonlinear responses. We utilize appropriate metamodeling techniques for simulation-based design problems. A simple yet illustrative hierarchical bi-level engine design problem is used to demonstrate the proposed methodology.
On the Use of Statistics in Design and the Implications for Deterministic Computer Experiments
 in Design Theory and Methodology  DTM'C97
, 1997
Abstract
Cited by 21 (2 self)
Perhaps the most prevalent use of statistics in engineering design is through Taguchi's parameter and robust design, using orthogonal arrays to compute signal-to-noise ratios in a process of design improvement. In our view, however, there is an equally exciting use of statistics in design that could become just as prevalent: it is the concept of metamodeling, whereby statistical models are built to approximate detailed computer analysis codes. Although computers continue to get faster, our analysis codes always seem to keep pace, so that their computational time remains nontrivial. Through metamodeling, approximations of these codes are built that are orders of magnitude cheaper to run. These metamodels can then be linked to optimization routines for fast analysis, or they can serve as a bridge for integrating codes across different domains. In this paper we first review metamodeling techniques that encompass the Design of Experiments, Response Surface Methodology, Taguchi methods, neural networks, inductive learning, and kriging. We discuss their existing applications in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of metamodeling techniques in given situations and how common pitfalls can be avoided.
Adaptive designs of experiments for accurate approximation of a target region
 Journal of Mechanical Design
, 2010
Abstract
Cited by 17 (4 self)
The objective of the present work is to provide a methodology to construct a design of experiments such that the metamodel accurately approximates the vicinity of a boundary in design space defined by a target value of the function of interest. Mourelatos et al. [5] used a combination of global and local metamodels to first detect the critical regions and then obtain a locally accurate approximation. Ranjan et al. [6] proposed a modified version of the famous EGO algorithm (Efficient Global Optimization, [7]) to sequentially explore the domain region along a contour line. Tu et al. used a modified D-optimal strategy for boundary-focused polynomial regression [8]. Vazquez and Bect [9] proposed an iterative strategy for accurate computation of a probability of failure based on Kriging. In this paper, we present an alternative criterion to choose the experiments sequentially, based on an explicit trade-off between exploration of the target region (the vicinity of the contour line) and reduction of the global uncertainty (prediction variance) in the metamodel. The paper is organized as follows: in Section 2, the Kriging model and the framework of design of experiments are described. In Section 3, the original criterion for selecting experiments is presented, followed by its associated sequential ...
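A toy version of such an exploration/targeting trade-off can be written with a simple GP: weight the predictive variance by how close the predicted mean is to the target level. The kernel, the Gaussian weighting, and the function names below are illustrative assumptions for a sketch, not the criterion proposed in the paper:

```python
import numpy as np

def rbf(A, B, length=0.2):
    # unit-variance squared-exponential covariance
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def next_point(X, y, candidates, target, jitter=1e-8):
    """Pick the candidate balancing contour targeting and global uncertainty:
    large posterior variance AND posterior mean close to the target level."""
    K = rbf(X, X) + jitter * np.eye(len(X))
    Kinv = np.linalg.inv(K)
    Ks = rbf(candidates, X)
    mean = Ks @ Kinv @ y
    var = np.clip(1.0 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks), 0.0, None)
    # Gaussian weight concentrates sampling near the estimated contour f = target
    score = var * np.exp(-0.5 * (mean - target) ** 2 / (var + 1e-12))
    return candidates[np.argmax(score)]
```

On a 1-D example with runs at the two endpoints, the criterion avoids the already-observed points (zero variance) and pulls the next run toward the interior where the predicted value can plausibly cross the target.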