Results 1–10 of 50
Computer Experiments
, 1996
"... Introduction Deterministic computer simulations of physical phenomena are becoming widely used in science and engineering. Computers are used to describe the flow of air over an airplane wing, combustion of gasses in a flame, behavior of a metal structure under stress, safety of a nuclear reactor, a ..."
Abstract

Cited by 67 (5 self)
Introduction Deterministic computer simulations of physical phenomena are becoming widely used in science and engineering. Computers are used to describe the flow of air over an airplane wing, combustion of gases in a flame, behavior of a metal structure under stress, safety of a nuclear reactor, and so on. Some of the most widely used computer models, and the ones that lead us to work in this area, arise in the design of the semiconductors used in the computers themselves. A process simulator starts with a data structure representing an unprocessed piece of silicon and simulates the steps such as oxidation, etching and ion injection that produce a semiconductor device such as a transistor. A device simulator takes a description of such a device and simulates the flow of current through it under varying conditions to determine properties of the device such as its switching speed and the critical voltage at which it switches. A circuit simulator takes a list of devices and the
Accelerating Evolutionary Algorithms with Gaussian Process Fitness Function Models
 IEEE Transactions on Systems, Man and Cybernetics
, 2004
"... We present an overview of evolutionary algorithms that use empirical models of the fitness function to accelerate convergence, distinguishing between Evolution Control and the Surrogate Approach. We describe the Gaussian process model and propose using it as an inexpensive fitness function surrogate ..."
Abstract

Cited by 35 (1 self)
We present an overview of evolutionary algorithms that use empirical models of the fitness function to accelerate convergence, distinguishing between Evolution Control and the Surrogate Approach. We describe the Gaussian process model and propose using it as an inexpensive fitness function surrogate. Implementation issues such as efficient and numerically stable computation, exploration vs. exploitation, local modeling, multiple objectives and constraints, and failed evaluations are addressed. Our resulting Gaussian Process Optimization Procedure (GPOP) clearly outperforms other evolutionary strategies on standard test functions as well as on a real-world problem: the optimization of stationary gas turbine compressor profiles.
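The surrogate pre-screening idea in this abstract can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's GPOP implementation: a Gaussian process with a squared-exponential kernel is fit to already-evaluated individuals, and offspring are ranked by a lower confidence bound so only the most promising are passed to the expensive fitness function. The kernel length scale, the `kappa` trade-off parameter, and all function names are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(A, B, length=0.5):
    # Squared-exponential covariance between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length**2))

def gp_fit(X, y, noise=1e-8):
    # Cholesky-based fit: numerically stable, in the spirit of the
    # implementation issues the abstract mentions.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return X, L, alpha

def gp_predict(model, Xs):
    # Posterior mean and variance of the surrogate at candidate points Xs.
    X, L, alpha = model
    Ks = rbf_kernel(Xs, X)
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.maximum(1.0 - (v**2).sum(axis=0), 0.0)
    return mu, var

def prescreen(model, offspring, n_eval, kappa=2.0):
    # Exploration vs. exploitation via a lower confidence bound
    # (minimization): return indices of the n_eval most promising
    # offspring for true (expensive) evaluation.
    mu, var = gp_predict(model, offspring)
    lcb = mu - kappa * np.sqrt(var)
    return np.argsort(lcb)[:n_eval]
```

In an evolution loop one would refit the model on all evaluated points each generation, pre-screen the offspring, and evaluate only the selected few with the real simulator.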
Geoadditive Models
, 2000
"... this paper is a recent article on modelbased geostatistics by Diggle, Tawn and Moyeed (1998) where pure kriging (i.e. no covariates) is the focus. Our paper inherits some of its aspects: modelbased and with mixed model connections. In particular the comment by Bowman (1998) in the ensuing discussi ..."
Abstract

Cited by 33 (1 self)
this paper is a recent article on model-based geostatistics by Diggle, Tawn and Moyeed (1998) where pure kriging (i.e. no covariates) is the focus. Our paper inherits some of its aspects: model-based and with mixed model connections. In particular the comment by Bowman (1998) in the ensuing discussion suggested that additive modelling would be a worthwhile extension. This paper essentially follows this suggestion. However, this paper is not the first to combine the notions of geostatistics and additive modelling. References known to us are Kelsall and Diggle (1998), Durban and Reguera (1998) and Durban, Hackett, Currie and Newton (2000). Nevertheless, we believe that our approach has a number of attractive features (see (1)–(4) above), not all shared by these references. Section 2 describes the motivating application and data in detail. Section 3 shows how one can express additive models as a mixed model, while Section 4 does the same for kriging and merges the two into the geoadditive model. Issues concerning the amount of smoothing are discussed in Section 5 and inferential aspects are treated in Section 6. Our analysis of the Upper Cape Cod reproductive data is presented in Section 7. Section 8 discusses extension to the generalised context. We close the paper with some discussion in Section 9.
Sampling Strategies for Computer Experiments: Design and Analysis
, 2001
"... Computerbased simulation and analysis is used extensively in engineering for a variety of tasks. Despite the steady and continuing growth of computing power and speed, the computational cost of complex highfidelity engineering analyses and simulations limit their use in important areas like design ..."
Abstract

Cited by 20 (2 self)
Computer-based simulation and analysis is used extensively in engineering for a variety of tasks. Despite the steady and continuing growth of computing power and speed, the computational cost of complex high-fidelity engineering analyses and simulations limits their use in important areas like design optimization and reliability analysis. Statistical approximation techniques such as design of experiments and response surface methodology are becoming widely used in engineering to minimize the computational expense of running such computer analyses and circumvent many of these limitations. In this paper, we compare and contrast five experimental design types and four approximation model types in terms of their capability to generate accurate approximations for two engineering applications with typical engineering behaviors and a wide range of nonlinearity. The first example involves the analysis of a two-member frame that has three input variables and three responses of interest. The second example simulates the rollover potential of a semi-tractor-trailer for different combinations of input variables and braking and steering levels. Detailed error analysis reveals that uniform designs provide good sampling for generating accurate approximations using different sample sizes, while kriging models provide accurate approximations that are robust for use with a variety of experimental designs and sample sizes.
Design optimization of hierarchically decomposed multilevel system under uncertainty
 Proceedings of the ASME 2004 Design Engineering Technical Conferences, Salt Lake City, Utah, 28 September–2 October, DETC2004/DAC57357
, 2004
"... This paper presents a methodology for design optimization of decomposed systems in the presence of uncertainties. We extend the analytical target cascading (ATC) formulation to probabilistic design by treating stochastic quantities as random variables and parameters and posing reliabilitybased desi ..."
Abstract

Cited by 16 (12 self)
This paper presents a methodology for design optimization of decomposed systems in the presence of uncertainties. We extend the analytical target cascading (ATC) formulation to probabilistic design by treating stochastic quantities as random variables and parameters and posing reliability-based design constraints. We model the propagation of uncertainty throughout the multilevel hierarchy of elements that comprise the decomposed system by using the advanced mean value (AMV) method to generate the required probability distributions of nonlinear responses. We utilize appropriate metamodeling techniques for simulation-based design problems. A simple yet illustrative hierarchical bilevel engine design problem is used to demonstrate the proposed methodology.
Asymptotic optimality of multicenter Voronoi configurations for random field estimation
 IEEE Transactions on Automatic Control
, 2008
"... This paper deals with multiagent networks performing optimal estimation tasks. Consider a network of mobile agents with sensors that can take measurements of a spatial process in an environment of interest. Using the measurements, one can construct a kriging interpolation of the spatial field over ..."
Abstract

Cited by 9 (5 self)
This paper deals with multi-agent networks performing optimal estimation tasks. Consider a network of mobile agents with sensors that can take measurements of a spatial process in an environment of interest. Using the measurements, one can construct a kriging interpolation of the spatial field over the whole environment, with an associated prediction error at each point. We study the continuity properties of the prediction error, and consider as global objective functions the maximum prediction error and the generalized prediction variance. We study the network configurations that give rise to optimal field interpolations. Specifically, we show how, as the correlation between any two different locations vanishes, circumcenter and incenter Voronoi configurations become network configurations that optimize the maximum prediction error and the generalized prediction variance, respectively. The technical approach draws on tools from geostatistics, computational geometry, linear algebra, and dynamical systems.
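The pointwise prediction error referred to here can be made concrete with simple kriging under a known covariance. The following is a hedged sketch: the isotropic exponential covariance and its parameters are illustrative choices, and the paper's setting is more general.

```python
import numpy as np

def cov(h, sigma2=1.0, rho=2.0):
    # Isotropic exponential covariance; rho controls how fast the
    # correlation between two locations decays with distance h.
    return sigma2 * np.exp(-rho * h)

def kriging_variance(sensors, points, sigma2=1.0, rho=2.0):
    # Simple-kriging prediction (error) variance at each query point,
    # given sensor locations (n x 2): sigma^2 - k K^{-1} k^T, a sketch
    # of the pointwise error field studied in the paper.
    d = np.linalg.norm(sensors[:, None, :] - sensors[None, :, :], axis=-1)
    K = cov(d, sigma2, rho)                      # n x n sensor covariance
    dq = np.linalg.norm(points[:, None, :] - sensors[None, :, :], axis=-1)
    k = cov(dq, sigma2, rho)                     # m x n cross-covariance
    w = np.linalg.solve(K, k.T)                  # kriging weights, n x m
    return sigma2 - np.einsum('mn,nm->m', k, w)
```

Placing the agents so as to minimize the maximum of this variance field over the environment is the minimax-type objective that the paper connects to circumcenter Voronoi configurations as the correlation vanishes.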
Algorithmic construction of optimal symmetric Latin hypercube designs
 Journal of Statistical Planning and Inference
, 2000
"... We propose symmetric Latin hypercubes for designs of computer exepriment. The goal is to offer a compromise between computing effort and design optimality. The proposed class of designs has some advantages over the regular Latin hypercube design with respect to criteria such as entropy and the minim ..."
Abstract

Cited by 9 (0 self)
We propose symmetric Latin hypercubes for designs of computer experiments. The goal is to offer a compromise between computing effort and design optimality. The proposed class of designs has some advantages over the regular Latin hypercube design with respect to criteria such as entropy and the minimum intersite distance. An exchange algorithm is proposed for constructing optimal symmetric Latin hypercube designs. This algorithm is compared with two existing algorithms by Park (1994) and Morris and Mitchell (1995). Some examples, including a real case study in the automotive industry, are used to illustrate the performance of the new designs and the algorithms.
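The symmetry property is easy to state in code: a Latin hypercube on levels 1..n is symmetric if, whenever it contains a row (a_1, ..., a_k), it also contains the reflected row (n+1-a_1, ..., n+1-a_k). Below is an illustrative sketch of a random construction within this class (restricted to even n for simplicity); the paper's exchange algorithm then searches among such designs for good criterion values.

```python
import numpy as np

def symmetric_lhd(n, k, rng=None):
    # Random symmetric Latin hypercube on levels 1..n (even n only in
    # this sketch): each row's level-wise reflection n+1-a is also a row.
    assert n % 2 == 0, "sketch handles even n only"
    rng = np.random.default_rng(rng)
    half = np.empty((n // 2, k), dtype=int)
    for j in range(k):
        # Each column of the first half takes exactly one level from each
        # reflection pair {i, n+1-i}, in random order and orientation.
        levels = rng.permutation(n // 2) + 1
        flip = rng.integers(0, 2, n // 2).astype(bool)
        half[:, j] = np.where(flip, n + 1 - levels, levels)
    return np.vstack([half, n + 1 - half])   # append the reflections
```

Because the second half of each column contains exactly the complementary level of every reflection pair, every column is a permutation of 1..n, so the result is a valid Latin hypercube by construction.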
An Algorithm for the Construction of Spatial Coverage Designs with Implementation in S-PLUS
 Computers and Geosciences
, 1998
"... Spacefilling "coverage" designs are spatial sampling plans which optimize a distancebased criterion. Because they do not depend on the covariance structure of the process to be sampled, coverage designs are more efficiently computed than designs which are optimal for mean squared error criteria. T ..."
Abstract

Cited by 6 (0 self)
Spacefilling "coverage" designs are spatial sampling plans which optimize a distancebased criterion. Because they do not depend on the covariance structure of the process to be sampled, coverage designs are more efficiently computed than designs which are optimal for mean squared error criteria. This paper presents an efficient algorithm for the construction of coverage designs and evaluates it's performance in terms of computation time and effectiveness at finding "good" designs. Results suggest that nearoptimal designs for reasonably large problems can be computed very efficiently. The algorithm is implemented in the statistical programming language SPLUS, and examples of the construction of coverage designs are given involving an existing network of ozone monitoring sites. Keywords: Spatial design, spacefilling designs, spatial statistics, spatial sampling, network design. Address: Box 3000, Boulder, Colorado 80307. (303) 4971704. 1 Introduction A practical problem in spat...
Measuring the Goodness of Orthogonal Array Discretizations for High-Dimensional Continuous-State Stochastic Dynamic Programs
 SIAM Journal on Optimization
, 2001
"... This paper describes a state space discretization scheme based on statistical experimental designs generated from orthogonal arrays of strength three with index unity. Chen et al. (1999) employed this efficient discretization scheme in a numerical solution method for highdimensional continuousstat ..."
Abstract

Cited by 5 (3 self)
This paper describes a state space discretization scheme based on statistical experimental designs generated from orthogonal arrays (OAs) of strength three with index unity. Chen et al. (1999) employed this efficient discretization scheme in a numerical solution method for high-dimensional continuous-state stochastic dynamic programming (SDP). These OAs are instrumental in reducing the dimensionality of continuous-state SDP. In particular, computationally efficient space-filling measures for these OAs are derived for evaluating how well a specific OA discretization fills the state space. Comparisons were made with two types of common measures: ones which maximize the average (or minimum) distance between discretization points within the OA, and ones which minimize the average (or maximum) distance between discretization points and non-discretization points lying on a full grid (i.e., points lying on the full grid that are not contained in the OA discretization). OAs of strength three were test...
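The two families of measures contrasted here have simple generic forms. The sketch below shows a maximin (spread) measure and a fill-distance (hole-size) measure; these are illustrative stand-ins for the paper's exact criteria, not its formulas.

```python
import numpy as np

def maximin(design):
    # Minimum pairwise distance among discretization points
    # (larger is better: points are spread apart).
    d = np.linalg.norm(design[:, None, :] - design[None, :, :], axis=-1)
    return d[np.triu_indices(len(design), 1)].min()

def fill_distance(design, grid):
    # Maximum distance from any full-grid point to its nearest
    # discretization point (smaller is better: no large holes).
    d = np.linalg.norm(grid[:, None, :] - design[None, :, :], axis=-1)
    return d.min(axis=1).max()
```

The first measure looks only at points inside the design; the second compares the design against the full grid it is meant to represent, matching the two comparison types described in the abstract.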
Nested Latin Hypercube Design
 Biometrika
, 2009
"... We propose an approach to constructing nested Latin hypercube designs. Such designs are useful for conducting multiple computer experiments with different levels of accuracy. A nested Latin hypercube design with two layers is defined to be a special Latin hypercube design that contains a smaller Lat ..."
Abstract

Cited by 5 (4 self)
We propose an approach to constructing nested Latin hypercube designs. Such designs are useful for conducting multiple computer experiments with different levels of accuracy. A nested Latin hypercube design with two layers is defined to be a special Latin hypercube design that contains a smaller Latin hypercube design as a subset. Our method is easy to implement and can accommodate any number of factors. We also extend this method to construct nested Latin hypercube designs with more than two layers. Illustrative examples are given. Some statistical properties of the constructed designs are derived.
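A two-layer construction along these lines can be sketched directly. The version below is illustrative, not necessarily the authors' algorithm: for each factor, generate an m-run Latin hypercube column, place each of its m levels somewhere inside the corresponding block of n/m fine levels, and assign the remaining fine levels to the remaining n-m runs.

```python
import numpy as np

def nested_lhd(m, n, k, rng=None):
    # Two-layer nested Latin hypercube: an n-run LHD on levels 1..n whose
    # first m rows, after collapsing level a to ceil(a * m / n), form an
    # m-run LHD on levels 1..m.  Requires n to be a multiple of m.
    assert n % m == 0
    c = n // m
    rng = np.random.default_rng(rng)
    big = np.empty((n, k), dtype=int)
    for j in range(k):
        small = rng.permutation(m) + 1                 # m-run LHD column
        # place small level a inside its fine block {(a-1)c+1, ..., ac}
        chosen = (small - 1) * c + rng.integers(1, c + 1, m)
        rest = np.setdiff1d(np.arange(1, n + 1), chosen)
        big[:m, j] = chosen
        big[m:, j] = rng.permutation(rest)
    return big
```

The first m rows can then drive a cheap low-accuracy experiment while the full n rows drive the high-accuracy one, which is the multi-fidelity use case the abstract describes.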