Results 1–10 of 31
Computer Experiments
, 1996
Abstract

Cited by 119 (6 self)
Introduction Deterministic computer simulations of physical phenomena are becoming widely used in science and engineering. Computers are used to describe the flow of air over an airplane wing, combustion of gases in a flame, behavior of a metal structure under stress, safety of a nuclear reactor, and so on. Some of the most widely used computer models, and the ones that lead us to work in this area, arise in the design of the semiconductors used in the computers themselves. A process simulator starts with a data structure representing an unprocessed piece of silicon and simulates the steps such as oxidation, etching and ion injection that produce a semiconductor device such as a transistor. A device simulator takes a description of such a device and simulates the flow of current through it under varying conditions to determine properties of the device such as its switching speed and the critical voltage at which it switches. A circuit simulator takes a list of devices and the
Latin Hypercube Sampling and the propagation of uncertainty in analyses of complex systems
 Reliability Engineering and System Safety
, 2003
Choosing the Sample Size of a Computer Experiment: A Practical Guide
, 2008
Abstract

Cited by 31 (1 self)
We produce reasons and evidence supporting the informal rule that the number of runs for an effective initial computer experiment should be about 10 times the input dimension. Our arguments quantify two key characteristics of computer codes that affect the sample size required for a desired level of accuracy when approximating the code via a Gaussian process (GP). The first characteristic is the total sensitivity of a code output variable to all input variables. The second corresponds to the way this total sensitivity is distributed across the input variables, specifically the possible presence of a few prominent input factors and many impotent ones (effect sparsity). Both measures relate directly to the correlation structure in the GP approximation of the code. In this way, the article moves towards a more formal treatment of sample size for a computer experiment. The evidence supporting these arguments stems primarily from a simulation study and from specific codes modeling climate and ligand activation of G-protein.
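The n = 10·d rule is easy to act on. As an illustration only (the particular Latin hypercube construction and function names below are my own, not from the paper), an initial design at 10 runs per input dimension can be generated in a few lines:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """One random Latin hypercube design: n points in [0, 1)^d with each
    variable stratified into n equal bins (exactly one point per bin)."""
    jitter = rng.random((n, d))                      # position within each bin
    perms = np.argsort(rng.random((n, d)), axis=0)   # independent column permutations
    return (perms + jitter) / n

# the informal rule: about 10 runs per input dimension
d = 4            # input dimension of the (hypothetical) computer code
n = 10 * d
rng = np.random.default_rng(0)
X = latin_hypercube(n, d, rng)
print(X.shape)   # (40, 4)
```

Each column of `X` hits every one of the `n` bins exactly once, which is the stratification the GP-based sample-size arguments in the abstract build on.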
Orthogonal-Maximin Latin Hypercube Designs
Abstract

Cited by 20 (1 self)
A randomly generated Latin hypercube design (LHD) can be quite structured: the variables may be highly correlated or the design may not have good space-filling properties. There are procedures to find good LHDs by minimizing the pairwise correlations or maximizing the intersite distances. In this article we have shown that these two criteria need not agree with each other. In fact, maximization of intersite distances can result in LHDs where the variables are highly correlated and vice versa. Therefore, we propose a multiobjective optimization approach to find good LHDs by combining correlation and distance performance measures. We also propose a new exchange algorithm for efficiently generating such designs. Several examples are presented to show that the new algorithm is fast and that the optimal designs are good in terms of both the correlation and distance criteria.
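The two criteria the abstract contrasts are easy to compute for any candidate design. This sketch (helper names are mine, not the authors') evaluates both on a random LHD, the kind of starting point a multiobjective exchange algorithm would then improve:

```python
import numpy as np

def max_abs_correlation(X):
    """Largest absolute pairwise correlation among the design columns."""
    c = np.corrcoef(X, rowvar=False)
    off_diagonal = c[~np.eye(c.shape[0], dtype=bool)]
    return np.abs(off_diagonal).max()

def min_intersite_distance(X):
    """Smallest Euclidean distance between any two design points
    (the quantity a maximin design tries to maximize)."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    upper = np.triu_indices(len(X), k=1)
    return dist[upper].min()

rng = np.random.default_rng(1)
n, d = 20, 3
# a random LHD: stratified in each column, but otherwise unoptimized
X = (np.argsort(rng.random((n, d)), axis=0) + rng.random((n, d))) / n
print(max_abs_correlation(X), min_intersite_distance(X))
```

A combined objective, as the abstract proposes, would trade off a small `max_abs_correlation` against a large `min_intersite_distance` rather than optimizing either alone.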
Centered L2-discrepancy of Random Sampling and Latin Hypercube Design, and Construction of Uniform Designs
 Mathematics of Computation
, 2000
Abstract

Cited by 20 (2 self)
Abstract. In this paper properties and construction of designs under a centered version of the L2-discrepancy are analyzed. The theoretical expectation and variance of this discrepancy are derived for random designs and Latin hypercube designs. The expectation and variance of Latin hypercube designs are significantly lower than those of random designs. While in dimension one the unique uniform design is also a set of equidistant points, low-discrepancy designs in higher dimensions have to be generated by explicit optimization. Optimization is performed using the threshold accepting heuristic, which produces designs with low discrepancy relative to the theoretical expectation and variance.
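For concreteness, here is a sketch of the centered L2-discrepancy computed directly from Hickernell's closed-form expression (the code and names are mine, not the authors'); lower values indicate a more uniform design:

```python
import numpy as np

def centered_l2_discrepancy(X):
    """Centered L2-discrepancy (CD2) of a design X in [0, 1]^d, via the
    closed form: CD2^2 = (13/12)^d - (2/n) sum_i prod_k [1 + z/2 - z^2/2]
    + (1/n^2) sum_ij prod_k [1 + z_i/2 + z_j/2 - |x_i - x_j|/2],
    where z = |x - 1/2|."""
    n, d = X.shape
    z = np.abs(X - 0.5)
    term1 = (13.0 / 12.0) ** d
    term2 = (2.0 / n) * np.prod(1 + 0.5 * z - 0.5 * z ** 2, axis=1).sum()
    pair = (1 + 0.5 * z[:, None, :] + 0.5 * z[None, :, :]
            - 0.5 * np.abs(X[:, None, :] - X[None, :, :]))
    term3 = np.prod(pair, axis=2).sum() / n ** 2
    return np.sqrt(term1 - term2 + term3)

rng = np.random.default_rng(2)
n, d = 25, 2
random_design = rng.random((n, d))
lhd = (np.argsort(rng.random((n, d)), axis=0) + rng.random((n, d))) / n
print(centered_l2_discrepancy(random_design), centered_l2_discrepancy(lhd))
```

On typical draws the Latin hypercube design scores lower than the random one, matching the expectation comparison in the abstract, though any single pair of draws can go either way.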
Prediction Intervals for Neural Networks Via Nonlinear Regression
, 1998
Abstract

Cited by 19 (0 self)
Standard methods for computing prediction intervals in nonlinear regression can be effectively applied to neural networks when the number of training points is large. However, simulations show that these methods can generate unreliable prediction intervals on smaller data sets when the network is trained to convergence. Stopping the training algorithm prior to convergence, to avoid overfitting, reduces the effective number of parameters, but can lead to prediction intervals that are too wide. We present an alternative approach to estimating prediction intervals which uses weight decay to fit the network and show that this method is effective on a wide range of problems. KEY WORDS: Nonparametric regression, smoothing, high-dimensional data, backpropagation. 1 Introduction Multilayer feedforward neural networks are flexible models that are widely used to model high-dimensional, nonlinear data. The models typically contain many parameters, sometimes as many or more p...
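As a hedged illustration of the "standard methods" the abstract refers to (using a simple exponential model rather than a neural network, and my own variable names), a delta-method prediction interval in nonlinear regression looks like this:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import t as student_t

def model(x, a, b):
    """Stand-in nonlinear model (hypothetical, far simpler than a network)."""
    return a * np.exp(b * x)

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 30)
y = model(x, 2.0, 1.5) + rng.normal(scale=0.1, size=x.size)

popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0])
resid = y - model(x, *popt)
dof = x.size - len(popt)
s2 = resid @ resid / dof                 # residual variance estimate

# delta method at a new input x0: prediction-error variance is the
# parameter-uncertainty part (grad' pcov grad) plus s2 for the new noise
x0 = 0.5
eps = 1e-6
grad = np.array([
    (model(x0, popt[0] + eps, popt[1]) - model(x0, *popt)) / eps,
    (model(x0, popt[0], popt[1] + eps) - model(x0, *popt)) / eps,
])
se_pred = np.sqrt(s2 + grad @ pcov @ grad)
tcrit = student_t.ppf(0.975, dof)
center = model(x0, *popt)
lo, hi = center - tcrit * se_pred, center + tcrit * se_pred
print(lo, hi)
```

The abstract's point is that for a flexibly parameterized network trained to convergence, the analogous parameter-covariance term becomes unreliable on small data sets; fitting with weight decay is their proposed remedy.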
Nested Latin Hypercube Design
 Biometrika
, 2009
Abstract

Cited by 14 (7 self)
We propose an approach to constructing nested Latin hypercube designs. Such designs are useful for conducting multiple computer experiments with different levels of accuracy. A nested Latin hypercube design with two layers is defined to be a special Latin hypercube design that contains a smaller Latin hypercube design as a subset. Our method is easy to implement and can accommodate any number of factors. We also extend this method to construct nested Latin hypercube designs with more than two layers. Illustrative examples are given. Some statistical properties of the constructed designs are derived.
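A minimal sketch of a two-layer nested construction, assuming one simple way to realize the subset property (an illustration, not necessarily the authors' algorithm): the first m rows form an LHD on an m-level grid, while all n = c·m rows form an LHD on the finer n-level grid.

```python
import numpy as np

def nested_lhd(m, c, d, rng):
    """Two-layer nested LHD with n = c*m runs in [0, 1)^d whose first m rows
    are themselves a Latin hypercube design on an m-level grid."""
    n = c * m
    X = np.empty((n, d))
    for k in range(d):
        coarse = rng.permutation(m)                       # coarse level of each small-design run
        fine_small = coarse * c + rng.integers(0, c, m)   # one fine level inside each coarse bin
        remaining = np.setdiff1d(np.arange(n), fine_small)
        rng.shuffle(remaining)
        levels = np.concatenate([fine_small, remaining])  # a permutation of 0..n-1
        X[:, k] = (levels + rng.random(n)) / n
    return X

rng = np.random.default_rng(4)
m, c, d = 5, 3, 2
X = nested_lhd(m, c, d, rng)
small = X[:m]   # the embedded small design
```

Because each small-design run occupies a distinct coarse bin, projecting `small` onto the m-level grid hits every coarse bin exactly once, while `X` hits every fine bin exactly once.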
Wrap-Around L2-discrepancy of Random Sampling, Latin Hypercube and Uniform Designs
Abstract

Cited by 4 (0 self)
This paper considers their mean and variance under a wrap-around version of the L2-discrepancy (WD). The theoretical expectation and variance of this discrepancy are derived for these two designs. The expectation and variance of Latin hypercube designs are significantly lower than those of the corresponding random designs. We also study construction of the uniform design under the WD and show that a one-dimensional uniform design under this discrepancy can be any set of equidistant points. For high-dimensional uniform designs we apply the threshold accepting heuristic for finding low-discrepancy designs. We also show that the conjecture proposed by Fang, Lin, Winker and Zhang (1999) is true under the WD when the design is complete. Key Words: Latin hypercube design, Quasi-Monte Carlo methods, threshold accepting heuristic, uniform design, wrap-around discrepancy.
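The wrap-around L2-discrepancy also has a closed form, which this sketch (mine, not the paper's code) implements and applies to a random design and an LHD:

```python
import numpy as np

def wrap_around_l2_discrepancy(X):
    """Wrap-around L2-discrepancy (WD) of a design X in [0, 1]^d, via its
    closed form: WD^2 = -(4/3)^d + (1/n^2) sum_ij prod_k [3/2 - |dx|(1 - |dx|)],
    where dx is the coordinate-wise difference (wrap-around makes only |dx| matter)."""
    n, d = X.shape
    dx = np.abs(X[:, None, :] - X[None, :, :])
    kernel = 1.5 - dx * (1.0 - dx)
    return np.sqrt(-((4.0 / 3.0) ** d) + np.prod(kernel, axis=2).sum() / n ** 2)

rng = np.random.default_rng(5)
n, d = 25, 2
random_design = rng.random((n, d))
lhd = (np.argsort(rng.random((n, d)), axis=0) + rng.random((n, d))) / n
print(wrap_around_l2_discrepancy(random_design), wrap_around_l2_discrepancy(lhd))
```

Unlike the centered version, WD is invariant to wrapping coordinates modulo 1, which is why equidistant point sets are optimal in one dimension.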
Detecting Near Linearity in High Dimensions
, 1998
Abstract

Cited by 3 (2 self)
This paper presents a quasi-regression method for determining the degree of linearity in a function. Quasi-regression estimates regression coefficients without matrix inversion. For a given number n of observations, quasi-regression is usually less efficient than ordinary regression. But for functions of d variables, the cost of linear regression grows as O(nd
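A hedged sketch of the idea for uniform inputs on [0,1]^d (the function and test function below are my own, not the paper's): with an orthonormal linear basis, each coefficient is just a sample average of f times a basis function, so no matrix is ever inverted:

```python
import numpy as np

def quasi_regression_linear(f, d, n, rng):
    """Estimate the intercept and linear coefficients of f on [0, 1]^d by
    averaging f against an orthonormal linear basis -- no matrix inversion."""
    X = rng.random((n, d))
    y = f(X)
    phi = np.sqrt(12.0) * (X - 0.5)   # orthonormal linear basis for U[0, 1] inputs
    beta0 = y.mean()                  # coefficient of the constant basis function
    beta = phi.T @ y / n              # beta_j ~ E[f(X) * phi_j(X_j)]
    return beta0, beta

# hypothetical test function: mostly linear plus a small interaction
f = lambda X: 1.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1] + 0.1 * X[:, 0] * X[:, 1]
rng = np.random.default_rng(6)
beta0, beta = quasi_regression_linear(f, d=3, n=200_000, rng=rng)
slopes = np.sqrt(12.0) * beta         # back on the original x scale
print(beta0, slopes)                  # roughly 0.525 and [2.05, -2.95, 0]
```

Comparing the fitted linear part against the total variance of f then quantifies the "degree of linearity" the abstract mentions.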
On Reduction of Finite Sample Variance by Extended Latin Hypercube Sampling
 Bernoulli
, 1997
Abstract

Cited by 3 (0 self)
McKay, Conover and Beckman (1979) introduced Latin hypercube sampling (LHS) for reducing the variance of Monte Carlo simulations. More recently, Owen (1992a) and Tang (1993) generalized LHS using orthogonal arrays. In Owen's class of generalized LHS, we define extended Latin hypercube sampling of strength m (henceforth denoted as ELHS(m)), such that ELHS(1) reduces to LHS. We first derive an explicit formula for the finite sample variance of ELHS(m) by detailed investigation of the combinatorics involved in ELHS(m). Based on this formula, we give a sufficient condition for variance reduction by ELHS(m), generalizing a similar result of McKay, Conover and Beckman (1979) for m = 1. In fact, our sufficient condition for m = 1 contains the sufficient condition of McKay, Conover and Beckman (1979) and thus strengthens their result. 1 INTRODUCTION Monte Carlo simulation is often used to evaluate the expectation of a statistic W = g(X_1, ..., X_K), which is not analytically tractable. In usual M...
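The variance-reduction claim for the m = 1 case (plain LHS) is easy to demonstrate empirically. This sketch (mine, not from the paper) compares the sampling variance of a Monte Carlo mean under iid sampling and under LHS for an additive integrand, where the reduction is most pronounced:

```python
import numpy as np

def lhs(n, d, rng):
    """Plain Latin hypercube sample (ELHS(1) in the paper's notation)."""
    return (np.argsort(rng.random((n, d)), axis=0) + rng.random((n, d))) / n

def iid(n, d, rng):
    return rng.random((n, d))

def variance_of_mean(sampler, g, n, d, rng, reps=500):
    """Empirical variance of the Monte Carlo mean over repeated designs."""
    return np.var([g(sampler(n, d, rng)).mean() for _ in range(reps)])

g = lambda X: X.sum(axis=1)   # additive integrand: the best case for LHS
rng = np.random.default_rng(7)
n, d = 20, 3
var_iid = variance_of_mean(iid, g, n, d, rng)
var_lhs = variance_of_mean(lhs, g, n, d, rng)
print(var_iid, var_lhs)       # LHS variance is orders of magnitude smaller
```

For an exactly additive g the stratification in each coordinate removes essentially all of the main-effect variance, which is the content of the sufficient condition of McKay, Conover and Beckman for m = 1.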