Results 1–10 of 31
Computer Experiments
, 1996
Abstract
Cited by 68 (5 self)
Introduction Deterministic computer simulations of physical phenomena are becoming widely used in science and engineering. Computers are used to describe the flow of air over an airplane wing, combustion of gases in a flame, behavior of a metal structure under stress, safety of a nuclear reactor, and so on. Some of the most widely used computer models, and the ones that lead us to work in this area, arise in the design of the semiconductors used in the computers themselves. A process simulator starts with a data structure representing an unprocessed piece of silicon and simulates steps such as oxidation, etching and ion injection that produce a semiconductor device such as a transistor. A device simulator takes a description of such a device and simulates the flow of current through it under varying conditions to determine properties of the device such as its switching speed and the critical voltage at which it switches. A circuit simulator takes a list of devices and the ...
Hypercube Sampling and the Propagation of Uncertainty in Analyses of Complex Systems
, 2002
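No abstract is shown for this entry, so as a reader's aid, here is a minimal sketch of Latin hypercube sampling itself, the method the title refers to (the function name and NumPy-based construction are mine, not the paper's):

```python
import numpy as np

def latin_hypercube(n, d, rng=None):
    """Draw n points in [0,1)^d so that each of the n equal-width bins
    in every coordinate contains exactly one point."""
    rng = np.random.default_rng(rng)
    # Independently permute the bin indices 0..n-1 in each dimension,
    # then place one uniform draw inside each selected bin.
    bins = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (bins + rng.random((n, d))) / n

pts = latin_hypercube(8, 2, rng=0)
# floor(8 * pts) is a permutation of 0..7 in every column.
```

The marginal stratification is what drives the variance reductions discussed in several of the entries below.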
Methods for Approximating Integrals in Statistics with Special Emphasis on Bayesian Integration Problems
 Statistical Science
Abstract
Cited by 33 (4 self)
This paper is a survey of the major techniques and approaches available for the numerical approximation of integrals in statistics. We classify these into five broad categories, namely: asymptotic methods, importance sampling, adaptive importance sampling, multiple quadrature and Markov chain methods. Each method is discussed, giving an outline of the basic supporting theory and particular features of the technique. Conclusions are drawn concerning the relative merits of the methods based on the discussion and their application to three examples. The following broad recommendations are made. Asymptotic methods should only be considered in contexts where the integrand has a dominant peak with approximate ellipsoidal symmetry. Importance sampling, and preferably adaptive importance sampling, based on a multivariate Student t should be used instead of asymptotic methods in such a context. Multiple quadrature, and in particular subregion adaptive integration, are the algorithms of choice for...
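As an illustration of the recommendation above, a toy self-normalized importance sampling estimate with a heavy-tailed Student t proposal (the target density and all parameter choices are my own, not from the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy unnormalized target: the N(2, 1) density; we estimate its mean.
log_target = lambda x: -0.5 * (x - 2.0) ** 2

# Heavy-tailed Student t proposal, deliberately wider than the target.
proposal = stats.t(df=4, loc=0.0, scale=3.0)

n = 20_000
x = proposal.rvs(size=n, random_state=rng)
log_w = log_target(x) - proposal.logpdf(x)
w = np.exp(log_w - log_w.max())              # stabilized importance weights
est_mean = np.sum(w * x) / np.sum(w)         # self-normalized estimate of E[X] = 2
```

Working with log-weights and subtracting the maximum before exponentiating avoids overflow; the heavy tails of the proposal keep the weights well behaved.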
Monte Carlo Variance of Scrambled Net Quadrature
 SIAM J. Numer. Anal
, 1997
Abstract
Cited by 30 (1 self)
Hybrids of equidistribution and Monte Carlo methods of integration can achieve the superior accuracy of the former while allowing the simple error estimation methods of the latter. This paper studies the variance of one such hybrid, scrambled nets, by applying a multidimensional multiresolution (wavelet) analysis to the integrand. The integrand is assumed to be measurable and square integrable but not necessarily of bounded variation. In simple Monte Carlo, every nonconstant term of the multiresolution contributes to the variance of the estimated integral. For scrambled nets, certain low-dimensional and coarse terms do not contribute to the variance. For any integrand in L2, the sampling variance tends to zero faster under scrambled net quadrature than under Monte Carlo sampling, as the number of function evaluations n tends to infinity. Some finite-n results bound the variance under scrambled net quadrature by a small constant multiple of the Monte Carlo variance, uniformly ove...
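A small sketch of the comparison the abstract describes, using SciPy's scrambled Sobol sequences as the scrambled net (the smooth integrand and sample sizes are illustrative choices, not from the paper):

```python
import numpy as np
from scipy.stats import qmc

f = lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=1)   # integral over [0,1]^3 is 1
d, n, reps = 3, 256, 30
rng = np.random.default_rng(0)

mc = [f(rng.random((n, d))).mean() for _ in range(reps)]
sn = [f(qmc.Sobol(d, scramble=True, seed=s).random(n)).mean() for s in range(reps)]

# Both estimators are unbiased; the scrambled-net replicates are far less
# spread out, reflecting the faster-than-Monte-Carlo variance decay.
```

Because each scrambled replicate is an unbiased estimate, the spread across independent scramblings gives exactly the simple error estimate the abstract alludes to.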
Integrated variance reduction strategies for simulation
 Operations Research
, 1996
Abstract
Cited by 29 (2 self)
We develop strategies for integrated use of certain well-known variance reduction techniques to estimate a mean response in a finite-horizon simulation experiment. The building blocks for these integrated variance reduction strategies are the techniques of conditional expectation, correlation induction (including antithetic variates and Latin hypercube sampling), and control variates; all pairings of these techniques are examined. For each integrated strategy, we establish sufficient conditions under which that strategy will yield a smaller response variance than its constituent variance reduction techniques will yield individually. We also provide asymptotic variance comparisons between many of the methods discussed, with emphasis on integrated strategies that incorporate Latin hypercube sampling. An experimental performance evaluation reveals that in the simulation of stochastic activity networks, substantial variance reductions can be achieved with these integrated strategies. Both the theoretical and experimental results indicate that superior performance is obtained via joint application of the techniques of conditional expectation and Latin hypercube sampling. Subject classifications: Simulation, efficiency: conditioning, control variates, correlation induction. Area of review: Simulation.
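One of the building blocks named above, control variates, can be sketched on a toy integrand (the integrand and the control are my own example, not the paper's activity networks):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
u = rng.random(n)

y = np.exp(u)    # estimate E[exp(U)] = e - 1 for U ~ Uniform(0, 1)
c = u            # control variate with known mean E[U] = 1/2

cov = np.cov(y, c)
beta = cov[0, 1] / cov[1, 1]                 # estimated optimal coefficient
cv_est = y.mean() - beta * (c.mean() - 0.5)  # control-variate estimate
plain = y.mean()                             # crude Monte Carlo estimate
# cv_est tracks e - 1 much more tightly, since exp(U) and U are highly correlated.
```

The variance reduction factor is roughly 1/(1 - rho^2) for correlation rho between response and control, which is why strongly correlated controls pay off so handsomely.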
Multiprocess Parallel Antithetic Coupling for Backward and Forward Markov Chain Monte Carlo
, 2005
Abstract
Cited by 15 (6 self)
Antithetic coupling is a general stratification strategy for reducing Monte Carlo variance without increasing the simulation size. The use of the antithetic principle in the Monte Carlo literature typically employs two strata via antithetic quantile coupling. We demonstrate here that further stratification, obtained by using k > 2 (e.g., k = 3–10) antithetically coupled variates, can offer substantial additional gain in Monte Carlo efficiency, in terms of both variance and bias. The reason for reduced bias is that antithetically coupled chains can provide a more dispersed search of the state space than multiple independent chains. The emerging area of perfect simulation provides a perfect setting for implementing the k-process parallel antithetic coupling for MCMC because, without antithetic coupling, this class of methods delivers genuine independent draws. Furthermore, antithetic backward coupling provides a very convenient theoretical tool for investigating antithetic forward coupling. However, the generation of k > 2 antithetic variates that are negatively associated, that is, they preserve negative correlation under monotone ...
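The classical two-stratum antithetic quantile coupling that the paper generalizes to k > 2 can be sketched as follows (the monotone test function is my own choice):

```python
import numpy as np

rng = np.random.default_rng(0)
g = lambda u: np.exp(u)   # monotone, so the antithetic pair is negatively correlated
n = 5_000

u = rng.random(n)
pairs = 0.5 * (g(u) + g(1.0 - u))        # n antithetic pairs (2 strata)
anti = pairs.mean()
plain = g(rng.random(2 * n)).mean()      # same budget: 2n independent draws

# Both estimate E[g(U)] = e - 1; the within-pair correlation of the
# antithetic version is negative, so its variance is lower at equal cost.
```

The paper's k-stratum generalization replaces the pair (U, 1-U) with k negatively associated quantile-coupled variates, which is exactly the construction the truncated last sentence is introducing.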
On rates of convergence for stochastic optimization problems under non-I.I.D. sampling
, 2006
Abstract
Cited by 11 (1 self)
In this paper we discuss the issue of solving stochastic optimization problems by means of sample average approximations. Our focus is on rates of convergence of estimators of optimal solutions and optimal values with respect to the sample size. This is a well-studied problem in case the samples are independent and identically distributed (i.e., when standard Monte Carlo is used); here, we study the case where that assumption is dropped. Broadly speaking, our results show that, under appropriate assumptions, the rates of convergence for pointwise estimators under a sampling scheme carry over to the optimization case, in the sense that convergence of approximating optimal solutions and optimal values to their true counterparts has the same rates as in pointwise estimation. Our motivation for the study arises from two types of sampling methods that have been widely used in the statistics literature. One is Latin Hypercube Sampling (LHS), a stratified sampling method originally proposed in the seventies by McKay, Beckman, and Conover (1979). The other is the class of quasi-Monte Carlo (QMC) methods, which have become popular especially after the work of Niederreiter (1992). The advantage of such methods is that they typically yield pointwise estimators which not only have lower variance than standard Monte Carlo but also possess better rates of convergence. Thus, it is important to study the use of these techniques in sampling-based optimization. The novelty of our work arises from the fact that, while there has been some work on the use of variance reduction techniques and QMC methods in stochastic optimization, none of the existing work, to the best of our knowledge, has provided a theoretical study on the effect of these techniques on rates of convergence for the optimization problem. We present numerical results for some two-stage stochastic programs from the literature to illustrate the discussed ideas.
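A minimal sketch of sample average approximation with LHS versus standard Monte Carlo, on a toy problem where the SAA solution has a closed form (the problem and all parameter choices are mine, not the paper's test programs):

```python
import numpy as np
from scipy.stats import qmc

# Toy problem: min_x E[(x - xi)^2] with xi ~ Uniform(0, 1).
# The sample average approximation (SAA) solution is the sample mean,
# and the true optimal solution is E[xi] = 0.5.
n, reps = 64, 50

mc_sols = [np.random.default_rng(s).random(n).mean() for s in range(reps)]
lhs_sols = [qmc.LatinHypercube(d=1, seed=s).random(n).mean() for s in range(reps)]

# Both converge to 0.5, but the spread of the LHS-based SAA solutions
# across replications is dramatically smaller.
```

This is the paper's point in miniature: the better pointwise behavior of the non-i.i.d. sampling scheme carries over to the approximating optimal solutions.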
Centered L2-Discrepancy of Random Sampling and Latin Hypercube Design, and Construction of Uniform Designs
 Mathematics of Computation
, 2000
Abstract
Cited by 9 (2 self)
In this paper, properties and construction of designs under a centered version of the L2-discrepancy are analyzed. The theoretical expectation and variance of this discrepancy are derived for random designs and Latin hypercube designs. The expectation and variance of Latin hypercube designs are significantly lower than those of random designs. While in dimension one the unique uniform design is also a set of equidistant points, low-discrepancy designs in higher dimensions have to be generated by explicit optimization. Optimization is performed using the threshold accepting heuristic, which produces designs whose discrepancy is low compared to the theoretical expectation and variance.
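SciPy exposes the centered L2-discrepancy directly, so the paper's comparison between random and Latin hypercube designs can be reproduced in miniature (the sample size, dimension, and replication count are my own choices):

```python
import numpy as np
from scipy.stats import qmc

n, d, reps = 50, 4, 10

cd_random = np.mean([
    qmc.discrepancy(np.random.default_rng(s).random((n, d)), method='CD')
    for s in range(reps)
])
cd_lhs = np.mean([
    qmc.discrepancy(qmc.LatinHypercube(d=d, seed=s).random(n), method='CD')
    for s in range(reps)
])
# Averaged over replications, the centered L2-discrepancy of the Latin
# hypercube designs comes out below that of plain random designs.
```

The `method='CD'` option selects exactly the centered L2-discrepancy analyzed in the paper; the averaging over seeds stands in for the theoretical expectations it derives.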
Control variates for quasi-Monte Carlo
, 2003
Abstract
Cited by 8 (3 self)
Quasi-Monte Carlo (QMC) methods have begun to displace ordinary Monte Carlo (MC) methods in many practical problems. It is natural and obvious to combine QMC methods with traditional variance reduction techniques used in MC sampling, such as control variates. There can, ...
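The abstract breaks off mid-caveat. Purely as an illustration of the naive combination it discusses, here is QMC sampling with a standard control-variate adjustment (toy integrand of my own choosing; the paper's point is precisely that this combination needs care):

```python
import numpy as np
from scipy.stats import qmc

# Scrambled Sobol points in place of pseudo-random uniforms.
x = qmc.Sobol(d=1, scramble=True, seed=0).random(1024).ravel()

y = np.exp(x)   # estimate E[exp(U)] = e - 1
c = x           # control variate with known mean 1/2

cov = np.cov(y, c)
beta = cov[0, 1] / cov[1, 1]
qmc_cv = y.mean() - beta * (c.mean() - 0.5)
# With QMC points c.mean() is already very close to 1/2, so the adjustment
# is tiny; choosing beta sensibly under QMC is the subtle part.
```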
Assessing Linearity in High Dimensions
, 2000
Abstract
Cited by 7 (3 self)
This paper presents a quasi-regression method for determining the degree of linearity in a function, where the cost grows only as nd. A bias-corrected version of quasi-regression is able to estimate the degree of linearity with a sample size of order d ...
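A rough sketch of the quasi-regression idea (without the paper's bias correction): project f onto an orthonormal linear basis at O(nd) cost and take the ratio of linear to total variance. The test function is my own:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 20_000
x = rng.random((n, d))

# Test function: a large linear part plus a small interaction term.
f = lambda x: 2.0 * x[:, 0] + x[:, 1] * x[:, 2]
y = f(x)

# Quasi-regression: project y onto the orthonormal linear basis
# phi_j(x) = sqrt(12) * (x_j - 1/2) for the uniform cube, at O(n d) cost.
phi = np.sqrt(12.0) * (x - 0.5)
beta = phi.T @ y / n                     # estimated linear coefficients

degree_of_linearity = np.sum(beta ** 2) / y.var()
# Most of the variance of f here is linear, so the ratio is close to 1.
```

Unlike least squares, the coefficients are simple sample averages against the basis functions, which is what keeps the cost linear in both n and d.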