Results 1–10 of 41
Random number generation
Cited by 136 (30 self)
Random numbers are the nuts and bolts of simulation. Typically, all the randomness required by the model is simulated by a random number generator whose output is assumed to be a sequence of independent and identically distributed (IID) U(0, 1) random variables (i.e., continuous random variables distributed uniformly over the interval (0, 1)).
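The abstract above describes the standard setup: a deterministic generator whose output is treated as a stream of IID U(0, 1) variates. A minimal sketch of such a generator is the classical Lehmer/MINSTD linear congruential generator (the specific generator is our illustrative choice, not one prescribed by the paper):

```python
class MinstdRng:
    """Minimal Lehmer/MINSTD linear congruential generator:
    x_{k+1} = 16807 * x_k mod (2^31 - 1), scaled to U(0, 1) variates."""
    M = 2**31 - 1
    A = 16807

    def __init__(self, seed=12345):
        if not 0 < seed < self.M:
            raise ValueError("seed must lie strictly between 0 and 2^31 - 1")
        self.state = seed

    def next_u01(self):
        """Advance the state and return a value in (0, 1)."""
        self.state = (self.A * self.state) % self.M
        return self.state / self.M

rng = MinstdRng(seed=12345)
sample = [rng.next_u01() for _ in range(5)]
```

The output is of course deterministic given the seed; "randomness" here means only that the stream behaves statistically like IID uniforms, which is exactly the assumption the abstract points out.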
A randomized quasi-Monte Carlo simulation method for Markov chains
Operations Research, 2007
Cited by 20 (8 self)
We introduce and study a randomized quasi-Monte Carlo method for estimating the state distribution at each step of a Markov chain. The number of steps in the chain can be random and unbounded. The method simulates n copies of the chain in parallel, using a (d + 1)-dimensional highly uniform point set of cardinality n, randomized independently at each step, where d is the number of uniform random numbers required at each transition of the Markov chain. This technique is particularly effective for obtaining a low-variance unbiased estimator of the expected total cost up to some random stopping time, when state-dependent costs are paid at each step. It is generally more effective when the state space has a natural order related to the cost function. We provide numerical illustrations where the variance reduction with respect to standard Monte Carlo is substantial. The variance can be reduced by factors of several thousand in some cases. We prove bounds on the convergence rate of the worst-case error and variance for special situations. In line with what is typically observed in randomized quasi-Monte Carlo contexts, our empirical results indicate much better convergence than what these bounds guarantee.
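The core loop of this "array-RQMC" idea can be sketched as follows for the simplest case d = 1: sort the n chains by state at each step, then drive chain i with the i-th point of a randomly shifted stratified point set. This is a simplified illustration (fixed horizon instead of a random stopping time, a toy random-walk chain of our own choosing), not the paper's full method:

```python
import random

def array_rqmc_mean(n_chains, n_steps, step_fn, cost_fn, seed=1):
    """Sketch of the array-RQMC idea for d = 1:
    simulate n_chains copies in parallel; at each step, sort the chains
    by state and advance chain i with the stratified point (i + U)/n,
    where U is a fresh random shift drawn independently at each step.
    Returns the estimated expected total cost over a fixed horizon."""
    rng = random.Random(seed)
    states = [0.0] * n_chains
    total = 0.0
    for _ in range(n_steps):
        states.sort()                         # order chains by state
        shift = rng.random()                  # independent randomization per step
        states = [step_fn(s, (i + shift) / n_chains)
                  for i, s in enumerate(states)]
        total += sum(cost_fn(s) for s in states) / n_chains
    return total

# Toy chain (illustrative, not from the paper): random walk s <- s + u - 0.5,
# with per-step cost |s|.
est = array_rqmc_mean(n_chains=64, n_steps=10,
                      step_fn=lambda s, u: s + u - 0.5,
                      cost_fn=abs, seed=7)
```

The sort step is what exploits a "natural order" on the state space: it matches low states to low points and high states to high points, which is where the variance reduction comes from.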
SSJ: A Framework for Stochastic Simulation in Java
Proceedings of the 2002 Winter Simulation Conference, 2002
Cited by 19 (0 self)
We introduce SSJ, an organized set of software tools implemented in the Java programming language and offering general-purpose facilities for stochastic simulation programming. It supports the event view, process view, continuous simulation, and arbitrary mixtures of these. Performance, flexibility, and extensibility were key criteria in its design and implementation. We illustrate its use by simple examples and discuss how we dealt with some performance issues in the implementation.
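The "event view" mentioned here is the classic event-scheduling paradigm: events carry timestamps and are executed in chronological order from a priority queue. A tiny illustrative simulator (this is our own sketch, not SSJ's Java API):

```python
import heapq
import itertools

class EventList:
    """Minimal event-view simulator sketch (illustrative only; not SSJ's API).
    Events are (time, action) pairs executed in chronological order."""
    def __init__(self):
        self.now = 0.0
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for simultaneous events

    def schedule(self, delay, action):
        """Schedule a zero-argument callable to fire `delay` time units from now."""
        heapq.heappush(self._heap, (self.now + delay, next(self._counter), action))

    def run(self):
        """Pop and execute events in time order until the list is empty."""
        while self._heap:
            self.now, _, action = heapq.heappop(self._heap)
            action()

log = []
sim = EventList()
sim.schedule(2.0, lambda: log.append(("arrival", sim.now)))
sim.schedule(1.0, lambda: log.append(("setup", sim.now)))
sim.run()   # "setup" fires at t=1.0 before "arrival" at t=2.0
```

The process view and continuous simulation supported by SSJ build on top of this same clock-and-event-list core.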
Probabilistic error bounds for simulation quantile estimators
Management Science, 2003
Cited by 11 (1 self)
Quantile estimation has become increasingly important, particularly in the financial industry, where value at risk (VaR) has emerged as a standard measurement tool for controlling portfolio risk. In this paper, we analyze the probability that a simulation-based quantile estimator fails to lie in a pre-specified neighborhood of the true quantile. First, we show that this error probability converges to zero exponentially fast with sample size for negatively dependent sampling. Then we consider stratified quantile estimators and show that the error probability for these estimators can be guaranteed to be 0 with sufficiently large, but finite, sample size. These estimators, however, require sample sizes that grow exponentially in the problem dimension. Numerical experiments on a simple VaR example illustrate the potential for variance reduction.
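The simulation-based quantile estimator in question is simply an order statistic of the simulated sample. A minimal sketch, with a toy standard-normal "loss" model of our own choosing rather than the paper's VaR example:

```python
import math
import random

def quantile_estimator(sample, p):
    """Simulation-based p-quantile estimate: the ceil(n*p)-th order
    statistic of the sample (the usual empirical-quantile estimator)."""
    xs = sorted(sample)
    k = math.ceil(len(xs) * p)
    return xs[k - 1]

# Toy VaR-style example (illustrative loss model, not from the paper):
rng = random.Random(0)
losses = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
var_99 = quantile_estimator(losses, 0.99)   # near 2.33 for N(0, 1) losses
```

The paper's question is then how likely `var_99` is to fall outside a fixed neighborhood of the true 0.99-quantile, and how fast that probability shrinks as the sample grows.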
Polynomial Integration Lattices
Cited by 9 (2 self)
Lattice rules are quasi-Monte Carlo methods for estimating large-dimensional integrals over the unit hypercube. In this paper, after briefly reviewing key ideas of quasi-Monte Carlo methods, we give an overview of recent results, generalize them, and provide several new results for lattice rules defined in spaces of polynomials and of formal series with coefficients in a finite ring. We discuss basic properties, implementations, a randomized version, and quality criteria (i.e., measures of uniformity) for selecting the parameters. Two types of polynomial lattice rules are examined: dimension-wise lattices and resolution-wise lattices. These rules turn out to be special cases of digital net constructions, which we reinterpret as yet another type of lattice in a space of formal series. Our development underlines the connections between integration lattices and digital nets.
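For orientation, the ordinary (non-polynomial) analogue of a randomized lattice rule is a randomly shifted rank-1 lattice: points x_i = frac(i·z/n + U) for a generating vector z and a uniform random shift U. A sketch, using the classical 2D Fibonacci lattice as the example (our choice of illustration; the paper's polynomial lattices replace integer arithmetic with polynomial arithmetic over a finite ring):

```python
import math
import random

def shifted_lattice_estimate(f, n, z, rng):
    """Randomly shifted rank-1 lattice rule: average f over the points
    x_i = frac(i * z / n + shift), i = 0..n-1, for one uniform random
    shift vector. Unbiased for the integral of f over [0, 1)^d."""
    d = len(z)
    shift = [rng.random() for _ in range(d)]
    total = 0.0
    for i in range(n):
        x = [math.modf(i * zj / n + sj)[0] for zj, sj in zip(z, shift)]
        total += f(x)
    return total / n

# Example: 2D Fibonacci lattice with n = 610 and z = (1, 377),
# integrating f(x) = x0 * x1 (exact value 1/4).
rng = random.Random(3)
est = shifted_lattice_estimate(lambda x: x[0] * x[1], 610, (1, 377), rng)
```

The random shift is what makes the estimator unbiased, so the variance over independent shifts becomes the natural quality measure, mirroring the randomized version discussed in the abstract.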
Quasi-Monte Carlo sampling to improve the efficiency of Monte Carlo EM
Computational Statistics and Data Analysis, 2005
Cited by 7 (3 self)
In this paper we investigate an efficient implementation of the Monte Carlo EM algorithm based on quasi-Monte Carlo sampling. The Monte Carlo EM algorithm is a stochastic version of the deterministic EM (Expectation-Maximization) algorithm in which an intractable E-step is replaced by a Monte Carlo approximation. Quasi-Monte Carlo methods produce deterministic sequences of points that can significantly improve the accuracy of Monte Carlo approximations over purely random sampling. One drawback to deterministic quasi-Monte Carlo methods is that it is generally difficult to determine the magnitude of the approximation error. However, in order to implement the Monte Carlo EM algorithm in an automated way, the ability to measure this error is fundamental. Recent developments in randomized quasi-Monte Carlo methods can overcome this drawback. We investigate the implementation of an automated, data-driven Monte Carlo EM algorithm based on randomized quasi-Monte Carlo methods. We apply this algorithm to a geostatistical model of online purchases and find that it can significantly decrease the total simulation effort, thus showing great potential for improving upon the efficiency of the classical Monte Carlo EM algorithm. Key words and phrases: Monte Carlo error; low-discrepancy sequence; Halton sequence; EM algorithm; geostatistical model.
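The Halton sequence named in the key words is built coordinate-wise from van der Corput digit reversals in coprime bases. A sketch of the construction and of using it to approximate an E-step-style expectation (the integrand here is a hypothetical stand-in, chosen only so the quadrature is easy to check; it is not the paper's geostatistical model):

```python
import math

def van_der_corput(i, base):
    """i-th van der Corput point: reverse the base-b digits of i
    about the radix point."""
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, digit = divmod(i, base)
        x += digit / denom
    return x

def halton(n, bases=(2, 3)):
    """First n points of the Halton sequence (index starts at 1)."""
    return [[van_der_corput(i, b) for b in bases] for i in range(1, n + 1)]

# Hypothetical "intractable" E-step expectation E[h(U)], U uniform on (0,1)^2,
# with h(u) = exp(u0 * u1); the exact value is about 1.3179.
pts = halton(500)
estimate = sum(math.exp(u[0] * u[1]) for u in pts) / len(pts)
```

Swapping IID uniforms for these points is the efficiency gain the abstract describes; the randomized variants it cites (e.g., randomly shifted or scrambled sequences) additionally restore an unbiased error estimate, which is what the automated stopping rule needs.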
DCMA, yet another derandomization in Covariance-Matrix-Adaptation
GECCO'07, 2007
Cited by 5 (4 self)
In a preliminary part of this paper, we analyze the necessity of randomness in evolution strategies. We conclude that "continuous" randomness is necessary, but that much less randomness is needed than is commonly used in evolution strategies. We then apply these results to CMA-ES, a famous evolution strategy already based on the idea of derandomization, which uses independent random Gaussian mutations. We replace these independent random Gaussian mutations by a quasi-random sample. The modification is very easy to make; the modified algorithm is computationally more efficient, and it converges faster in terms of the number of iterations required for a given precision.
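The substitution is conceptually simple: instead of drawing independent standard Gaussian mutation vectors, push a low-discrepancy point set through the inverse normal CDF. A sketch using a 2D Halton set (the particular sequence and dimensionality are our illustrative choices, not the paper's exact construction):

```python
import statistics

def quasi_gaussian_mutations(n, dim, bases=(2, 3)):
    """Sketch of derandomized mutations: map a low-discrepancy (Halton)
    point set through the inverse standard-normal CDF, in place of
    independent Gaussian draws. Requires dim <= len(bases)."""
    norm = statistics.NormalDist()   # standard normal, inv_cdf available

    def vdc(i, base):
        x, denom = 0.0, 1.0
        while i > 0:
            denom *= base
            i, d = divmod(i, base)
            x += d / denom
        return x

    return [[norm.inv_cdf(vdc(i, bases[j])) for j in range(dim)]
            for i in range(1, n + 1)]

mutations = quasi_gaussian_mutations(200, 2)
```

Each row is a "mutation" vector whose coordinates follow the standard normal marginals CMA-ES expects, but the set as a whole covers the space far more evenly than an IID Gaussian sample, which is the source of the reported speedup.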
ACTIVE LEARNING IN REGRESSION, WITH APPLICATION TO STOCHASTIC DYNAMIC PROGRAMMING
Cited by 5 (1 self)
We study active learning as a derandomized form of sampling. We show that full derandomization is not suitable in a robust framework, propose partially derandomized samplings, and develop new active learning methods (i) in which expert knowledge is easy to integrate, (ii) with a parameter for the exploration/exploitation dilemma, and (iii) less randomized than fully random sampling (yet not deterministic either). Experiments are performed in the case of regression for value-function learning on a continuous domain. Our main results are (i) efficient partially derandomized point sets, (ii) moderate-derandomization theorems, (iii) experimental evidence of the importance of the frontier, and (iv) a new regression-specific, user-friendly sampling tool that is less robust than blind samplers but sometimes works very efficiently in large dimensions. All experiments can be reproduced by downloading the source code and running the provided command line.
Combination of General Antithetic Transformations and Control Variables
, 2003
Cited by 5 (1 self)
Several methods for reducing the variance in the context of Monte Carlo simulation are based on correlation induction. These include antithetic variates, Latin hypercube sampling, and randomized versions of quasi-Monte Carlo methods such as lattice rules and digital nets, where the resulting estimators are usually weighted averages of several dependent random variables that can be seen as function evaluations at a finite set of random points in the unit hypercube. In this paper, we consider a setting where these methods can be combined with the use of control variates, and we provide conditions under which we can formally prove that the variance is minimized by choosing equal weights and equal control variate coefficients across the different points of evaluation, regardless of the function (integrand) being evaluated.
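The simplest instance of this combination is antithetic variates (evaluation at U and 1 − U, with equal weights) together with a single control variate applied with the same coefficient at both points, which is the structure the result above concerns. A sketch on a toy integrand of our own choosing, with an illustrative rather than optimized coefficient:

```python
import math
import random

def antithetic_with_cv(f, g, g_mean, beta, n_pairs, rng):
    """Correlation-induction sketch: antithetic pairs (U, 1-U), equal
    weights, combined with a control variate g of known mean g_mean,
    using the same coefficient beta at both evaluation points."""
    total = 0.0
    for _ in range(n_pairs):
        u = rng.random()
        for x in (u, 1.0 - u):                      # antithetic pair
            total += f(x) - beta * (g(x) - g_mean)  # unbiased for any beta
    return total / (2 * n_pairs)

# Toy example: integrate exp(x) on (0, 1) (exact value e - 1 ~ 1.71828),
# control variate g(x) = x^2 with known mean 1/3; beta = 1.0 is illustrative.
rng = random.Random(42)
est = antithetic_with_cv(math.exp, lambda x: x * x, 1.0 / 3.0,
                         beta=1.0, n_pairs=10_000, rng=rng)
```

Because the control-variate correction has known mean zero, the estimator stays unbiased for any `beta`; the paper's contribution is identifying when the equal-weight, equal-coefficient choice shown here is provably variance-optimal.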