SSJ: A Framework for Stochastic Simulation in Java
Proceedings of the 2002 Winter Simulation Conference, 2002
Cited by 20 (0 self)
We introduce SSJ, an organized set of software tools implemented in the Java programming language and offering general-purpose facilities for stochastic simulation programming. It supports the event view, process view, continuous simulation, and arbitrary mixtures of these. Performance, flexibility, and extensibility were key criteria in its design and implementation. We illustrate its use by simple examples and discuss how we dealt with some performance issues in the implementation.
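The "event view" mentioned in the abstract is the classic event-list simulation loop. The following minimal sketch is generic Python, not SSJ's actual Java API; all names (`run_events`, `arrival`) are invented for illustration.

```python
import heapq

def run_events(initial_events, horizon):
    """Minimal event-view simulation loop: a time-ordered event list
    processed until the horizon. Illustrative only, not SSJ's API."""
    agenda = list(initial_events)          # entries are (time, name, action)
    heapq.heapify(agenda)
    log = []
    while agenda and agenda[0][0] <= horizon:
        clock, name, action = heapq.heappop(agenda)
        log.append((clock, name))
        for ev in action(clock):           # an event may schedule new events
            heapq.heappush(agenda, ev)
    return log

# toy model: a recurring "arrival" event every 2.0 time units
def arrival(t):
    return [(t + 2.0, "arrival", arrival)]

log = run_events([(0.0, "arrival", arrival)], horizon=7.0)
```

The heap keeps the agenda sorted by event time, which is the core of the event view; the process and continuous views layer coroutines and ODE integration on top of the same clock.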
A randomized quasi-Monte Carlo simulation method for Markov chains
Operations Research, 2007
Cited by 20 (8 self)
We introduce and study a randomized quasi-Monte Carlo method for estimating the state distribution at each step of a Markov chain. The number of steps in the chain can be random and unbounded. The method simulates n copies of the chain in parallel, using a (d + 1)-dimensional highly uniform point set of cardinality n, randomized independently at each step, where d is the number of uniform random numbers required at each transition of the Markov chain. This technique is effective in particular for obtaining a low-variance unbiased estimator of the expected total cost up to some random stopping time, when state-dependent costs are paid at each step. It is generally more effective when the state space has a natural order related to the cost function. We provide numerical illustrations where the variance reduction with respect to standard Monte Carlo is substantial. The variance can be reduced by factors of several thousands in some cases. We prove bounds on the convergence rate of the worst-case error and variance for special situations. In line with what is typically observed in randomized quasi-Monte Carlo contexts, our empirical results indicate much better convergence than these bounds guarantee.
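The scheme described above (often called array-RQMC) can be sketched in its simplest one-dimensional form: d = 1, chains matched to points by sorting on the state, and a randomly shifted stratified point set regenerated at each step. The function names and the toy mean-reverting chain are illustrative assumptions, not the paper's code.

```python
import random

def array_rqmc(n, steps, transition, rng, x0=0.0):
    """One replicate of simplified (d = 1) array-RQMC: n chain copies
    advanced in lockstep; at each step the states are sorted and matched
    to a randomly shifted stratified point set of cardinality n."""
    states = [x0] * n
    for _ in range(steps):
        states.sort()                                 # order chains by state
        shift = rng.random()                          # fresh shift each step
        points = [(i + shift) / n for i in range(n)]  # stratified uniforms
        states = [transition(x, u) for x, u in zip(states, points)]
    return sum(states) / n                            # estimate of E[X_T]

# toy chain: mean-reverting walk driven by one uniform per transition
def step(x, u):
    return 0.9 * x + (u - 0.5)

est = array_rqmc(n=1024, steps=20, transition=step, rng=random.Random(1))
```

For this linear toy chain E[X_T] = 0, and the stratified driving uniforms keep the estimator's error far below that of n independent chains.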
Probabilistic error bounds for simulation quantile estimators
Management Science, 2003
Cited by 12 (1 self)
Quantile estimation has become increasingly important, particularly in the financial industry, where value at risk (VaR) has emerged as a standard measurement tool for controlling portfolio risk. In this paper, we analyze the probability that a simulation-based quantile estimator fails to lie in a pre-specified neighborhood of the true quantile. First, we show that this error probability converges to zero exponentially fast with sample size for negatively dependent sampling. Then we consider stratified quantile estimators and show that the error probability for these estimators can be guaranteed to be 0 with a sufficiently large, but finite, sample size. These estimators, however, require sample sizes that grow exponentially in the problem dimension. Numerical experiments on a simple VaR example illustrate the potential for variance reduction.
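The simulation-based quantile estimator being analyzed is, in its plainest form, an order statistic of the simulated sample. A minimal sketch (the Exp(1) loss model and all names are assumptions for illustration) shows the estimator landing in a small neighborhood of the true quantile:

```python
import math
import random

def sim_quantile(sample, p):
    """Empirical p-quantile: the ceil(n*p)-th order statistic of the sample."""
    xs = sorted(sample)
    k = math.ceil(len(xs) * p)
    return xs[k - 1]

rng = random.Random(42)
n = 100_000
sample = [rng.expovariate(1.0) for _ in range(n)]  # toy Exp(1) loss model
q_hat = sim_quantile(sample, 0.95)                 # simulated 95% "VaR"
q_true = -math.log(0.05)                           # true 0.95-quantile of Exp(1)
```

The paper's results bound the probability that `q_hat` falls outside a fixed neighborhood of `q_true`, showing it shrinks exponentially in n.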
Quasi-Monte Carlo sampling to improve the efficiency of Monte Carlo EM
Computational Statistics and Data Analysis, 2005
Cited by 10 (5 self)
In this paper we investigate an efficient implementation of the Monte Carlo EM algorithm based on quasi-Monte Carlo sampling. The Monte Carlo EM algorithm is a stochastic version of the deterministic EM (Expectation-Maximization) algorithm in which an intractable E-step is replaced by a Monte Carlo approximation. Quasi-Monte Carlo methods produce deterministic sequences of points that can significantly improve the accuracy of Monte Carlo approximations over purely random sampling. One drawback of deterministic quasi-Monte Carlo methods is that it is generally difficult to determine the magnitude of the approximation error. However, in order to implement the Monte Carlo EM algorithm in an automated way, the ability to measure this error is fundamental. Recent developments in randomized quasi-Monte Carlo methods can overcome this drawback. We investigate the implementation of an automated, data-driven Monte Carlo EM algorithm based on randomized quasi-Monte Carlo methods. We apply this algorithm to a geostatistical model of online purchases and find that it can significantly decrease the total simulation effort, thus showing great potential for improving the efficiency of the classical Monte Carlo EM algorithm. Key words and phrases: Monte Carlo error; low-discrepancy sequence; Halton sequence; EM algorithm; geostatistical model.
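The randomized quasi-Monte Carlo ingredient keyed in the abstract (a Halton sequence plus a randomization that makes the error measurable) can be sketched as follows; the Cranley-Patterson random shift used here is one standard randomization, chosen as an assumption, and the toy integrand stands in for an intractable E-step expectation.

```python
import math
import random

def halton(i, base):
    """i-th element (i >= 1) of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def randomized_halton(n, bases, rng):
    """n Halton points with a Cranley-Patterson random shift: an unbiased
    randomized QMC rule whose error can be estimated from replicates."""
    shifts = [rng.random() for _ in bases]
    return [[(halton(i, b) + s) % 1.0 for b, s in zip(bases, shifts)]
            for i in range(1, n + 1)]

# toy "E-step" expectation: E[exp(-(u1^2 + u2^2))] over the unit square
rng = random.Random(7)
pts = randomized_halton(4096, bases=(2, 3), rng=rng)
approx = sum(math.exp(-(u1 ** 2 + u2 ** 2)) for u1, u2 in pts) / len(pts)
```

Averaging several independently shifted replicates gives both the approximation and a data-driven error estimate, which is what lets the Monte Carlo EM sample size be tuned automatically.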
Acceleration of the multiple-try Metropolis algorithm using antithetic and stratified sampling. Statistics and Computing 17:109, 2007
Cited by 9 (3 self)
The Multiple-Try Metropolis is a recent extension of the Metropolis algorithm in which the next state of the chain is selected from among a set of proposals. We propose a modification of the Multiple-Try Metropolis algorithm which allows the use of correlated proposals, particularly antithetic and stratified proposals. The method is particularly useful for random walk Metropolis in high-dimensional spaces and can be used easily when the proposal distribution is Gaussian. We explore the use of quasi-Monte Carlo (QMC) methods to generate highly stratified samples. A series of examples is presented to evaluate the potential of the method.
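A simplified sketch of the idea: Multiple-Try Metropolis with symmetric Gaussian proposals drawn in antithetic +/- pairs. This is an illustrative assumption of how correlated proposals slot into the MTM loop, not the paper's exact validity-preserving construction; all names are made up.

```python
import math
import random

def mtm_antithetic(logpi, x0, sigma, k, iters, rng):
    """Multiple-Try Metropolis with weights pi(y) and symmetric Gaussian
    proposals drawn in antithetic +/- pairs (a sketch, not the paper's
    exact correlated-proposal scheme)."""
    assert k % 2 == 0
    def antithetic_set(center):            # k candidates in +/- pairs
        out = []
        for _ in range(k // 2):
            z = rng.gauss(0.0, sigma)
            out += [center + z, center - z]
        return out
    x, chain = x0, []
    for _ in range(iters):
        ys = antithetic_set(x)
        wy = [math.exp(logpi(y)) for y in ys]
        y = rng.choices(ys, weights=wy)[0]         # select one proposal
        ref = antithetic_set(y)[:k - 1] + [x]      # reference set around y
        wx = [math.exp(logpi(v)) for v in ref]
        if rng.random() < min(1.0, sum(wy) / sum(wx)):
            x = y                                  # generalized MH acceptance
        chain.append(x)
    return chain

# target: standard normal (log-density up to an additive constant)
chain = mtm_antithetic(lambda v: -0.5 * v * v, 0.0, 2.0, 4, 20000,
                       random.Random(3))
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

Each +/- pair costs one Gaussian draw but places proposals on opposite sides of the current point, which is the antithetic stratification the paper exploits; QMC point sets push the same idea further.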
The EM Algorithm, Its Stochastic Implementation and Global Optimization: Some Challenges and Opportunities for OR
2006
Cited by 7 (4 self)
The EM algorithm is a very powerful optimization method and has gained popularity in many fields. Unfortunately, EM is only a local optimization method and can get stuck in suboptimal solutions. While more and more contemporary data/model combinations yield more than one optimum, there have been only a few attempts at making EM suitable for global optimization. In this paper we review the basic EM algorithm, its properties and challenges, and we focus in particular on its stochastic implementation. The stochastic EM implementation promises relief from some of the contemporary data/model challenges, and it is particularly well-suited for a marriage with global optimization ideas, since most global optimization paradigms are also based on the principles of stochasticity. We review some of the challenges of the stochastic EM implementation and propose a new algorithm that combines the principles of EM with those of the Genetic Algorithm. While this new algorithm shows some promising results for clustering of an online auction database of functional objects, the primary goal of this work is to bridge a gap between the field of statistics, which is home to extensive research on the EM algorithm, and the field of operations research, in which work on global optimization thrives, and to spur new ideas for joint research between the two.
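The local-search behavior under discussion is easiest to see in the textbook case. A minimal EM sketch for a two-component 1-D Gaussian mixture (fixed unit variances and equal weights are simplifying assumptions, as is every name here) converges from a crude start to the nearby local optimum:

```python
import math
import random

def em_mixture(data, iters=200):
    """Basic EM for a two-component 1-D Gaussian mixture, with fixed unit
    variances and equal mixing weights (simplifying assumptions)."""
    mu1, mu2 = min(data), max(data)        # crude initialization
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = [1.0 / (1.0 + math.exp(-(((x - mu2) ** 2 - (x - mu1) ** 2) / 2)))
             for x in data]
        # M-step: responsibility-weighted means
        mu1 = sum(ri * x for ri, x in zip(r, data)) / sum(r)
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - sum(r))
    return sorted((mu1, mu2))

rng = random.Random(9)
data = ([rng.gauss(-2, 1) for _ in range(300)]
        + [rng.gauss(2, 1) for _ in range(300)])
mus = em_mixture(data)
```

Each iteration provably increases the likelihood, which is exactly why EM is reliable locally and why, with multiple optima, the starting point decides which basin it ends up in; the stochastic and genetic variants discussed above attack that limitation.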
Polynomial Integration Lattices
Cited by 6 (1 self)
Lattice rules are quasi-Monte Carlo methods for estimating large-dimensional integrals over the unit hypercube. In this paper, after briefly reviewing key ideas of quasi-Monte Carlo methods, we give an overview of recent results, generalize them, and provide several new results, for lattice rules defined in spaces of polynomials and of formal series with coefficients in a finite ring. We discuss basic properties, implementations, a randomized version, and quality criteria (i.e., measures of uniformity) for selecting the parameters. Two types of polynomial lattice rules are examined: dimension-wise lattices and resolution-wise lattices. These rules turn out to be special cases of digital net constructions, which we reinterpret as yet another type of lattice in a space of formal series. Our development underlines the connections between integration lattices and digital nets.
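For orientation, the ordinary (non-polynomial) object these rules generalize is the rank-1 integration lattice: the point set {i z / n mod 1}, here with a random shift for unbiasedness. The generating vector below is an arbitrary illustrative choice, not an optimized one, and all names are made up.

```python
import random

def shifted_lattice(n, z, rng):
    """Rank-1 lattice rule {i*z/n mod 1 : i = 0..n-1} with a random shift:
    the ordinary integration lattice that polynomial rules generalize."""
    shift = [rng.random() for _ in z]
    return [[(i * z[j] / n + shift[j]) % 1.0 for j in range(len(z))]
            for i in range(n)]

# integrate f(u) = u1*u2 over [0,1]^2 (true value 1/4) with an
# illustrative, unoptimized generating vector z = (1, 233)
rng = random.Random(11)
pts = shifted_lattice(1021, (1, 233), rng)
est = sum(u[0] * u[1] for u in pts) / len(pts)
```

The polynomial rules of the paper replace the integers mod n by polynomials over a finite ring, which is what connects them to digital nets.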
Active Learning in Regression, with Application to Stochastic Dynamic Programming
Cited by 6 (1 self)
We study active learning as a derandomized form of sampling. We show that full derandomization is not suitable in a robust framework, propose partially derandomized samplings, and develop new active learning methods (i) in which expert knowledge is easy to integrate, (ii) with a parameter for the exploration/exploitation dilemma, and (iii) less randomized than fully random sampling (yet also not deterministic). Experiments are performed in the case of regression for value-function learning on a continuous domain. Our main results are (i) efficient partially derandomized point sets, (ii) moderate-derandomization theorems, (iii) experimental evidence of the importance of the frontier, and (iv) a new regression-specific, user-friendly sampling tool, less robust than blind samplers but one that sometimes works very efficiently in large dimensions. All experiments can be reproduced by downloading the source code and running the provided command line.
DCMA, yet another derandomization in Covariance-Matrix-Adaptation
GECCO'07, 2007
Cited by 5 (4 self)
In a preliminary part of this paper, we analyze the necessity of randomness in evolution strategies. We conclude that "continuous" randomness is necessary, but that far less randomness is needed than is commonly used in evolution strategies. We then apply these results to CMA-ES, a famous evolution strategy already based on the idea of derandomization, which uses random independent Gaussian mutations. Here we replace these random independent Gaussian mutations by a quasi-random sample. The modification is very easy to make, the modified algorithm is computationally more efficient, and its convergence is faster in terms of the number of iterates for a given precision.
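The substitution described above amounts to generating the population's Gaussian mutation vectors from a low-discrepancy point set rather than independent draws. A minimal sketch, assuming a randomly shifted Halton set pushed through the inverse normal CDF (one standard way to do this; the names and parameters are illustrative, and this is not the DCMA code):

```python
import random
import statistics

def quasi_gaussian_mutations(n, dim, rng):
    """Gaussian mutation vectors from a randomly shifted Halton point set
    pushed through the inverse normal CDF, instead of independent draws.
    A sketch of the substitution, not the paper's implementation."""
    bases = [2, 3, 5, 7, 11, 13][:dim]     # first primes as Halton bases
    norm = statistics.NormalDist()
    def vdc(i, b):                          # van der Corput radical inverse
        f, r = 1.0, 0.0
        while i:
            f /= b
            r += f * (i % b)
            i //= b
        return r
    shifts = [rng.random() for _ in range(dim)]
    return [[norm.inv_cdf(min(max((vdc(i, b) + s) % 1.0, 1e-12), 1 - 1e-12))
             for b, s in zip(bases, shifts)]
            for i in range(1, n + 1)]

muts = quasi_gaussian_mutations(256, 3, random.Random(5))
mean0 = sum(m[0] for m in muts) / len(muts)
```

The random shift keeps the sample "continuously" random, as the paper's analysis requires, while the Halton structure spreads the mutations far more evenly than independent Gaussians; in CMA-ES each vector would then be scaled by the adapted covariance factor.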