Results 1 - 7 of 7
A randomized quasi-Monte Carlo simulation method for Markov chains
 Operations Research
, 2007
Abstract

Cited by 20 (8 self)
Abstract. We introduce and study a randomized quasi-Monte Carlo method for estimating the state distribution at each step of a Markov chain. The number of steps in the chain can be random and unbounded. The method simulates n copies of the chain in parallel, using a (d + 1)-dimensional highly uniform point set of cardinality n, randomized independently at each step, where d is the number of uniform random numbers required at each transition of the Markov chain. This technique is effective in particular for obtaining a low-variance unbiased estimator of the expected total cost up to some random stopping time, when state-dependent costs are paid at each step. It is generally more effective when the state space has a natural order related to the cost function. We provide numerical illustrations where the variance reduction with respect to standard Monte Carlo is substantial. The variance can be reduced by factors of several thousand in some cases. We prove bounds on the convergence rate of the worst-case error and variance for special situations. In line with what is typically observed in randomized quasi-Monte Carlo contexts, our empirical results indicate much better convergence than what these bounds guarantee.
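The array-RQMC idea in this abstract can be sketched in a few lines. Everything below is an illustrative assumption, not taken from the paper: the chain is a toy Lindley-type waiting-time recursion driven by d = 1 uniform per transition, and a randomly shifted one-dimensional stratified grid stands in for the (d + 1)-dimensional highly uniform point set. The essential mechanics survive: n chains in parallel, a fresh randomization of the point set at each step, and a sort of the states so that the i-th smallest point drives the i-th smallest state.

```python
import numpy as np

def array_rqmc_toy(n=256, steps=20, rho=0.8, seed=0):
    """Toy array-RQMC estimator of expected total cost over `steps`
    transitions of W <- max(0, W + S - 1), with d = 1 uniform per
    transition. Dynamics and names are illustrative, not the paper's."""
    rng = np.random.default_rng(seed)
    w = np.zeros(n)                      # n copies of the chain in parallel
    base = (np.arange(n) + 0.5) / n      # highly uniform 1-D point set
    total_cost = 0.0
    for _ in range(steps):
        # Randomize the point set independently at each step (random shift),
        # then sort so the i-th smallest point drives the i-th smallest state.
        u = np.sort((base + rng.random()) % 1.0)
        w.sort()
        # One transition per chain: exponential increment via inversion.
        w = np.maximum(0.0, w + rho * -np.log1p(-u) - 1.0)
        total_cost += w.mean()           # state-dependent cost paid each step
    return total_cost
```

The sort is the step that exploits a natural order on the state space, which is exactly the condition the abstract flags as making the method most effective.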
Random search for hyperparameter optimization
 In: Journal of Machine Learning Research
Abstract

Cited by 17 (1 self)
Grid search and manual search are the most widely used strategies for hyperparameter optimization. This paper shows empirically and theoretically that randomly chosen trials are more efficient for hyperparameter optimization than trials on a grid. Empirical evidence comes from a comparison with a large previous study that used grid search and manual search to configure neural networks and deep belief networks. Compared with neural networks configured by a pure grid search, we find that random search over the same domain is able to find models that are as good or better within a small fraction of the computation time. Granting random search the same computational budget, random search finds better models by effectively searching a larger, less promising configuration space. Compared with deep belief networks configured by a thoughtful combination of manual search and grid search, purely random search over the same 32-dimensional configuration space found statistically equal performance on four of seven data sets, and superior performance on one of seven. A Gaussian process analysis of the function from hyperparameters to validation set performance reveals that for most data sets only a few of the hyperparameters really matter, but that different hyperparameters are important on different data sets. This phenomenon makes
Fast Generation of Randomized Low-Discrepancy Point Sets
, 2001
Abstract

Cited by 12 (1 self)
We introduce two novel techniques for speeding up the generation of digital (t, s)-sequences. Based on these results, a new algorithm for the construction of Owen's randomly permuted (t, s)-sequences is developed and analyzed. An implementation is available at http://www.mcqmc.org/Software.html.
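For orientation, here is a naive sketch of what Owen's random permutation (nested scrambling) does, applied to the base-2 van der Corput sequence, the s = 1 building block of digital (t, s)-sequences. This is the slow O(n x digits) baseline that fast-generation techniques like those in the paper improve on; all names and the lazy-caching scheme are ours, not the paper's.

```python
import random

def owen_scrambled_vdc(n_points, n_bits=16, seed=0):
    """Nested (Owen-style) scrambling of the base-2 van der Corput
    sequence. Each node of the binary digit tree carries an independent
    random bit-flip, drawn lazily and cached. Illustrative sketch."""
    rng = random.Random(seed)
    flips = {}                              # digit prefix -> permutation bit
    points = []
    for i in range(n_points):
        digits = [(i >> b) & 1 for b in range(n_bits)]  # radical-inverse digits
        x = 0.0
        for k, d in enumerate(digits):
            prefix = tuple(digits[:k])      # the flip depends on earlier digits
            if prefix not in flips:
                flips[prefix] = rng.getrandbits(1)
            x += (d ^ flips[prefix]) / 2.0 ** (k + 1)
        points.append(x)
    return points
```

Because each level's flip is a bijection on the digit tree, the scrambled points keep the equidistribution of the original sequence: the first 2^m points still land one per dyadic interval of length 2^-m.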
SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR
, 2005
Abstract
for any errors or inaccuracies that may appear in this document or any software that may be provided in association with this document. This document and the software described in it are furnished under license and may only be used or copied in accordance with the terms of the license. No license, express or implied, by estoppel or otherwise, to any intellectual property
3.0 Documents Intel Math Kernel Library release 6.1. 07/03 4.0 Documents Intel Math Kernel Library release 7.0 Beta. 11/03 5.0 Documents Intel Math Kernel Library release 7.0 Gold. 04/04 6.0 Documents Intel Math Kernel Library release 7.0.1. 07/04 7.0 Doc
Abstract
The information in this document is subject to change without notice and Intel Corporation assumes no responsibility or liability for any errors or inaccuracies that may appear in this document or any software that may be provided in association with this document. This document and the software described in it are furnished under license and may only be used or copied in accordance with the terms of the license. No license, express or implied, by estoppel or otherwise, to any intellectual property rights is granted by this document. The information in this document is provided in connection with Intel products and should not be construed as a commitment by Intel Corporation.
Parallelisation Techniques for Random Number Generators
Abstract
In this chapter, we discuss the parallelisation of three very popular random number generators. In each case, the random number sequence generated is identical to that produced on a CPU by the standard sequential algorithm. The key to the parallelisation is that each CUDA thread block generates a particular block of numbers within the original sequence, and to do this it needs an efficient skip-ahead algorithm to jump to the start of its block. Although the general approach is the same in the three cases, there are significant differences in the details of the implementation due to differences in the size of the state information required by each generator. This is perhaps the point of most general interest: the way in which consideration of the number of registers required, the details of data dependency in advancing the state, and the desire for memory coalescence in storing the output lead to different implementations in the three cases.
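The skip-ahead step the abstract relies on is easiest to see for a linear congruential generator, where jumping k steps costs O(log k) by binary exponentiation of the affine update map. The sketch below is a CPU-side Python illustration of that algebra only; the constants are Knuth's MMIX LCG, chosen for illustration, and the chapter's actual generators and CUDA implementation details differ.

```python
A, C, M = 6364136223846793005, 1442695040888963407, 2**64  # Knuth MMIX LCG

def lcg_next(x):
    """One sequential step of the generator: x -> A*x + C (mod M)."""
    return (A * x + C) % M

def lcg_skip(x, k):
    """Jump ahead k steps in O(log k): the composition of two affine maps
    is affine, so we square the one-step map (a, c) by repeated squaring."""
    ra, rc = 1, 0          # accumulated transform, starts as the identity
    ba, bc = A, C          # current power-of-two transform
    while k:
        if k & 1:
            ra, rc = (ba * ra) % M, (ba * rc + bc) % M   # apply base to result
        ba, bc = (ba * ba) % M, (ba * bc + bc) % M       # square the base
        k >>= 1
    return (ra * x + rc) % M

# Each thread block b responsible for a sub-block of B numbers would start
# its local state at lcg_skip(seed, b * B) and then step sequentially.
```

The same idea generalises to any generator whose state update is linear (e.g. matrix powers for linear-feedback generators), which is why state size, not the skip-ahead principle, is what drives the implementation differences the chapter describes.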