Results 1–10 of 76
When are Quasi-Monte Carlo Algorithms Efficient for High Dimensional Integrals?
 J. Complexity
, 1997
"... Recently quasiMonte Carlo algorithms have been successfully used for multivariate integration of high dimension d, and were significantly more efficient than Monte Carlo algorithms. The existing theory of the worst case error bounds of quasiMonte Carlo algorithms does not explain this phenomenon. ..."
Abstract

Cited by 103 (19 self)
Recently quasi-Monte Carlo algorithms have been successfully used for multivariate integration of high dimension d, and were significantly more efficient than Monte Carlo algorithms. The existing theory of the worst case error bounds of quasi-Monte Carlo algorithms does not explain this phenomenon. This paper presents a partial answer to why quasi-Monte Carlo algorithms can work well for arbitrarily large d. It is done by identifying classes of functions for which the effect of the dimension d is negligible. These are weighted classes in which the behavior in the successive dimensions is moderated by a sequence of weights. We prove that the minimal worst case error of quasi-Monte Carlo algorithms does not depend on the dimension d iff the sum of the weights is finite. We also prove that under this assumption the minimal number of function values in the worst case setting needed to reduce the initial error by ε is bounded by Cε^{-p}, where the exponent p ∈ [1, 2], and C depends ...
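The tractability statement in this abstract can be written out compactly. The notation below (weights γ_j, minimal worst-case error e(n, d), information cost n(ε, d)) is the standard one for this literature and is assumed here rather than quoted from the paper:

```latex
% Strong tractability in a weighted class with weights \gamma_1, \gamma_2, \dots
% (notation assumed, not quoted from the paper)
\sum_{j=1}^{\infty} \gamma_j < \infty
\;\Longleftrightarrow\;
e(n, d) \text{ admits a bound independent of } d,
\quad\text{in which case}\quad
n(\varepsilon, d) \le C\,\varepsilon^{-p},
\qquad p \in [1, 2],
```

with C independent of the dimension d.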
Latin Supercube Sampling for Very High Dimensional Simulations
, 1997
"... This paper introduces Latin supercube sampling (LSS) for very high dimensional simulations, such as arise in particle transport, finance and queuing. LSS is developed as a combination of two widely used methods: Latin hypercube sampling (LHS), and QuasiMonte Carlo (QMC). In LSS, the input variables ..."
Abstract

Cited by 69 (7 self)
This paper introduces Latin supercube sampling (LSS) for very high dimensional simulations, such as arise in particle transport, finance and queuing. LSS is developed as a combination of two widely used methods: Latin hypercube sampling (LHS), and Quasi-Monte Carlo (QMC). In LSS, the input variables are grouped into subsets, and a lower dimensional QMC method is used within each subset. The QMC points are presented in random order within subsets. QMC methods have been observed to lose effectiveness in high dimensional problems. This paper shows that LSS can extend the benefits of QMC to much higher dimensions, when one can make a good grouping of input variables. Some suggestions for grouping variables are given for the motivating examples. Even a poor grouping can still be expected to do as well as LHS. The paper also extends LHS and LSS to infinite dimensional problems. The paper includes a survey of QMC methods, randomized versions of them (RQMC) and previous methods for extending Q...
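The grouping idea in this abstract can be sketched in a few lines: split the coordinates into subsets, fill each subset from its own low-dimensional QMC stream (a Halton sequence here, as one concrete choice), and randomize the run order within each subset. All helper names are ours, not the paper's; this is a minimal illustration, not the paper's construction.

```python
import random

def halton_point(i, bases):
    """i-th point (0-indexed) of the Halton sequence in the given prime bases."""
    point = []
    for base in bases:
        x, f, n = 0.0, 1.0 / base, i + 1
        while n > 0:
            x += (n % base) * f   # next digit of n in this base
            n //= base
            f /= base
        point.append(x)
    return point

def latin_supercube(n, groups, rng=random):
    """Sketch of Latin supercube sampling (LSS): `groups` is a list of lists of
    dimension indices; each group gets its own low-dimensional Halton stream,
    and the run order of the n points is randomized independently per group."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    d = sum(len(g) for g in groups)
    samples = [[0.0] * d for _ in range(n)]
    for g in groups:
        bases = primes[:len(g)]            # fresh low-dimensional QMC per group
        pts = [halton_point(i, bases) for i in range(n)]
        order = list(range(n))
        rng.shuffle(order)                 # random run order within this group
        for row, i in enumerate(order):
            for k, dim in enumerate(g):
                samples[row][dim] = pts[i][k]
    return samples
```

Within each group the points are still a QMC point set; only their pairing across groups is random, which is what lets LSS scale to many dimensions.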
Recent Advances In Randomized Quasi-Monte Carlo Methods
"... We survey some of the recent developments on quasiMonte Carlo (QMC) methods, which, in their basic form, are a deterministic counterpart to the Monte Carlo (MC) method. Our main focus is the applicability of these methods to practical problems that involve the estimation of a highdimensional inte ..."
Abstract

Cited by 59 (12 self)
We survey some of the recent developments on quasi-Monte Carlo (QMC) methods, which, in their basic form, are a deterministic counterpart to the Monte Carlo (MC) method. Our main focus is the applicability of these methods to practical problems that involve the estimation of a high-dimensional integral. We review several QMC constructions and different randomizations that have been proposed to provide unbiased estimators and for error estimation. Randomizing QMC methods allows us to view them as variance reduction techniques. New and old results on this topic are used to explain how these methods can improve over the MC method in practice. We also discuss how this methodology can be coupled with clever transformations of the integrand in order to reduce the variance further. Additional topics included in this survey are the description of figures of merit used to measure the quality of the constructions underlying these methods, and other related techniques for multidimensional integration.
Methods for the Computation of Multivariate t-Probabilities
 Computing Sciences and Statistics
, 2000
"... This paper compares methods for the numerical computation of multivariate tprobabilities for hyperrectangular integration regions. Methods based on acceptancerejection, sphericalradial transformations and separationofvariables transformations are considered. Tests using randomly chosen problems ..."
Abstract

Cited by 39 (9 self)
This paper compares methods for the numerical computation of multivariate t-probabilities for hyper-rectangular integration regions. Methods based on acceptance-rejection, spherical-radial transformations and separation-of-variables transformations are considered. Tests using randomly chosen problems show that the most efficient numerical methods use a transformation developed by Genz (1992) for multivariate normal probabilities. These methods allow moderately accurate multivariate t-probabilities to be quickly computed for problems with as many as twenty variables. Methods for the noncentral multivariate t-distribution are also described. Key Words: multivariate t-distribution, noncentral distribution, numerical integration, statistical computation. 1 Introduction A common problem in many statistics applications is the numerical computation of the multivariate t (MVT) distribution function (see Tong, 1990) defined by T(a, b; Σ, ν) = Γ((ν+m)/2) / (Γ(ν/2) √(|Σ|...
Randomized Halton Sequences
 Mathematical and Computer Modelling
, 2000
"... The Halton sequence is a wellknown multidimensional low discrepancy sequence. In this paper, we propose a new method for randomizing the Halton sequence: we randomize the start point of each component of the sequence. This method combines the potential accuracy advantage of Halton sequence in mult ..."
Abstract

Cited by 26 (1 self)
The Halton sequence is a well-known multidimensional low discrepancy sequence. In this paper, we propose a new method for randomizing the Halton sequence: we randomize the start point of each component of the sequence. This method combines the potential accuracy advantage of the Halton sequence in multidimensional integration with the practical error estimation advantage of Monte Carlo methods. Theoretically, using multiple randomized Halton sequences as a variance reduction technique we can obtain an efficiency improvement over standard Monte Carlo under rather general conditions. Numerical results show that randomized Halton sequences have better performance than not only Monte Carlo, but also randomly shifted Halton sequences and (single long) purely deterministic skipped Halton sequences. Key Words: Quasi-Monte Carlo methods, low discrepancy sequences, Monte Carlo methods, numerical integration, variance reduction. AMS 1991 Subject Classification: 65C05, 65D30. This work was suppor...
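A simple way to realize the "random start point" idea is to give each coordinate's van der Corput sub-sequence an independent random integer offset. The sketch below uses that simplified stand-in for the paper's randomization; the function names and the `max_start` cap are our assumptions.

```python
import random

def radical_inverse(n, base):
    """Van der Corput radical inverse of the integer n >= 1 in the given base."""
    x, f = 0.0, 1.0 / base
    while n > 0:
        x += (n % base) * f   # reflect the digits of n about the radix point
        n //= base
        f /= base
    return x

def randomized_halton(n_points, bases, max_start=10**6, rng=random):
    """Random-start Halton sketch: coordinate j follows the base-bases[j]
    van der Corput sequence, started at an independent random index."""
    starts = [rng.randrange(max_start) for _ in bases]
    return [[radical_inverse(starts[j] + i + 1, b)
             for j, b in enumerate(bases)]
            for i in range(n_points)]
```

Averaging an estimator over several independent draws of `starts` gives the Monte Carlo-style error estimate the abstract refers to.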
A predictive performance model for superscalar processors
 In International Symposium on Microarchitecture
, 2006
"... Designing and optimizing high performance microprocessors is an increasingly difficult task due to the size and complexity of the processor design space, high cost of detailed simulation and several constraints that a processor design must satisfy. In this paper, we propose the use of empirical non ..."
Abstract

Cited by 23 (0 self)
Designing and optimizing high performance microprocessors is an increasingly difficult task due to the size and complexity of the processor design space, high cost of detailed simulation and several constraints that a processor design must satisfy. In this paper, we propose the use of empirical nonlinear modeling techniques to assist processor architects in making design decisions and resolving complex tradeoffs. We propose a procedure for building accurate nonlinear models that consists of the following steps: (i) selection of a small set of representative design points spread across the processor design space using Latin hypercube sampling, (ii) obtaining performance measures at the selected design points using detailed simulation, (iii) building nonlinear models for performance using the function approximation capabilities of radial basis function networks, and (iv) validating the models using an independently and randomly generated set of design points. We evaluate our model building procedure by constructing nonlinear performance models for programs from the SPEC CPU2000 benchmark suite with a microarchitectural design space that consists of 9 key parameters. Our results show that the models, built using a relatively small number of simulations, achieve high prediction accuracy (only 2.8% error in CPI estimates on average) across a large processor design space. Our models can potentially replace detailed simulation for common tasks such as the analysis of key microarchitectural trends or searches for optimal processor design points.
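Step (i) above, Latin hypercube sampling, is easy to state in code: each of the d coordinates uses exactly one point per stratum of width 1/n, in an independent random order. This is a textbook LHS sketch in the unit cube (mapping to actual parameter ranges is left out):

```python
import random

def latin_hypercube(n, d, rng=random):
    """Basic Latin hypercube sample of n points in [0,1)^d: per coordinate,
    one uniformly jittered point in each of the n equal-width strata,
    with the strata visited in random order."""
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)                            # random stratum order
        cols.append([(p + rng.random()) / n for p in perm])
    return [[cols[j][i] for j in range(d)] for i in range(n)]
```

Projecting an LHS design onto any single parameter covers its whole range evenly, which is why a small number of simulated design points can still span a 9-parameter space.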
The Mean Square Discrepancy of Randomized Nets
, 1996
"... this article a formula for the mean square L ..."
A randomized quasi-Monte Carlo simulation method for Markov chains
 Operations Research
, 2007
"... Abstract. We introduce and study a randomized quasiMonte Carlo method for estimating the state distribution at each step of a Markov chain. The number of steps in the chain can be random and unbounded. The method simulates n copies of the chain in parallel, using a (d + 1)dimensional highlyunifor ..."
Abstract

Cited by 20 (8 self)
We introduce and study a randomized quasi-Monte Carlo method for estimating the state distribution at each step of a Markov chain. The number of steps in the chain can be random and unbounded. The method simulates n copies of the chain in parallel, using a (d + 1)-dimensional highly uniform point set of cardinality n, randomized independently at each step, where d is the number of uniform random numbers required at each transition of the Markov chain. This technique is effective in particular to obtain a low-variance unbiased estimator of the expected total cost up to some random stopping time, when state-dependent costs are paid at each step. It is generally more effective when the state space has a natural order related to the cost function. We provide numerical illustrations where the variance reduction with respect to standard Monte Carlo is substantial. The variance can be reduced by factors of several thousands in some cases. We prove bounds on the convergence rate of the worst-case error and variance for special situations. In line with what is typically observed in randomized quasi-Monte Carlo contexts, our empirical results indicate much better convergence than what these bounds guarantee.
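The mechanism in this abstract can be illustrated for the simplest case d = 1: advance n copies in parallel, sort them by state at each step (the "natural order" the abstract mentions), and drive copy i with the i-th point of a freshly shifted stratified set (i + U)/n. The `step` and `cost` callbacks and the one-dimensional stratified set are our simplifications, not the paper's general construction.

```python
import random

def array_rqmc_mean_cost(step, cost, x0, n, n_steps, rng=random):
    """Sketch of the array-RQMC idea for a one-dimensional chain (d = 1).
    step(state, u) advances one chain with a uniform number u;
    cost(state) is the per-step cost. Returns the estimated expected
    total cost over n_steps, averaged across the n parallel copies."""
    states = [x0] * n
    total = 0.0
    for _ in range(n_steps):
        states.sort()                        # order the copies by state
        shift = rng.random()                 # fresh random shift each step
        states = [step(s, (i + shift) / n)   # stratified u's across copies
                  for i, s in enumerate(states)]
        total += sum(cost(s) for s in states) / n
    return total
```

Because the u's fed to the n copies at each step form a shifted regular grid rather than n independent uniforms, the empirical state distribution stays much closer to the true one than under crude Monte Carlo.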
Why are high-dimensional finance problems often of low effective dimension?
 SIAM J. Sci. Comput
, 2003
"... Many problems in mathematical finance can be formulated as highdimensional integrals, where the large number of dimensions arises from small time steps in time discretization and/or a large number of state variables. QuasiMonte Carlo (QMC) methods have been successfully used for approximating such ..."
Abstract

Cited by 18 (2 self)
Many problems in mathematical finance can be formulated as high-dimensional integrals, where the large number of dimensions arises from small time steps in time discretization and/or a large number of state variables. Quasi-Monte Carlo (QMC) methods have been successfully used for approximating such integrals. To understand this success, this paper focuses on investigating the special features of some typical high-dimensional finance problems, namely option pricing and bond valuation. We provide new insight into the connection between the effective dimension and the efficiency of QMC, and present methods to analyze the dimension structure of a function. We confirm the observation of Caflisch, Morokoff and Owen that functions from finance are often of low effective dimension, in the sense that they can be well approximated by their low-order ANOVA (analysis of variance) terms, usually just the order-1 and order-2 terms. We explore why the effective dimension is small for many integrals from finance. By deriving explicit forms of the ANOVA terms in simple cases, we find that the importance of each dimension is naturally weighted, by certain hidden weights. These weights characterize the relative importance of different variables or groups of variables, and limit the importance of the higher-order ANOVA terms. We study the variance ratios captured by low-order ANOVA terms and their asymptotic properties as the dimension tends to infinity, and show that with the increase of dimension the lower-order terms continue to play a significant role and the higher-order terms tend to be negligible. This provides some insight into high-dimensional problems from finance and explains why QMC algorithms are efficient for problems of this kind.
On the step-by-step construction of quasi-Monte Carlo rules that achieve strong tractability error bounds in weighted Sobolev spaces
 Mathematics of Computation
, 2002
"... Abstract. We develop and justify an algorithm for the construction of quasi– Monte Carlo (QMC) rules for integration in weighted Sobolev spaces; the rules so constructed are shifted rank1 lattice rules. The parameters characterising the shifted lattice rule are found “componentbycomponent”: the ( ..."
Abstract

Cited by 17 (5 self)
We develop and justify an algorithm for the construction of quasi-Monte Carlo (QMC) rules for integration in weighted Sobolev spaces; the rules so constructed are shifted rank-1 lattice rules. The parameters characterising the shifted lattice rule are found "component-by-component": the (d + 1)-th component of the generator vector and the shift are obtained by successive 1-dimensional searches, with the previous d components kept unchanged. The rules constructed in this way are shown to achieve a strong tractability error bound in weighted Sobolev spaces. A search for n-point rules with n prime and all dimensions 1 to d requires a total cost of O(n^3 d^2) operations. This may be reduced to O(n^3 d) operations at the expense of O(n^2) storage. Numerical values of parameters and worst-case errors are given for dimensions up to 40 and n up to a few thousand. The worst-case errors for these rules are found to be much smaller than the theoretical bounds.
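The component-by-component search can be sketched directly: fix the components found so far, try every candidate for the next component of the generator vector z, and keep the one minimizing the worst-case-error criterion. The criterion below (a shift-averaged form built from the Bernoulli polynomial B_2, with product weights) is a standard textbook choice assumed for illustration, not necessarily the paper's exact quantity; the additive constant that would turn it into a squared error is omitted since it does not affect the argmin. This naive loop costs O(n^2 d^2), not the paper's O(n^3 d^2)-to-O(n^3 d) figures, because it skips the shift search.

```python
def b2(x):
    """Bernoulli polynomial B_2(x) = x^2 - x + 1/6."""
    return x * x - x + 1.0 / 6.0

def cbc_lattice(n, d, gamma):
    """Component-by-component sketch for a rank-1 lattice generator vector z
    (n prime, product weights gamma[0..d-1]); greedily minimizes a
    shift-averaged worst-case-error criterion, one component at a time."""
    z = []
    for _ in range(d):
        best_zj, best_err = None, float("inf")
        for cand in range(1, n):               # try every candidate component
            err = 0.0
            for k in range(n):
                prod = 1.0
                for t, zt in enumerate(z + [cand]):
                    frac = (k * zt % n) / n    # lattice point coordinate
                    prod *= 1.0 + gamma[t] * (b2(frac) + 1.0 / 3.0)
                err += prod / n
            if err < best_err:
                best_err, best_zj = err, cand
        z.append(best_zj)                      # previous components unchanged
    return z
```

The resulting rule uses the n points ({k z_1 / n}, ..., {k z_d / n}) for k = 0, ..., n - 1, optionally randomized by a uniform shift modulo 1.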