Results 11-20 of 591
Quasi-Random Sequences and Their Discrepancies
 SIAM J. Sci. Comput
, 1994
Abstract

Cited by 73 (6 self)
Quasi-random (also called low-discrepancy) sequences are a deterministic alternative to random sequences for use in Monte Carlo methods, such as integration and particle simulations of transport processes. The error in uniformity for such a sequence of N points in the s-dimensional unit cube is measured by its discrepancy, which is of size (log N)^s N^{-1} for large N, as opposed to a discrepancy of size (log log N)^{1/2} N^{-1/2} for a random sequence (i.e. for almost any randomly chosen sequence). Several types of discrepancy, one of which is new, are defined and analyzed. A critical discussion of the theoretical bounds on these discrepancies is presented. Computations of discrepancy are presented for a wide choice of dimension s, number of points N, and different quasi-random sequences. In particular, for moderate or large s, there is an intermediate regime in which the discrepancy of a quasi-random sequence is almost exactly the same as that of a randomly chosen sequence...
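As an illustration of the comparison this abstract describes, here is a minimal Python sketch that generates a 2-D Halton sequence (one standard low-discrepancy construction; the paper studies several kinds of discrepancy, not this specific recipe) and compares it to a pseudorandom sample. The grid-based measure below is a crude proxy for the star discrepancy over anchored boxes, not the exact D*_N:

```python
import random

def van_der_corput(n, base):
    """Radical inverse of integer n in the given base -> value in [0, 1)."""
    q, denom = 0.0, 1.0
    while n:
        n, r = divmod(n, base)
        denom *= base
        q += r / denom
    return q

def halton(num_points, bases=(2, 3)):
    """First num_points of the Halton sequence in len(bases) dimensions."""
    return [tuple(van_der_corput(i, b) for b in bases)
            for i in range(1, num_points + 1)]

def box_discrepancy(points, grid=20):
    """Crude star-discrepancy proxy: max |empirical fraction - volume|
    over anchored boxes [0, x) x [0, y) with corners on a grid."""
    n = len(points)
    worst = 0.0
    for i in range(1, grid + 1):
        for j in range(1, grid + 1):
            x, y = i / grid, j / grid
            count = sum(1 for p in points if p[0] < x and p[1] < y)
            worst = max(worst, abs(count / n - x * y))
    return worst

random.seed(0)
n = 1000
d_halton = box_discrepancy(halton(n))
d_random = box_discrepancy([(random.random(), random.random()) for _ in range(n)])
print(d_halton, d_random)  # the Halton set is markedly more uniform
```

With N = 1000 points the low-discrepancy set typically beats the random one by several times on this proxy, consistent with the (log N)^s N^{-1} versus N^{-1/2} rates quoted above.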
Latin Supercube Sampling for Very High Dimensional Simulations
, 1997
Abstract

Cited by 69 (7 self)
This paper introduces Latin supercube sampling (LSS) for very high dimensional simulations, such as arise in particle transport, finance and queuing. LSS is developed as a combination of two widely used methods: Latin hypercube sampling (LHS), and Quasi-Monte Carlo (QMC). In LSS, the input variables are grouped into subsets, and a lower dimensional QMC method is used within each subset. The QMC points are presented in random order within subsets. QMC methods have been observed to lose effectiveness in high dimensional problems. This paper shows that LSS can extend the benefits of QMC to much higher dimensions, when one can make a good grouping of input variables. Some suggestions for grouping variables are given for the motivating examples. Even a poor grouping can still be expected to do as well as LHS. The paper also extends LHS and LSS to infinite dimensional problems. The paper includes a survey of QMC methods, randomized versions of them (RQMC) and previous methods for extending Q...
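LHS, one of the two ingredients that LSS combines, can be sketched in a few lines. This is the generic textbook construction (each coordinate visits every stratum exactly once), not code from the paper:

```python
import random

def latin_hypercube(n, d, rng=random.Random(0)):
    """One LHS design of n points in [0,1)^d: each coordinate is a random
    permutation of the n strata [i/n, (i+1)/n), jittered within each stratum."""
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))  # n points, each a d-tuple

pts = latin_hypercube(8, 2)
# Every coordinate hits each of the 8 strata exactly once:
for dim in range(2):
    assert sorted(int(8 * p[dim]) for p in pts) == list(range(8))
```

In LSS one would, roughly, replace the per-subset sampling with QMC points and randomize their order across subsets; the grouping of variables is the part the paper devotes most attention to.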
Valuation of Mortgage Backed Securities Using Brownian Bridges to Reduce Effective Dimension
, 1997
Abstract

Cited by 68 (13 self)
The quasi-Monte Carlo method for financial valuation and other integration problems has error bounds of size O((log N)^k N^{-1}), or even O((log N)^k N^{-3/2}), which suggests significantly better performance than the error size O(N^{-1/2}) for standard Monte Carlo. But in high dimensional problems this benefit might not appear at feasible sample sizes. Substantial improvements from quasi-Monte Carlo integration have, however, been reported for problems such as the valuation of mortgage-backed securities, in dimensions as high as 360. We believe that this is due to a lower effective dimension of the integrand in those cases. This paper defines the effective dimension and shows in examples how the effective dimension may be reduced by using a Brownian bridge representation.
1 Introduction
Simulation is often the only effective numerical method for the accurate valuation of securities whose value depends on the whole trajectory of interest ...
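The Brownian bridge representation mentioned above can be sketched by bisection: sample the terminal value first, then fill in midpoints conditioned on both endpoints, so the first few normal variates fix the coarse shape of the path and later ones only add fine detail. That concentration of variance in the leading variates is the "lower effective dimension" effect. A minimal Python sketch (grid size and level count are illustrative choices, not the paper's):

```python
import math, random

def brownian_bridge_path(m_levels, T=1.0, rng=random.Random(1)):
    """Build a Brownian path on 2**m_levels + 1 grid points by bridge bisection:
    sample W_T first, then recursively fill each midpoint conditioned on the
    two endpoints of its subinterval."""
    n = 2 ** m_levels
    W = [0.0] * (n + 1)
    W[n] = math.sqrt(T) * rng.gauss(0, 1)          # terminal value first
    step = n
    while step > 1:
        half = step // 2
        for left in range(0, n, step):
            right = left + step
            t_l = left * T / n
            t_m = (left + half) * T / n
            t_r = right * T / n
            # Conditional law of W(t_m) given W(t_l) and W(t_r):
            mean = W[left] + (t_m - t_l) / (t_r - t_l) * (W[right] - W[left])
            var = (t_m - t_l) * (t_r - t_m) / (t_r - t_l)
            W[left + half] = mean + math.sqrt(var) * rng.gauss(0, 1)
        step = half
    return W

path = brownian_bridge_path(5)   # 33 grid points
assert path[0] == 0.0 and len(path) == 33
```

Feeding the bisection levels with the leading coordinates of a QMC point set, rather than with pseudorandom normals, is the combination the paper studies.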
Computer Experiments
, 1996
Abstract

Cited by 67 (5 self)
Introduction
Deterministic computer simulations of physical phenomena are becoming widely used in science and engineering. Computers are used to describe the flow of air over an airplane wing, combustion of gases in a flame, behavior of a metal structure under stress, safety of a nuclear reactor, and so on. Some of the most widely used computer models, and the ones that lead us to work in this area, arise in the design of the semiconductors used in the computers themselves. A process simulator starts with a data structure representing an unprocessed piece of silicon and simulates the steps, such as oxidation, etching and ion injection, that produce a semiconductor device such as a transistor. A device simulator takes a description of such a device and simulates the flow of current through it under varying conditions to determine properties of the device such as its switching speed and the critical voltage at which it switches. A circuit simulator takes a list of devices and the ...
Generating random elements of a finite group
 Comm. Algebra
, 1995
Abstract

Cited by 67 (10 self)
We present a “practical” algorithm to construct random elements of a finite group. We analyse its theoretical behaviour and prove that asymptotically it produces uniformly distributed tuples of elements. We discuss tests to assess its effectiveness and use these to decide when its results are acceptable for some matrix groups.
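The abstract does not spell the algorithm out; a well-known scheme in this vein is a product-replacement-style random walk, sketched below for the symmetric group S_4. Whether this matches the paper's algorithm exactly is not confirmed by the abstract, and the generator tuple, step count, and choice of group are purely illustrative:

```python
import random

def compose(p, q):
    """Permutation composition (p after q), permutations as tuples of images."""
    return tuple(p[i] for i in q)

def product_replacement(generators, steps, rng=random.Random(2)):
    """Sketch of a product-replacement-style walk: keep a tuple of group
    elements, repeatedly replace one entry by its product with another,
    then return a random entry as an approximately random group element."""
    state = list(generators)
    for _ in range(steps):
        i, j = rng.sample(range(len(state)), 2)   # two distinct slots
        state[i] = compose(state[i], state[j])
    return rng.choice(state)

# Generators of S_4 (a transposition and a 4-cycle), each listed twice
# so the walk has room to mix.
gens = [(1, 0, 2, 3), (1, 2, 3, 0), (1, 0, 2, 3), (1, 2, 3, 0)]
sample = product_replacement(gens, steps=200)
assert sorted(sample) == [0, 1, 2, 3]   # still an element of S_4
```

Every replacement stays inside the subgroup generated by the initial tuple, which is why such walks can only ever produce elements of the intended group; the hard part, which the paper addresses, is proving the distribution becomes uniform.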
The Insecurity of the Digital Signature Algorithm with Partially Known Nonces
 Journal of Cryptology
, 2000
Abstract

Cited by 65 (16 self)
We present a polynomial-time algorithm that provably recovers the signer's secret DSA key when a few bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting as usual the small prime of DSA), under a reasonable assumption on the hash function used in DSA. The number of required bits is about log^{1/2} q, and can be further decreased to 2 if one assumes access to ideal lattice basis reduction, namely an oracle for the lattice closest vector problem for the infinity norm. All previously known results were only heuristic, including those of Howgrave-Graham and Smart, who recently introduced that topic. Our attack is based on a connection with the hidden number problem (HNP) introduced at Crypto '96 by Boneh and Venkatesan in order to study the bit-security of the Diffie-Hellman key exchange. The HNP consists, given a prime number q, of recovering a number α ∈ F_q such that for many known random t ∈ F_q ...
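To see why nonce leakage is fatal, consider the degenerate case in which one nonce k is fully known: the key then falls out of the signing equation by simple modular arithmetic, no lattices needed. The paper's contribution is the much harder case where only a few bits of each k are known. A toy Python sketch with hypothetical miniature parameters (p = 23, q = 11, g = 2 of order 11 mod 23), nothing like real DSA sizes:

```python
# Toy DSA group: q divides p - 1, and g has order q modulo p.
p, q, g = 23, 11, 2
x = 7                      # signer's secret key (toy value)
h, k = 4, 5                # message hash and the per-signature nonce

# Standard DSA signing equations.
r = pow(g, k, p) % q
s = (pow(k, -1, q) * (h + x * r)) % q

# If an attacker learns k for a single signature, the key is recovered:
#   s = k^{-1}(h + x r) mod q   =>   x = r^{-1}(s k - h) mod q.
recovered = (pow(r, -1, q) * (s * k - h)) % q
assert recovered == x
print(r, s, recovered)  # → 9 9 7
```

With only partial bits of k, each signature instead yields an approximate linear relation modulo q, which is exactly the hidden number problem the abstract reduces to.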
Recent Advances in Randomized Quasi-Monte Carlo Methods
Abstract

Cited by 59 (12 self)
We survey some of the recent developments on quasi-Monte Carlo (QMC) methods, which, in their basic form, are a deterministic counterpart to the Monte Carlo (MC) method. Our main focus is the applicability of these methods to practical problems that involve the estimation of a high-dimensional integral. We review several QMC constructions and different randomizations that have been proposed to provide unbiased estimators and for error estimation. Randomizing QMC methods allows us to view them as variance reduction techniques. New and old results on this topic are used to explain how these methods can improve over the MC method in practice. We also discuss how this methodology can be coupled with clever transformations of the integrand in order to reduce the variance further. Additional topics included in this survey are the description of figures of merit used to measure the quality of the constructions underlying these methods, and other related techniques for multidimensional integration.
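One of the simplest randomizations of this kind is the Cranley-Patterson random shift of a rank-1 lattice rule: each independent uniform shift (mod 1) yields an unbiased estimator, and the spread across shifts gives an error estimate. A minimal Python sketch; the Fibonacci generating vector below is a standard 2-D textbook choice, not necessarily one discussed in this survey:

```python
import random, statistics

def shifted_lattice_estimates(f, n, z, d, n_shifts=10, rng=random.Random(3)):
    """Cranley-Patterson randomization of the rank-1 lattice {i*z/n mod 1}:
    each independent uniform shift gives one unbiased estimate of the
    integral of f over [0,1)^d."""
    estimates = []
    for _ in range(n_shifts):
        shift = [rng.random() for _ in range(d)]
        total = 0.0
        for i in range(n):
            point = [((i * z[j]) / n + shift[j]) % 1.0 for j in range(d)]
            total += f(point)
        estimates.append(total / n)
    return estimates

# Test integrand with known integral over [0,1)^2: integral of x*y is 1/4.
f = lambda u: u[0] * u[1]
est = shifted_lattice_estimates(f, n=987, z=[1, 610], d=2)  # Fibonacci lattice
print(statistics.mean(est), statistics.stdev(est))  # mean close to 0.25
```

The sample standard deviation across the shifts is the practical error estimate that plain (deterministic) QMC lacks, which is the main point of the RQMC viewpoint surveyed above.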
Interactive Sampling and Rendering for Complex and Procedural Geometry
, 2001
Abstract

Cited by 51 (3 self)
We present a new sampling method for procedural and complex geometries, which allows interactive point-based modeling and rendering of such scenes. For a variety of scenes, object-space point sets can be generated rapidly, resulting in a sufficiently dense sampling of the final image. We present an integrated approach that exploits the simplicity of the point primitive. For procedural objects a hierarchical sampling scheme is presented that adapts sample densities locally according to the projected size in the image. Dynamic procedural objects and interactive user manipulation thus become possible. The same scheme is also applied to on-the-fly generation and rendering of terrains, and enables the use of an efficient occlusion culling algorithm. Furthermore, by using points the system enables interactive rendering and simple modification of complex objects (e.g., trees). For display, hardware-accelerated 3D point rendering is used, but our sampling method can be used by any other point-rendering approach.
Tables Of Linear Congruential Generators Of Different Sizes And Good Lattice Structure
, 1999
Abstract

Cited by 49 (16 self)
We provide sets of parameters for multiplicative linear congruential generators (MLCGs) of different sizes and good performance with respect to the spectral test. For ℓ = 8, 9, ..., 64, 127, 128, we take as a modulus m the largest prime smaller than 2^ℓ, and provide a list of multipliers a such that the MLCG with modulus m and multiplier a has a good lattice structure in dimensions 2 to 32. We provide similar lists for power-of-two moduli m = 2^ℓ, for multiplicative and non-multiplicative LCGs.
1. Introduction
A multiplicative linear congruential generator (MLCG) is defined by a recurrence of the form

  x_n = a x_{n-1} mod m   (1)

where m and a are integers called the modulus and the multiplier, respectively, and x_n ∈ Z_m = {0, ..., m-1} is the state at step n. To obtain a sequence of "random numbers" in the interval [0, 1), one can define the output at step n as

  u_n = x_n / m.   (2)

We use the expression "the MLCG (m, a)" to denote a sequence that obeys (1) and (2). Th...
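Recurrences (1) and (2) can be sketched directly. The modulus and multiplier below are the classic "minimal standard" pair of Park and Miller (m = 2^31 - 1, a = 16807), chosen here because its check value is well known; they are not parameters taken from this paper's tables:

```python
def mlcg(seed, a=16807, m=2**31 - 1):
    """Multiplicative LCG: x_n = a * x_{n-1} mod m, output u_n = x_n / m.
    Parameters are the Park-Miller 'minimal standard', not this paper's."""
    x = seed
    while True:
        x = (a * x) % m
        yield x / m

gen = mlcg(1)
us = [next(gen) for _ in range(5)]
assert all(0.0 < u < 1.0 for u in us)

# Classic check value (Park & Miller): starting from seed 1, the state
# after 10,000 iterations is 1043618065.
x = 1
for _ in range(10_000):
    x = (16807 * x) % (2**31 - 1)
print(x)  # → 1043618065
```

The quality criterion the paper tabulates, the spectral test, measures how well the lattice formed by overlapping tuples (u_n, ..., u_{n+t-1}) fills the unit hypercube; the recurrence itself is the easy part.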
Explicit Cost Bounds of Algorithms for Multivariate Tensor Product Problems
 J. Complexity
, 1994
Abstract

Cited by 44 (10 self)
We study multivariate tensor product problems in the worst case and average case settings. They are defined on functions of d variables. For arbitrary d, we provide explicit upper bounds on the costs of algorithms which compute an ε-approximation to the solution. The cost bounds are of the form

  (c(d) + 2) β_1 (β_2 + β_3 ln(1/ε) / (d-1))^{β_4 (d-1)} (1/ε)^{β_5}.

Here c(d) is the cost of one function evaluation (or one linear functional evaluation), and the β_i's do not depend on d; they are determined by the properties of the problem for d = 1. For certain tensor product problems, these cost bounds do not exceed c(d) K ε^{-p} for some numbers K and p, both independent of d. We apply these general estimates to certain integration and approximation problems in the worst and average case settings. We also obtain an upper bound, which is independent of d, for the number, n(ε, d), of points for which discrepancy (with unequal weights) is at most ε: n(ε, d) ≤ 7.26 ...