Results 1-10 of 56
Non-Uniform Random Variate Generation
, 1986
"... Abstract. This is a survey of the main methods in nonuniform random variate generation, and highlights recent research on the subject. Classical paradigms such as inversion, rejection, guide tables, and transformations are reviewed. We provide information on the expected time complexity of various ..."
Abstract

Cited by 646 (21 self)
 Add to MetaCart
Abstract. This is a survey of the main methods in nonuniform random variate generation, and highlights recent research on the subject. Classical paradigms such as inversion, rejection, guide tables, and transformations are reviewed. We provide information on the expected time complexity of various algorithms, before addressing modern topics such as indirectly specified distributions, random processes, and Markov chain methods.
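To make the inversion paradigm the survey reviews concrete, here is a minimal sketch (illustrative only, not taken from the survey): an exponential variate produced by applying the inverse CDF to a single uniform draw.

```python
import math
import random

def exponential_by_inversion(rate, u=None):
    """Sample Exp(rate) by inversion of its CDF: F^-1(u) = -ln(1 - u) / rate."""
    if u is None:
        u = random.random()  # one U(0, 1) draw
    return -math.log(1.0 - u) / rate
```

Inversion is monotone, so a larger uniform input always yields a larger variate, which is why it preserves stratification and antithetic structure.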
Approximate aggregation techniques for sensor databases
 In ICDE
, 2004
"... In the emerging area of sensorbased systems, a significant challenge is to develop scalable, faulttolerant methods to extract useful information from the data the sensors collect. An approach to this data management problem is the use of sensor database systems, exemplified by TinyDB and Cougar, w ..."
Abstract

Cited by 241 (5 self)
 Add to MetaCart
In the emerging area of sensor-based systems, a significant challenge is to develop scalable, fault-tolerant methods to extract useful information from the data the sensors collect. An approach to this data management problem is the use of sensor database systems, exemplified by TinyDB and Cougar, which allow users to perform aggregation queries such as MIN, COUNT and AVG on a sensor network. Due to power and range constraints, centralized approaches are generally impractical, so most systems use in-network aggregation to reduce network traffic. Also, aggregation strategies must provide fault-tolerance to address the issues of packet loss and node failures inherent in such a system. An unfortunate consequence of standard methods is that they typically introduce duplicate values, which must be accounted for to compute aggregates correctly. Another consequence of loss in the network is that exact aggregation is not possible in general. With this in mind, we investigate the use of approximate in-network aggregation using small sketches. Our contributions are as follows: 1) we generalize well-known duplicate-insensitive sketches for approximating COUNT to handle SUM (and by extension, AVG and other aggregates), 2) we present and analyze methods for using sketches to produce accurate results with low communication and computation overhead (even on low-powered CPUs with little storage and no floating point operations), and 3) we present an extensive experimental validation of our methods.
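The "duplicate-insensitive sketch" the abstract builds on can be illustrated with a Flajolet-Martin style counting sketch: because merging is a bitwise OR, re-adding the same item (a duplicate from multipath routing) changes nothing. This is a minimal sketch of the idea, not the paper's SUM generalization.

```python
import hashlib

def trailing_zeros(x, width=32):
    """Number of trailing zero bits in x (width if x == 0)."""
    if x == 0:
        return width
    n = 0
    while x & 1 == 0:
        x >>= 1
        n += 1
    return n

class FMSketch:
    """Flajolet-Martin style duplicate-insensitive COUNT sketch (illustrative)."""
    def __init__(self, bits=32):
        self.bits = bits
        self.bitmap = 0

    def add(self, item):
        h = int(hashlib.sha256(str(item).encode()).hexdigest(), 16)
        h &= (1 << self.bits) - 1
        self.bitmap |= 1 << trailing_zeros(h, self.bits)

    def merge(self, other):
        self.bitmap |= other.bitmap  # union is just OR: duplicates cannot inflate it

    def estimate(self):
        # R = position of the lowest unset bit; count is roughly 2^R / 0.77351
        r = 0
        while (self.bitmap >> r) & 1:
            r += 1
        return (1 << r) / 0.77351
```

In a sensor network each node merges its children's bitmaps with OR before forwarding, so a reading that arrives along two paths is counted once.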
Random number generation
"... Random numbers are the nuts and bolts of simulation. Typically, all the randomness required by the model is simulated by a random number generator whose output is assumed to be a sequence of independent and identically distributed (IID) U(0, 1) random variables (i.e., continuous random variables dis ..."
Abstract

Cited by 139 (30 self)
 Add to MetaCart
Random numbers are the nuts and bolts of simulation. Typically, all the randomness required by the model is simulated by a random number generator whose output is assumed to be a sequence of independent and identically distributed (IID) U(0, 1) random variables (i.e., continuous random variables distributed uniformly over the interval (0, 1)).
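The kind of generator the abstract describes can be sketched in a few lines. This is a deliberately minimal linear congruential generator, shown only to make "a stream of U(0, 1) values" concrete; the constants are the common Numerical Recipes choice, and real simulations use far stronger generators.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Minimal linear congruential generator yielding floats in [0, 1)."""
    state = seed % m
    while True:
        state = (a * state + c) % m
        yield state / m
```

Determinism given the seed is the point: the same seed reproduces the same "random" stream, which is what makes simulation experiments repeatable.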
Testing that distributions are close
 In IEEE Symposium on Foundations of Computer Science
, 2000
"... Given two distributions over an n element set, we wish to check whether these distributions are statistically close by only sampling. We give a sublinear algorithm which uses O(n 2/3 ɛ −4 log n) independent samples from each distribution, runs in time linear in the sample size, makes no assumptions ..."
Abstract

Cited by 79 (16 self)
 Add to MetaCart
Given two distributions over an n-element set, we wish to check whether these distributions are statistically close by only sampling. We give a sublinear algorithm which uses O(n^(2/3) ɛ^(-4) log n) independent samples from each distribution, runs in time linear in the sample size, makes no assumptions about the structure of the distributions, and distinguishes the cases when the distance between the distributions is small (less than max(ɛ²/(32·n^(1/3)), ɛ/(4·√n))) or large (more than ɛ) in L1 distance. We also give an Ω(n^(2/3) ɛ^(-2/3)) lower bound. Our algorithm has applications to the problem of checking whether a given Markov process is rapidly mixing. We develop sublinear algorithms for this problem as well.
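For reference, the quantity being tested is the L1 distance between the two distributions. The naive plug-in estimator below makes that concrete; unlike the paper's sublinear tester, it needs on the order of n samples to be reliable, which is exactly the cost the paper avoids.

```python
from collections import Counter

def empirical_l1(samples_p, samples_q):
    """Naive plug-in estimate of L1 distance between two distributions,
    computed from the empirical frequencies of two sample sets."""
    cp, cq = Counter(samples_p), Counter(samples_q)
    n_p, n_q = len(samples_p), len(samples_q)
    return sum(abs(cp[x] / n_p - cq[x] / n_q) for x in set(cp) | set(cq))
```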
The Cross-Entropy Method for Combinatorial and Continuous Optimization
, 1999
"... We present a new and fast method, called the crossentropy method, for finding the optimal solution of combinatorial and continuous nonconvex optimization problems with convex bounded domains. To find the optimal solution we solve a sequence of simple auxiliary smooth optimization problems based on ..."
Abstract

Cited by 59 (7 self)
 Add to MetaCart
We present a new and fast method, called the cross-entropy method, for finding the optimal solution of combinatorial and continuous nonconvex optimization problems with convex bounded domains. To find the optimal solution we solve a sequence of simple auxiliary smooth optimization problems based on Kullback-Leibler cross-entropy, importance sampling, Markov chains and the Boltzmann distribution. We use importance sampling as an important ingredient for adaptive adjustment of the temperature in the Boltzmann distribution and use Kullback-Leibler cross-entropy to find the optimal solution. In fact, we use the mode of a unimodal importance sampling distribution, like the mode of a beta distribution, as an estimate of the optimal solution for continuous optimization, and a Markov chain approach for combinatorial optimization. In the latter case we show almost sure convergence of our algorithm to the optimal solution. Supporting numerical results for both continuous and combinatorial optimization problems are given as well. Our empirical studies suggest that the cross-entropy method has running time polynomial in the size of the problem.
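The iterative scheme behind the cross-entropy method can be sketched as follows: sample from a parametric distribution, keep the elite (best-scoring) samples, and refit the distribution to them. This toy 1-D Gaussian version is illustrative only; the paper works with beta distributions, Boltzmann temperatures and Markov chains rather than this simplification.

```python
import random
import statistics

def cross_entropy_minimise(f, mu=0.0, sigma=5.0, n=200, elite=20, iters=40):
    """Cross-entropy method sketch for 1-D continuous minimisation:
    sample from N(mu, sigma), refit (mu, sigma) to the elite samples,
    and repeat until the distribution concentrates near the optimum."""
    for _ in range(iters):
        xs = [random.gauss(mu, sigma) for _ in range(n)]
        xs.sort(key=f)                      # best candidates first
        best = xs[:elite]
        mu = statistics.fmean(best)         # refit to elite set
        sigma = statistics.stdev(best) + 1e-12
    return mu
```

Because the sampling distribution shrinks around the elite set each round, the sequence of auxiliary problems is smooth even when f itself is not.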
Order-Preserving Symmetric Encryption
"... We initiate the cryptographic study of orderpreserving symmetric encryption (OPE), a primitive suggested in the database community by Agrawal et al. (SIGMOD ’04) for allowing efficient range queries on encrypted data. Interestingly, we first show that a straightforward relaxation of standard securi ..."
Abstract

Cited by 25 (0 self)
 Add to MetaCart
We initiate the cryptographic study of order-preserving symmetric encryption (OPE), a primitive suggested in the database community by Agrawal et al. (SIGMOD ’04) for allowing efficient range queries on encrypted data. Interestingly, we first show that a straightforward relaxation of standard security notions for encryption, such as indistinguishability against chosen-plaintext attack (IND-CPA), is unachievable by a practical OPE scheme. Instead, we propose a security notion in the spirit of pseudorandom functions (PRFs) and related primitives asking that an OPE scheme look “as-random-as-possible” subject to the order-preserving constraint. We then design an efficient OPE scheme and prove its security under our notion based on pseudorandomness of an underlying block cipher. Our construction is based on a natural relation we uncover between a random order-preserving function and the hypergeometric probability distribution. In particular, it makes black-box use of an efficient sampling algorithm for the latter.
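The "random order-preserving function" at the heart of the construction is easy to sample directly when the domain is small: choose a random subset of the range and sort it. This sketch is illustrative only; the paper's contribution is sampling individual entries lazily via the hypergeometric distribution, so the full table is never materialized.

```python
import random

def random_order_preserving(domain_size, range_size, seed=None):
    """Sample a uniformly random strictly increasing function
    {1..domain_size} -> {1..range_size} by drawing a random
    domain_size-subset of the range and sorting it."""
    rng = random.Random(seed)
    image = sorted(rng.sample(range(1, range_size + 1), domain_size))
    return dict(zip(range(1, domain_size + 1), image))
```

Such a function preserves order (supporting range queries on ciphertexts) while revealing as little else as the order-preserving constraint permits.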
Non-uniform random number generation through piecewise linear approximations
 IET Computers and Digital Techniques
"... This paper presents a hardware architecture for nonuniform random number generation, which allows the generator’s distribution to be modified at runtime without reconfiguration. The architecture is based on a piecewise linear approximation, using just one table lookup, one comparison and one subtr ..."
Abstract

Cited by 16 (11 self)
 Add to MetaCart
This paper presents a hardware architecture for non-uniform random number generation, which allows the generator’s distribution to be modified at runtime without reconfiguration. The architecture is based on a piecewise linear approximation, using just one table lookup, one comparison and one subtract operation to map from a uniform source to an arbitrary non-uniform distribution, resulting in very low area utilisation and high speeds. Customisation of the distribution is fully automatic, requiring less than a second of CPU time to approximate a new distribution, and around 1000 cycles to switch distributions at runtime. Comparisons with Gaussian-specific generators show that the new architecture uses less than half the resources, provides a higher sample rate, and retains statistical quality for up to 50 billion samples, while also being able to generate other distributions.
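The general idea of mapping a uniform source through a tabulated piecewise-linear inverse CDF can be sketched in software. Note this sketch interpolates with a multiply, so it only loosely models the paper's lookup/compare/subtract hardware datapath; the inverse CDF and segment count here are arbitrary choices for illustration.

```python
import math

def build_table(inv_cdf, segments=256):
    """Tabulate an inverse CDF at segment boundaries, clamped
    away from the 0 and 1 endpoints where it may diverge."""
    eps = 1e-9
    return [inv_cdf(min(max(i / segments, eps), 1.0 - eps))
            for i in range(segments + 1)]

def sample(table, u):
    """Map u ~ U(0, 1), u < 1, to the target distribution with one
    table lookup and a linear interpolation between neighbours."""
    segments = len(table) - 1
    pos = u * segments
    i = int(pos)
    frac = pos - i
    return table[i] + frac * (table[i + 1] - table[i])
```

Switching distributions at runtime is then just swapping in a new table, which matches the architecture's claim of reconfiguration-free customisation.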
A linear algorithm for generating random numbers with a given distribution
 Software Engineering, IEEE Transactions on
, 1991
"... AbstractLet [ be a random variable over a finite set with an arbitrary probability distribution. In this paper we make improvements to a fast method of generating sample values for ( in constant time. Index TermsRandom, randomnumber, randomvariable. I. ..."
Abstract

Cited by 13 (1 self)
 Add to MetaCart
Abstract—Let ξ be a random variable over a finite set with an arbitrary probability distribution. In this paper we make improvements to a fast method of generating sample values for ξ in constant time. Index Terms—Random, random number, random variable.
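Constant-time sampling from an arbitrary finite distribution is the territory of alias methods. As an illustration (not necessarily the paper's exact algorithm), here is Vose's variant: O(n) preprocessing, then each sample costs one uniform integer, one uniform float, and one comparison.

```python
import random

def build_alias(probs):
    """Vose's alias method: build (prob, alias) tables in O(n)."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]     # donate mass to fill column s
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:              # leftovers are numerically ~1
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias, rng=random):
    """Draw one value in O(1): pick a column, then flip a biased coin."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```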
Gaussian random number generators
 ACM Computing Surveys
, 2007
"... Rapid generation of high quality Gaussian random numbers is a key capability for simulations across a wide range of disciplines. Advances in computing have brought the power to conduct simulations with very large numbers of random numbers and with it, the challenge of meeting increasingly stringent ..."
Abstract

Cited by 12 (1 self)
 Add to MetaCart
Rapid generation of high quality Gaussian random numbers is a key capability for simulations across a wide range of disciplines. Advances in computing have brought the power to conduct simulations with very large numbers of random numbers and with it, the challenge of meeting increasingly stringent requirements on the quality of Gaussian random number generators (GRNG). This article describes the algorithms underlying various GRNGs, compares their computational requirements, and examines the quality of the random numbers with emphasis on the behaviour in the tail region of the Gaussian probability density function.
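One of the classical GRNG algorithms such a survey covers is the Box-Muller transform, sketched below for illustration: two independent U(0, 1) draws are mapped to two independent standard Gaussian variates.

```python
import math
import random

def box_muller(rng=random):
    """Box-Muller transform: two U(0,1) draws -> two independent
    standard Gaussian variates."""
    u1, u2 = rng.random(), rng.random()
    r = math.sqrt(-2.0 * math.log(1.0 - u1))  # 1 - u1 avoids log(0)
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)
```

Tail behaviour, which the survey emphasizes, depends on the uniform source: with 32-bit uniforms the largest magnitude this transform can ever produce is bounded by sqrt(-2 ln(2^-32)), i.e. about 6.66 standard deviations.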