Results 1-10 of 241

Non-Uniform Random Variate Generation (1986)
"... This is a survey of the main methods in nonuniform random variate generation, and highlights recent research on the subject. Classical paradigms such as inversion, rejection, guide tables, and transformations are reviewed. We provide information on the expected time complexity of various algorith ..."
Cited by 1006 (25 self)
Abstract:
This is a survey of the main methods in nonuniform random variate generation, and highlights recent research on the subject. Classical paradigms such as inversion, rejection, guide tables, and transformations are reviewed. We provide information on the expected time complexity of various algorithms, before addressing modern topics such as indirectly specified distributions, random processes, and Markov chain methods.
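The rejection paradigm reviewed in this survey can be sketched in a few lines. The Beta(2, 2) target, the uniform proposal, and the envelope constant below are illustrative choices of ours, not examples taken from the survey:

```python
import random

random.seed(0)

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, c):
    """Draw one variate with density target_pdf by rejection, assuming
    target_pdf(x) <= c * proposal_pdf(x) everywhere."""
    while True:
        x = proposal_sample()
        # Accept x with probability target_pdf(x) / (c * proposal_pdf(x))
        if random.random() * c * proposal_pdf(x) <= target_pdf(x):
            return x

# Illustrative target: Beta(2, 2) with density 6x(1 - x) on [0, 1],
# dominated by the uniform proposal with envelope constant c = 1.5.
beta_pdf = lambda x: 6.0 * x * (1.0 - x)
samples = [rejection_sample(beta_pdf, random.random, lambda x: 1.0, 1.5)
           for _ in range(10_000)]
mean = sum(samples) / len(samples)   # Beta(2, 2) has mean 1/2
```

The expected number of proposals per accepted draw equals the envelope constant c, which is why analyses of expected time complexity for rejection methods center on how tightly the proposal dominates the target.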

Stable Distributions, Pseudorandom Generators, Embeddings and Data Stream Computation (2000)
"... In this paper we show several results obtained by combining the use of stable distributions with pseudorandom generators for bounded space. In particular: ffl we show how to maintain (using only O(log n=ffl 2 ) words of storage) a sketch C(p) of a point p 2 l n 1 under dynamic updates of its coo ..."
Cited by 325 (15 self)
Abstract:
In this paper we show several results obtained by combining the use of stable distributions with pseudorandom generators for bounded space. In particular:
• we show how to maintain (using only O(log n / ε^2) words of storage) a sketch C(p) of a point p ∈ l_1^n under dynamic updates of its coordinates, such that given sketches C(p) and C(q) one can estimate |p − q|_1 up to a factor of (1 + ε) with large probability. This solves the main open problem of [10];
• we obtain another sketch function C′ which maps l_1^n into a normed space l_1^m (as opposed to C), such that m = m(n) is much smaller than n; to our knowledge this is the first dimensionality reduction lemma for the l_1 norm;
• we give an explicit embedding of l_2^n into l_1^(n^O(log n)) with distortion 1 + 1/n^Θ(1) and a non-constructive embedding of l_2^n into l_1^O(n) with distortion (1 + ε) such that the embedding can be represented using only O(n log^2 n) bits (as opposed to at least...
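A toy version of the l_1 sketch described above, assuming the standard construction with a dense matrix of i.i.d. Cauchy (1-stable) entries and a median estimator; the dimensions and accuracy threshold are arbitrary illustrative choices:

```python
import math, random

random.seed(0)

n, m = 1000, 400   # vector dimension and number of sketch coordinates
# Cauchy (1-stable) entries: tan(pi * (U - 1/2)) for uniform U
A = [[math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]
     for _ in range(m)]

def sketch(v):
    """Linear sketch C(v) = Av; an update to coordinate j of v translates
    into adding A[i][j] * delta to every sketch coordinate i."""
    return [sum(a * vj for a, vj in zip(row, v)) for row in A]

p = [random.uniform(-1.0, 1.0) for _ in range(n)]
q = [random.uniform(-1.0, 1.0) for _ in range(n)]

# By 1-stability, each coordinate of A(p - q) is Cauchy with scale |p - q|_1,
# and the median of |Cauchy| is 1, so a median over coordinates estimates the norm.
diffs = sorted(abs(a - b) for a, b in zip(sketch(p), sketch(q)))
estimate = diffs[m // 2]
true_l1 = sum(abs(a - b) for a, b in zip(p, q))
```

Because the sketch is linear, C(p) and C(q) can be maintained under coordinate updates without ever storing p or q in full.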

Approximations of small jumps of Lévy processes with a view towards simulation (Journal of Applied Probability)

Comparing data streams using Hamming norms (how to zero in) (2003)
"... Massive data streams are now fundamental to many data processing applications. For example, Internet routers produce large scale diagnostic data streams. Such streams are rarely stored in traditional databases and instead must be processed “on the fly” as they are produced. Similarly, sensor networ ..."
Cited by 82 (7 self)
Abstract:
Massive data streams are now fundamental to many data processing applications. For example, Internet routers produce large scale diagnostic data streams. Such streams are rarely stored in traditional databases and instead must be processed “on the fly” as they are produced. Similarly, sensor networks produce multiple data streams of observations from their sensors. There is growing focus on manipulating data streams and, hence, there is a need to identify basic operations of interest in managing data streams, and to support them efficiently. We propose computation of the Hamming norm as a basic operation of interest. The Hamming norm formalizes ideas that are used throughout data processing. When applied to a single stream, the Hamming norm gives the number of distinct items that are present in that data stream, which is a statistic of great interest in databases. When applied to a pair of streams, the Hamming norm gives an important measure of (dis)similarity: the number of unequal item counts in the two streams. Hamming norms have many uses in comparing data streams. We present a novel approximation technique for estimating the Hamming norm for massive data streams; this relies on what we call the “l_0 sketch” and we prove its accuracy. We test our approximation method on a large quantity of synthetic and real stream data, and show that the estimation is accurate to within a few percentage points.
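Concretely, the Hamming norm counts the non-zero net counts. An exact (non-streaming) version makes the two uses above explicit; the items and counts are invented for illustration, and the paper's small-space “l_0 sketch” approximation is not reproduced here:

```python
from collections import Counter

def hamming_norm(counts):
    """Number of items with a non-zero net count: |x|_H = |{i : x_i != 0}|."""
    return sum(1 for c in counts.values() if c != 0)

# Single stream: the Hamming norm is the number of distinct items present.
stream = ["a", "b", "a", "c", "b", "a"]
x = Counter(stream)
assert hamming_norm(x) == 3   # items a, b, c

# Pair of streams: the Hamming norm of the difference vector counts the
# items whose frequencies disagree between the two streams.
other = Counter(["a", "b", "b", "d"])
diff = Counter(x)
diff.subtract(other)          # keeps negative counts, unlike Counter subtraction with '-'
assert hamming_norm(diff) == 3   # 'a' (3 vs 1), 'c' (1 vs 0), 'd' (0 vs 1)
```

The exact version needs one counter per item, which is exactly what a streaming algorithm cannot afford on massive streams.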

Heavy-Tailed Distributions in Combinatorial Search (1997)
"... Combinatorial search methods often exhibit a large variability in performance. We study the cost profiles of combinatorial search procedures. Our study reveals some intriguing properties of such cost profiles. The distributions are often characterized by very long tails or "heavy tails". W ..."
Cited by 76 (14 self)
Abstract:
Combinatorial search methods often exhibit a large variability in performance. We study the cost profiles of combinatorial search procedures. Our study reveals some intriguing properties of such cost profiles. The distributions are often characterized by very long tails or "heavy tails". We will show that these distributions are best characterized by a general class of distributions that have no moments (i.e., an infinite mean, variance, etc.). Such nonstandard distributions have recently been observed in areas as diverse as economics, statistical physics, and geophysics. They are closely related to fractal phenomena, whose study was introduced by Mandelbrot. We believe this is the first finding of these distributions in a purely computational setting. We also show how random restarts can effectively eliminate heavy-tailed behavior, thereby dramatically improving the overall performance of a search procedure.
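A small simulation of the restart effect, assuming (as an illustration, not the paper's benchmark) that a single run's cost is Pareto with tail index 0.8 and hence has infinite mean; the cutoff value is an arbitrary choice:

```python
import random

random.seed(1)

def pareto_runtime(alpha=0.8, xm=1.0):
    """Heavy-tailed 'search cost': Pareto with tail index alpha < 1 (infinite mean)."""
    return xm / random.random() ** (1.0 / alpha)

def run_with_restarts(cutoff, runtime_sampler):
    """Restart the search whenever a run exceeds `cutoff`; return total cost."""
    total = 0.0
    while True:
        t = runtime_sampler()
        if t <= cutoff:
            return total + t
        total += cutoff   # abandon this run and restart from scratch

trials = 20_000
# Cap individual unrestarted runs at 10^4 so one astronomical draw
# does not dominate the sample mean; the true mean is infinite.
no_restart = [min(pareto_runtime(), 10_000.0) for _ in range(trials)]
restart = [run_with_restarts(cutoff=10.0, runtime_sampler=pareto_runtime)
           for _ in range(trials)]
mean_plain = sum(no_restart) / trials
mean_restart = sum(restart) / trials
```

The restarted procedure has a finite expected cost (a geometric number of bounded attempts), which is the mechanism behind the dramatic improvement reported above.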

The Euler scheme for Lévy driven stochastic differential equations (1997)
"... In relation with MonteCarlo methods to solve some integrodifferential equations, we study the approximation problem of IEg(XT) by IEg ( ¯ Xn T), where (Xt, 0 ≤ t ≤ T) is the solution of a stochastic differential equation governed by a Lévy process (Zt), ( ¯ Xn t) is defined by the Euler discret ..."
Cited by 59 (2 self)
Abstract:
In relation with Monte Carlo methods to solve some integro-differential equations, we study the approximation of E[g(X_T)] by E[g(X̄^n_T)], where (X_t, 0 ≤ t ≤ T) is the solution of a stochastic differential equation governed by a Lévy process (Z_t) and (X̄^n_t) is defined by the Euler discretization scheme with step T/n. With appropriate assumptions on g(·), we show that the error E[g(X_T)] − E[g(X̄^n_T)] can be expanded in powers of 1/n if the Lévy measure of Z has finite moments of sufficiently high order. Otherwise the rate of convergence is slower and its speed depends on the behavior of the tails of the Lévy measure.
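A sketch of the Euler scheme with step T/n, using a toy Lévy driver (Brownian motion plus compound Poisson jumps, chosen because its increments can be sampled exactly); the drift, diffusion, and jump parameters are illustrative assumptions, not the paper's setting:

```python
import math, random

random.seed(2)

def poisson(lam):
    """Poisson sampler (Knuth's multiplication method; fine for small lam)."""
    limit, k, prod = math.exp(-lam), 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

def levy_increment(dt, sigma=1.0, jump_rate=0.5, jump_scale=1.0):
    """Increment of a toy Lévy process over dt: Brownian part plus
    compound Poisson jumps with Gaussian marks (illustrative driver)."""
    dz = random.gauss(0.0, sigma * math.sqrt(dt))
    for _ in range(poisson(jump_rate * dt)):
        dz += random.gauss(0.0, jump_scale)
    return dz

def euler_path(x0, b, f, T, n):
    """Euler scheme with step T/n for dX = b(X) dt + f(X-) dZ."""
    dt, x = T / n, x0
    for _ in range(n):
        x += b(x) * dt + f(x) * levy_increment(dt)
    return x

# Linear SDE dX = 0.1 X dt + 0.2 X dZ; since dZ has mean zero,
# E[g(X_T)] with g(x) = x is exp(0.1 * T) ≈ 1.105 at T = 1.
paths = [euler_path(1.0, lambda x: 0.1 * x, lambda x: 0.2 * x, T=1.0, n=50)
         for _ in range(5000)]
estimate = sum(paths) / len(paths)
```

For general Lévy drivers the increments cannot be sampled exactly, which is where approximations of the small jumps (see the entry above on Lévy process simulation) come in.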

On the Chambers-Mallows-Stuck Method for Simulating Skewed Stable Random Variables (1995)
"... : In this note, we give a proof to the equality in law of a skewed stable variable and a nonlinear transformation of two independent uniform and exponential variables. The lack of an explicit proof of this formula has led to some inaccuracies in the literature. The Chambers et al. (1976) method of c ..."
Cited by 46 (4 self)
Abstract:
In this note, we give a proof of the equality in law of a skewed stable variable and a non-linear transformation of two independent uniform and exponential variables. The lack of an explicit proof of this formula has led to some inaccuracies in the literature. The Chambers et al. (1976) method of computer generation of a skewed stable random variable is based on this equality.
Keywords: stable distribution, characteristic function, random variable generation.
The Central Limit Theorem, which offers the fundamental justification for approximate normality, points to the importance of α-stable distributions: they are the only limiting laws of normalized sums of independent, identically distributed random variables. Gaussian distributions, the best known member of the stable family, have long been well understood and widely used in all sorts of problems. However, they do not allow for large fluctuations and are thus inadequate for modeling high variability. In the last...
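The Chambers-Mallows-Stuck transformation itself is short. The following sketch follows the commonly cited parameterization for α ≠ 1 (an assumption of ours as to the exact variant intended), with a sanity check at α = 2, where the stable law reduces to a Gaussian with variance 2:

```python
import math, random

random.seed(3)

def stable_cms(alpha, beta):
    """One draw from a standard alpha-stable law S(alpha, beta), alpha != 1,
    as a non-linear transformation of a uniform angle U on (-pi/2, pi/2)
    and an independent Exp(1) variable W (Chambers-Mallows-Stuck)."""
    u = math.pi * (random.random() - 0.5)
    w = random.expovariate(1.0)
    t = beta * math.tan(math.pi * alpha / 2.0)
    b = math.atan(t) / alpha
    s = (1.0 + t * t) ** (1.0 / (2.0 * alpha))
    return (s * math.sin(alpha * (u + b)) / math.cos(u) ** (1.0 / alpha)
            * (math.cos(u - alpha * (u + b)) / w) ** ((1.0 - alpha) / alpha))

# Sanity check: at alpha = 2 the stable law is Gaussian with variance 2
# (the skewness parameter beta has no effect there).
xs = [stable_cms(2.0, 0.0) for _ in range(50_000)]
var = sum(x * x for x in xs) / len(xs)
```

At α = 2 the formula collapses to 2 sin(U) √W, which indeed has mean 0 and variance 2; this is the kind of boundary check the equality in law makes easy.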

Financial Modelling and Option Theory with the Truncated Lévy Process (cond-mat/9710197, 1997)
"... In recent studies the truncated Levy process (TLP) has been shown to be very promising for the modeling of financial dynamics. In contrast to the Levy process, the TLP has finite moments and can account for both the previously observed excess kurtosis at short timescales, along with the slow converg ..."
Cited by 42 (0 self)
Abstract:
In recent studies the truncated Lévy process (TLP) has been shown to be very promising for the modeling of financial dynamics. In contrast to the Lévy process, the TLP has finite moments and can account for both the previously observed excess kurtosis at short timescales and the slow convergence to Gaussian at longer timescales. I further test the truncated Lévy paradigm using high-frequency data from the Australian All Ordinaries share market index. I then consider, for the early Lévy-dominated regime, the issue of option hedging for two different hedging strategies that are in some sense optimal. These are compared with the usual delta hedging approach and found to differ significantly. I also derive the natural generalization of the Black-Scholes option pricing formula when the underlying security is modeled by a geometric TLP. This generalization would not be possible without the truncation.

On the exact space complexity of sketching and streaming small norms (SODA, 2010)
"... We settle the 1pass space complexity of (1 ± ε)approximating the Lp norm, for real p with 1 ≤ p ≤ 2, of a lengthn vector updated in a lengthm stream with updates to its coordinates. We assume the updates are integers in the range [−M, M]. In particular, we show the space required is Θ(ε −2 log(mM ..."
Cited by 35 (11 self)
Abstract:
We settle the one-pass space complexity of (1 ± ε)-approximating the L_p norm, for real p with 1 ≤ p ≤ 2, of a length-n vector updated in a length-m stream with updates to its coordinates. We assume the updates are integers in the range [−M, M]. In particular, we show the space required is Θ(ε^−2 log(mM) + log log n) bits. Our result also holds for 0 < p < 1; although L_p is not a norm in this case, it remains a well-defined function. Our upper bound improves upon previous algorithms of [Indyk, JACM ’06] and [Li, SODA ’08]. This improvement comes from showing an improved derandomization of the L_p sketch of Indyk by using k-wise independence for small k, as opposed to using the heavy hammer of a generic pseudorandom generator against space-bounded computation such as Nisan’s PRG. Our lower bound improves upon previous work of [Alon-Matias-Szegedy, JCSS ’99] and [Woodruff, SODA ’04], and is based on showing a direct sum property for the one-way communication of the gap-Hamming problem.
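A toy version of the p-stable sketch being derandomized here, for p = 1/2 (the 0 < p < 1 regime mentioned above), using fully independent stable entries rather than the paper's k-wise independent ones; the dimensions, seed, and Monte Carlo median constant are illustrative assumptions:

```python
import math, random

random.seed(4)

def stable_sym(p):
    """Standard symmetric p-stable draw (Chambers-Mallows-Stuck, beta = 0)."""
    u = math.pi * (random.random() - 0.5)
    w = random.expovariate(1.0)
    return (math.sin(p * u) / math.cos(u) ** (1.0 / p)
            * (math.cos((1.0 - p) * u) / w) ** ((1.0 - p) / p))

def median(values):
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

p, n, m = 0.5, 500, 600          # exponent, vector length, sketch size
A = [[stable_sym(p) for _ in range(n)] for _ in range(m)]
x = [random.uniform(-1.0, 1.0) for _ in range(n)]

# By p-stability, each sketch coordinate sum_j A[i][j] x_j is p-stable with
# scale (sum_j |x_j|^p)^(1/p); dividing the median of |Ax| by the median of
# a standard |p-stable| (estimated by Monte Carlo) and raising to the p-th
# power recovers sum_j |x_j|^p.
y = [sum(a * xj for a, xj in zip(row, x)) for row in A]
scale_const = median([abs(stable_sym(p)) for _ in range(200_000)])
estimate = (median([abs(v) for v in y]) / scale_const) ** p
truth = sum(abs(v) ** p for v in x)
```

Storing the fully independent matrix A defeats the purpose of a small-space streaming algorithm; replacing it with k-wise independent entries for small k, rather than a generic PRG, is exactly the derandomization the abstract describes.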