Results 1–10 of 133
Non-Uniform Random Variate Generation
, 1986
"... Abstract. This is a survey of the main methods in nonuniform random variate generation, and highlights recent research on the subject. Classical paradigms such as inversion, rejection, guide tables, and transformations are reviewed. We provide information on the expected time complexity of various ..."
Abstract

Cited by 632 (21 self)
This is a survey of the main methods in non-uniform random variate generation, highlighting recent research on the subject. Classical paradigms such as inversion, rejection, guide tables, and transformations are reviewed. We provide information on the expected time complexity of various algorithms, before addressing modern topics such as indirectly specified distributions, random processes, and Markov chain methods.
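As an illustration of the rejection paradigm the survey reviews, here is a minimal sketch (the target and envelope densities are my own choice, not taken from the survey): sampling a half-normal variate by rejection from an exponential envelope.

```python
import math
import random

def rejection_sample_half_normal(rng=random.random):
    """Sample from the half-normal density f(x) = sqrt(2/pi) * exp(-x^2/2), x >= 0,
    by rejection from an Exp(1) envelope g(x) = exp(-x).
    The envelope constant is M = sup_x f(x)/g(x) = sqrt(2e/pi), attained at x = 1."""
    M = math.sqrt(2 * math.e / math.pi)
    while True:
        x = -math.log(rng())                      # Exp(1) proposal via inversion
        f = math.sqrt(2 / math.pi) * math.exp(-x * x / 2)
        g = math.exp(-x)
        if rng() * M * g <= f:                    # accept with probability f/(M*g)
            return x
```

Note that the inversion paradigm appears here too: the Exp(1) proposal is itself generated by inverting its CDF.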
Stable Distributions, Pseudorandom Generators, Embeddings and Data Stream Computation
, 2000
"... In this paper we show several results obtained by combining the use of stable distributions with pseudorandom generators for bounded space. In particular: ffl we show how to maintain (using only O(log n=ffl 2 ) words of storage) a sketch C(p) of a point p 2 l n 1 under dynamic updates of its coo ..."
Abstract

Cited by 261 (15 self)
In this paper we show several results obtained by combining the use of stable distributions with pseudorandom generators for bounded space. In particular:
- we show how to maintain (using only O(log n/ε²) words of storage) a sketch C(p) of a point p ∈ ℓ₁ⁿ under dynamic updates of its coordinates, such that given sketches C(p) and C(q) one can estimate |p − q|₁ up to a factor of (1 + ε) with large probability. This solves the main open problem of [10].
- we obtain another sketch function C′ which maps ℓ₁ⁿ into a normed space ℓ₁ᵐ (as opposed to C), such that m = m(n) is much smaller than n; to our knowledge this is the first dimensionality reduction lemma for the ℓ₁ norm.
- we give an explicit embedding of ℓ₂ⁿ into ℓ₁ of dimension n^O(log n) with distortion (1 + 1/n^Θ(1)), and a non-constructive embedding of ℓ₂ⁿ into ℓ₁^O(n) with distortion (1 + ε) such that the embedding can be represented using only O(n log² n) bits (as opposed to at least...
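A hedged sketch of the ℓ₁-estimation idea (function names are mine; this version uses fully independent Cauchy variables, whereas the paper derandomizes them with a pseudorandom generator for bounded space): a Cauchy-weighted linear sketch whose median absolute coordinate difference estimates |p − q|₁.

```python
import math
import random

def make_cauchy_matrix(n, k, seed=0):
    """k x n matrix of i.i.d. standard Cauchy entries, via tan of a uniform angle.
    Full independence is assumed here, unlike the bounded-space version in the paper."""
    rng = random.Random(seed)
    return [[math.tan(math.pi * (rng.random() - 0.5)) for _ in range(n)]
            for _ in range(k)]

def sketch(p, C):
    # Each coordinate of C(p) is a Cauchy-weighted sum, so the sketch is linear
    # and can be maintained under dynamic updates of p's coordinates.
    return [sum(c_i * p_i for c_i, p_i in zip(row, p)) for row in C]

def estimate_l1(sp, sq):
    # A Cauchy-weighted sum of (p - q) is Cauchy with scale |p - q|_1,
    # so the median of |C(p)_j - C(q)_j| estimates |p - q|_1.
    diffs = sorted(abs(a - b) for a, b in zip(sp, sq))
    k = len(diffs)
    return diffs[k // 2] if k % 2 else 0.5 * (diffs[k // 2 - 1] + diffs[k // 2])
```

Taking k = O(1/ε²) sketch coordinates gives a (1 + ε)-factor estimate with large probability.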
Comparing Data Streams Using Hamming Norms (How to Zero In)
, 2003
"... Massive data streams are now fundamental to many data processing applications. For example, Internet routers produce large scale diagnostic data streams. Such streams are rarely stored in traditional databases and instead must be processed “on the fly” as they are produced. Similarly, sensor networ ..."
Abstract

Cited by 71 (7 self)
Massive data streams are now fundamental to many data processing applications. For example, Internet routers produce large scale diagnostic data streams. Such streams are rarely stored in traditional databases and instead must be processed “on the fly” as they are produced. Similarly, sensor networks produce multiple data streams of observations from their sensors. There is growing focus on manipulating data streams and, hence, there is a need to identify basic operations of interest in managing data streams, and to support them efficiently. We propose computation of the Hamming norm as a basic operation of interest. The Hamming norm formalizes ideas that are used throughout data processing. When applied to a single stream, the Hamming norm gives the number of distinct items that are present in that data stream, which is a statistic of great interest in databases. When applied to a pair of streams, the Hamming norm gives an important measure of (dis)similarity: the number of unequal item counts in the two streams. Hamming norms have many uses in comparing data streams. We present a novel approximation technique for estimating the Hamming norm for massive data streams; this relies on what we call the “l0 sketch” and we prove its accuracy. We test our approximation method on a large quantity of synthetic and real stream data, and show that the estimation is accurate to within a few percentage points.
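For concreteness, here is an exact, linear-space baseline for the quantity the paper's sublinear "l0 sketch" approximates (the function names and the (item, delta) update format are my own choices):

```python
from collections import Counter

def hamming_norm(stream):
    """Exact Hamming norm of one stream of (item, delta) count updates:
    the number of items whose aggregate count is nonzero,
    i.e. the number of distinct items present."""
    counts = Counter()
    for item, delta in stream:
        counts[item] += delta
    return sum(1 for c in counts.values() if c != 0)

def hamming_distance(stream_a, stream_b):
    """Hamming norm of the difference of two streams:
    the number of items whose aggregate counts differ."""
    counts = Counter()
    for item, delta in stream_a:
        counts[item] += delta
    for item, delta in stream_b:
        counts[item] -= delta
    return sum(1 for c in counts.values() if c != 0)
```

The point of the paper is to approximate these values without the Counter, using space polylogarithmic in the stream size.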
Heavy-Tailed Distributions in Combinatorial Search
, 1997
"... Combinatorial search methods often exhibit a large variability in performance. We study the cost profiles of combinatorial search procedures. Our study reveals some intriguing properties of such cost profiles. The distributions are often characterized by very long tails or "heavy tails". We will sho ..."
Abstract

Cited by 63 (12 self)
Combinatorial search methods often exhibit a large variability in performance. We study the cost profiles of combinatorial search procedures. Our study reveals some intriguing properties of such cost profiles. The distributions are often characterized by very long tails or "heavy tails". We will show that these distributions are best characterized by a general class of distributions that have no moments (i.e., an infinite mean, variance, etc.). Such nonstandard distributions have recently been observed in areas as diverse as economics, statistical physics, and geophysics. They are closely related to fractal phenomena, whose study was introduced by Mandelbrot. We believe this is the first finding of these distributions in a purely computational setting. We also show how random restarts can effectively eliminate heavy-tailed behavior, thereby dramatically improving the overall performance of a search procedure.
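A small simulation sketch of the restart effect (parameter choices are mine, not from the paper): run costs drawn from a Pareto law with tail index α < 1, which has no mean, become light-tailed with a finite mean once each run is restarted at a fixed cutoff.

```python
import random

def pareto_runtime(rng, alpha=0.8, xm=1.0):
    """A heavy-tailed 'search cost': Pareto with tail index alpha < 1,
    so the mean is infinite, mimicking the cost profiles in the abstract."""
    return xm / rng.random() ** (1.0 / alpha)

def runtime_with_restarts(rng, cutoff, alpha=0.8):
    """Restart the search whenever a run exceeds `cutoff` steps.
    The total cost then has geometrically decaying tails and a finite mean."""
    total = 0.0
    while True:
        t = pareto_runtime(rng, alpha)
        if t <= cutoff:
            return total + t
        total += cutoff
```

With α = 0.8 and cutoff 10, each run succeeds (finishes under the cutoff) with probability 1 − 10^(−0.8) ≈ 0.84, so only a geometric number of restarts is ever needed.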
Approximations of small jumps of Lévy processes with a view towards simulation
, 2000
"... Let X = (X(t) : t 0) be a Lévy process and X the compensated sum of jumps not exceeding in absolute value, 2 ( ) = Var(X (1)). In simulation, X X is easily generated as the sum of a Brownian term and a compound Poisson one, and we investigate here when X = ( ) can be approximated by another Brownian ..."
Abstract

Cited by 38 (2 self)
Let X = (X(t) : t ≥ 0) be a Lévy process and X_ε the compensated sum of jumps not exceeding ε in absolute value, σ²(ε) = Var(X_ε(1)). In simulation, X − X_ε is easily generated as the sum of a Brownian term and a compound Poisson one, and we investigate here when X_ε/σ(ε) can be approximated by another Brownian term. A necessary and sufficient condition in terms of σ(ε) is given, and it is shown that when the condition fails, the behaviour of X_ε/σ(ε) can be quite intricate. This condition is also related to the decay of terms in series expansions. We further discuss error rates in terms of Berry-Esseen bounds and Edgeworth approximations.
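The condition in question is, under mild assumptions on the Lévy measure, that σ(ε)/ε → ∞ as ε → 0. A tiny numeric sketch for a hypothetical one-sided α-stable-type Lévy measure ν(dx) = x^(−1−α) dx on (0, 1) (my example, not the paper's), for which σ²(ε) has a closed form:

```python
import math

def sigma(eps, alpha):
    """sigma(eps)^2 = integral_0^eps x^2 nu(dx) = eps^(2-alpha) / (2-alpha)
    for the example measure nu(dx) = x^(-1-alpha) dx, 0 < alpha < 2."""
    return math.sqrt(eps ** (2.0 - alpha) / (2.0 - alpha))

def gaussian_approx_ratio(eps, alpha):
    """The criterion quantity: the compensated small jumps X_eps / sigma(eps)
    are asymptotically Brownian iff this ratio diverges as eps -> 0."""
    return sigma(eps, alpha) / eps
```

Here σ(ε)/ε = ε^(−α/2)/√(2 − α), which indeed diverges as ε → 0 for every 0 < α < 2, so for this measure the small-jump Brownian approximation is justified.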
The Euler scheme for Lévy driven stochastic differential equations
, 1997
"... In relation with MonteCarlo methods to solve some integrodifferential equations, we study the approximation problem of IEg(XT) by IEg ( ¯ Xn T), where (Xt, 0 ≤ t ≤ T) is the solution of a stochastic differential equation governed by a Lévy process (Zt), ( ¯ Xn t) is defined by the Euler discret ..."
Abstract

Cited by 31 (2 self)
In relation with Monte Carlo methods to solve some integro-differential equations, we study the problem of approximating E g(X_T) by E g(X̄ⁿ_T), where (X_t, 0 ≤ t ≤ T) is the solution of a stochastic differential equation governed by a Lévy process (Z_t) and (X̄ⁿ_t) is defined by the Euler discretization scheme with step T/n. With appropriate assumptions on g(·), we show that the error E g(X_T) − E g(X̄ⁿ_T) can be expanded in powers of 1/n if the Lévy measure of Z has finite moments of high enough order. Otherwise the rate of convergence is slower and its speed depends on the behavior of the tails of the Lévy measure.
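A minimal sketch of the Euler scheme for a Lévy-driven SDE dX_t = b(X_t) dt + f(X_{t−}) dZ_t, taking the driver Z to be a Brownian motion plus a compound Poisson process (a simple special case; all names and default parameters are my own choices):

```python
import math
import random

def poisson(rng, lam):
    """Poisson sample by Knuth's product method; adequate for the
    small means lam = rate * step used per Euler step."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def euler_levy(x0, b, f, T, n, rng, sigma=1.0, jump_rate=0.0,
               jump=lambda r: r.gauss(0.0, 1.0)):
    """One path of the Euler scheme
        X_{k+1} = X_k + b(X_k) h + f(X_k) dZ_k,   h = T / n,
    where dZ_k is a Brownian increment plus compound Poisson jumps."""
    h = T / n
    x = x0
    for _ in range(n):
        dz = sigma * math.sqrt(h) * rng.gauss(0.0, 1.0)
        for _ in range(poisson(rng, jump_rate * h)):
            dz += jump(rng)
        x += b(x) * h + f(x) * dz
    return x
```

With b = 0, f = 1 and no jumps this reduces to a discretized Brownian motion, which gives an easy sanity check: X̄ⁿ_T is then N(0, T).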
On the Chambers-Mallows-Stuck Method for Simulating Skewed Stable Random Variables.
, 1995
"... : In this note, we give a proof to the equality in law of a skewed stable variable and a nonlinear transformation of two independent uniform and exponential variables. The lack of an explicit proof of this formula has led to some inaccuracies in the literature. The Chambers et al. (1976) method of c ..."
Abstract

Cited by 30 (4 self)
In this note, we give a proof of the equality in law of a skewed stable variable and a nonlinear transformation of two independent uniform and exponential variables. The lack of an explicit proof of this formula has led to some inaccuracies in the literature. The Chambers et al. (1976) method of computer generation of a skewed stable random variable is based on this equality.
Keywords: Stable distribution, characteristic function, random variable generation.
1 Introduction
The Central Limit Theorem, which offers the fundamental justification for approximate normality, points to the importance of α-stable distributions: they are the only limiting laws of normalized sums of independent, identically distributed random variables. Gaussian distributions, the best known members of the stable family, have long been well understood and widely used in all sorts of problems. However, they do not allow for large fluctuations and are thus inadequate for modeling high variability. In the last...
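The equality in law proved in the note is exactly what the Chambers-Mallows-Stuck generator computes. A hedged sketch of the α ≠ 1 branch in the standard parametrization (function name mine): a uniform angle V and a standard exponential W are transformed nonlinearly into a stable variate.

```python
import math
import random

def skewed_stable(alpha, beta, rng):
    """Chambers-Mallows-Stuck generator (alpha != 1 branch only) for a
    standard alpha-stable variate with skewness beta in [-1, 1]:
    V ~ Uniform(-pi/2, pi/2), W ~ Exp(1), independent."""
    V = math.pi * (rng.random() - 0.5)
    W = rng.expovariate(1.0)
    t = beta * math.tan(math.pi * alpha / 2.0)
    B = math.atan(t) / alpha
    S = (1.0 + t * t) ** (1.0 / (2.0 * alpha))
    return (S * math.sin(alpha * (V + B)) / math.cos(V) ** (1.0 / alpha)
            * (math.cos(V - alpha * (V + B)) / W) ** ((1.0 - alpha) / alpha))
```

A convenient sanity check: for α = 2 the formula collapses to 2 sin(V) √W, which is N(0, 2), i.e. Gaussian with variance 2 in this parametrization.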
Financial Modelling and Option Theory with the Truncated Levy Process; cond-mat/9710197
, 1997
"... In recent studies the truncated Levy process (TLP) has been shown to be very promising for the modeling of financial dynamics. In contrast to the Levy process, the TLP has finite moments and can account for both the previously observed excess kurtosis at short timescales, along with the slow converg ..."
Abstract

Cited by 30 (0 self)
In recent studies the truncated Levy process (TLP) has been shown to be very promising for the modeling of financial dynamics. In contrast to the Levy process, the TLP has finite moments and can account for both the previously observed excess kurtosis at short timescales, along with the slow convergence to Gaussian at longer timescales. I further test the truncated Levy paradigm using high frequency data from the Australian All Ordinaries share market index. I then consider, for the early Levy dominated regime, the issue of option hedging for two different hedging strategies that are in some sense optimal. These are compared with the usual delta hedging approach and found to differ significantly. I also derive the natural generalization of the Black-Scholes option pricing formula when the underlying security is modeled by a geometric TLP. This generalization would not be possible without the truncation.
Estimating dominance norms of multiple data streams
 in Proceedings of the 11th European Symposium on Algorithms (ESA)
, 2003
"... Abstract. There is much focus in the algorithms and database communities on designing tools to manage and mine data streams. Typically, data streams consist of multiple signals. Formally, a stream of multiple signals is (i, ai,j) where i’s correspond to the domain, j’s index the different signals an ..."
Abstract

Cited by 25 (8 self)
There is much focus in the algorithms and database communities on designing tools to manage and mine data streams. Typically, data streams consist of multiple signals. Formally, a stream of multiple signals is (i, a_{i,j}) where the i's correspond to the domain, the j's index the different signals and a_{i,j} ≥ 0 gives the value of the jth signal at point i. We study the problem of finding norms that are cumulative of the multiple signals in the data stream. For example, consider the max-dominance norm, defined as Σ_i max_j {a_{i,j}}. It may be thought of as estimating the norm of the “upper envelope” of the multiple signals, or alternatively, as estimating the norm of the “marginal” distribution of tabular data streams. It is used in applications to estimate the “worst case influence” of multiple processes, for example in IP traffic analysis, electrical grid monitoring and the financial domain. In addition, it is a natural measure, generalizing the union of data streams or counting distinct elements in data streams. We present the first known data stream algorithms for estimating the max-dominance of multiple signals. In particular, we use workspace and time-per-item that are both sublinear (in fact, polylogarithmic) in the input size. In contrast, other notions of dominance on streams a, b — min-dominance (Σ_i min_j {a_{i,j}}), count-dominance (|{i : a_i > b_i}|) or relative-dominance (Σ_i a_i / max{1, b_i}) — are all impossible to estimate accurately with sublinear space.
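To make the max-dominance norm concrete, here is an exact, linear-space baseline (names and the (i, a) pair format are mine); the paper's contribution is to approximate this value in polylogarithmic space:

```python
def max_dominance(stream):
    """Exact max-dominance norm  sum_i max_j a_{i,j}  over a stream of
    (i, a) pairs, where repeated occurrences of the same i come from
    different signals j and all values a are >= 0."""
    best = {}
    for i, a in stream:
        if a > best.get(i, 0.0):
            best[i] = a
    return sum(best.values())
```

For instance, two interleaved signals over the domain {0, 1} contribute only their pointwise maxima, the "upper envelope" described above.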