Results 1–10 of 35
The Fourier-Series Method for Inverting Transforms of Probability Distributions
, 1991
Cited by 203 (52 self)
This paper reviews the Fourier-series method for calculating cumulative distribution functions (cdf's) and probability mass functions (pmf's) by numerically inverting characteristic functions, Laplace transforms and generating functions. Some variants of the Fourier-series method are remarkably easy to use, requiring programs of less than fifty lines. The Fourier-series method can be interpreted as numerically integrating a standard inversion integral by means of the trapezoidal rule. The same formula is obtained by using the Fourier series of an associated periodic function constructed by aliasing; this explains the name of the method. This Fourier analysis applies to the inversion problem because the Fourier coefficients are just values of the transform. The mathematical centerpiece of the Fourier-series method is the Poisson summation formula, which identifies the discretization error associated with the trapezoidal rule and thus helps bound it. The greatest difficulty is approximately calculating the infinite series obtained from the inversion integral. Within this framework, lattice cdf's can be calculated from generating functions by finite sums without truncation. For other cdf's, an appropriate truncation of the infinite series can be determined from the transform based on estimates or bounds. For Laplace transforms, the numerical integration can be made to produce a nearly alternating series, so that the convergence can be accelerated by techniques such as Euler summation. Alternatively, the cdf can be perturbed slightly by convolution smoothing or windowing to produce a truncation error bound independent of the original cdf. Although error bounds can be determined, an effective approach is to use two different methods without elaborate error analysis. For this...
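The abstract's claim that a trapezoidal-rule inversion fits in under fifty lines is easy to illustrate. Below is a minimal sketch that discretizes a Gil-Pelaez-type inversion integral (one of the standard inversion integrals the method covers), using a standard normal characteristic function as the test case; the function names and the step/truncation parameters are illustrative choices, not taken from the paper.

```python
import cmath
import math

def cdf_from_cf(phi, x, h=0.05, n=2000):
    """Approximate F(x) from the characteristic function phi via the
    Gil-Pelaez inversion integral,
        F(x) = 1/2 - (1/pi) * int_0^inf Im[phi(t) exp(-i t x)] / t dt,
    discretized with a midpoint (trapezoidal-type) rule.  Midpoints
    t_k = (k - 1/2) h sidestep the integrand's singularity at t = 0."""
    total = 0.0
    for k in range(1, n + 1):
        t = (k - 0.5) * h
        total += (phi(t) * cmath.exp(-1j * t * x)).imag / t
    return 0.5 - (h / math.pi) * total

# Test case: standard normal, phi(t) = exp(-t^2 / 2).
def phi_normal(t):
    return cmath.exp(-0.5 * t * t)
```

With h = 0.05 the aliasing points lie roughly 2π/h ≈ 126 away from x, so the discretization error that the Poisson summation formula identifies is negligible for this light-tailed test case.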
Zitković, “Optimal consumption from investment and random endowment in incomplete semimartingale markets”
 Ann. Probab
, 2003
Cited by 50 (1 self)
Abstract. We consider the problem of maximizing expected utility from consumption in a constrained incomplete semimartingale market with a random endowment process, and establish a general existence and uniqueness result using techniques from convex duality. The notion of asymptotic elasticity of Kramkov and Schachermayer is extended to the time-dependent case. By imposing no smoothness requirements on the utility function in the temporal argument, we can treat both pure consumption and combined consumption/terminal-wealth problems in a common framework. To make the duality approach possible, we provide a detailed characterization of the enlarged dual domain, which is reminiscent of the enlargement of L^1 to its topological bidual (L^∞)^*, a space of finitely additive measures. As an application, we treat the case of a constrained Itô-process market model.
An isomorphism theorem for random interlacements
 Electron. Commun. Probab
, 2012
Cited by 17 (4 self)
We consider continuous-time random interlacements on a transient weighted graph. We prove an identity in law relating the field of occupation times of random interlacements at level u to the Gaussian free field on the weighted graph. This identity is closely linked to the generalized second Ray-Knight theorem of [2], [4], and uniquely determines the law of occupation times of random interlacements at level u.
A Characterization of the Set of Fixed Points of the Quicksort Transformation
, 2000
Cited by 15 (6 self)
The limiting distribution µ of the normalized number of key comparisons required by the Quicksort sorting algorithm is known to be the unique fixed point of a certain distributional transformation T—unique, that is, subject to the constraints of zero mean and finite variance. We show that a distribution is a fixed point of T if and only if it is the convolution of µ with a Cauchy distribution of arbitrary center and scale. In particular, therefore, µ is the unique fixed point of T having zero mean.
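The normalization behind µ can be checked numerically. A small sketch, assuming the standard comparison count C_n of randomized Quicksort with the textbook recurrence E[C_k] = k − 1 + (2/k) Σ_{j<k} E[C_j]; the closed form 2(n+1)H_n − 4n is the mean that is subtracted (before dividing by n) to obtain the zero-mean variable converging to µ. Exact rational arithmetic avoids any floating-point doubt.

```python
from fractions import Fraction

def expected_comparisons(n):
    """E[C_n] from the quicksort recurrence
    E[C_k] = k - 1 + (2/k) * sum_{j<k} E[C_j], in exact arithmetic."""
    running_sum = Fraction(0)
    e_k = Fraction(0)
    for k in range(1, n + 1):
        e_k = k - 1 + 2 * running_sum / k
        running_sum += e_k
    return e_k

def closed_form(n):
    """The textbook closed form 2(n+1)H_n - 4n, H_n = sum_{j<=n} 1/j."""
    harmonic = sum(Fraction(1, j) for j in range(1, n + 1))
    return 2 * (n + 1) * harmonic - 4 * n
```

Note this only checks the mean normalization; the paper's result concerns the full fixed-point set of the distributional transformation, where dropping the finite-variance constraint admits Cauchy convolutions of µ.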
Late points for random walks in two dimensions
, 2005
Cited by 14 (5 self)
Abstract. Let T_n(x) denote the time of first visit of a point x on the lattice torus Z^2_n = Z^2/nZ^2 by the simple random walk. The size of the set of α, n-late points L_n(α) = {x ∈ Z^2_n : T_n(x) ≥ (4α/π)(n log n)^2} is approximately n^{2(1−α)} for α ∈ (0,1) (L_n(α) is empty if α > 1 and n is large enough). These sets have interesting clustering and fractal properties: we show that for β ∈ (0,1) a disc of radius n^β centered at a nonrandom x typically contains about n^{2β(1−α/β^2)} points from L_n(α) (and is empty if β < √α), whereas choosing the center x of the disc uniformly in L_n(α) boosts the typical number of α, n-late points in it to n^{2β(1−α)}. We also estimate the typical number of pairs of α, n-late points within distance n^β of each other; this typical number can be significantly smaller than the expected number of such pairs, calculated by Brummelhuis and Hilhorst (1991). On the other hand, our results show that the number of ordered pairs of late points within distance n^β of each other is larger than what one might predict by multiplying the total number of late points by the number of late points in a disc of radius n^β centered at a typical late point.
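The definition of L_n(α) translates directly into a simulation. A hedged sketch, assuming a seeded simple random walk on the torus and the threshold (4α/π)(n log n)² from the abstract; the names and parameters are illustrative, and small n is of course far from the asymptotic regime the theorems describe.

```python
import math
import random

def late_points(n, alpha, seed=0, max_steps=10**7):
    """Run a simple random walk on the torus Z_n^2, record first-visit
    times T(x), and return the alpha-late set
    {x : T(x) >= (4*alpha/pi) * (n log n)^2} together with the times."""
    rng = random.Random(seed)
    first_visit = {(0, 0): 0}
    x = y = 0
    t = 0
    while len(first_visit) < n * n and t < max_steps:
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = (x + dx) % n, (y + dy) % n
        t += 1
        first_visit.setdefault((x, y), t)   # only the FIRST visit counts
    threshold = (4.0 * alpha / math.pi) * (n * math.log(n)) ** 2
    late = {p for p, tv in first_visit.items() if tv >= threshold}
    return late, first_visit
```

For n = 20 and α = 0.5 the heuristic count n^{2(1−α)} = 20 is only an order-of-magnitude guide at this size; the clustering effects quantified in the paper are what make the pair counts deviate from naive products.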
A Nonstationary Offered-Load Model for Packet Networks
, 1998
Cited by 13 (2 self)
Motivated by the desire to model complex features of network traffic revealed in traffic measurements, such as heavy-tail probability distributions, long-range dependence, self-similarity and nonstationarity, we propose a nonstationary offered-load model, in which connections of multiple types arrive according to independent nonhomogeneous Poisson processes, and general bandwidth stochastic processes describe the individual user bandwidth requirements at multiple links of a communication network during their connections. For example, an individual bandwidth process may be an on-off process where the on and off times have general (possibly heavy-tail) distributions. We obtain expressions for the moment generating function, mean and variance of the total required bandwidth of all customers on each link at any designated time. We suggest making decisions based on the probability that demand will exceed supply, or other designated target level, at each time of interest, using (i) numerical...
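For the special case of constant per-connection bandwidth, the time-dependent mean load and the demand-exceeds-supply probability mentioned above reduce to an M_t/G/∞ calculation: the number of active connections at time t is Poisson with mean ∫ λ(t − s) P(D > s) ds. A sketch under that simplifying assumption (the paper's model is far more general); the function names and parameters are illustrative.

```python
import math

def offered_load(arrival_rate, survival, t, dt=0.01, horizon=200.0):
    """Expected number of active connections at time t in an
    M_t/G/infinity model: m(t) = int_0^horizon lam(t - s) P(D > s) ds,
    approximated by the composite midpoint rule."""
    steps = int(horizon / dt)
    return dt * sum(arrival_rate(t - (k + 0.5) * dt) * survival((k + 0.5) * dt)
                    for k in range(steps))

def poisson_tail(mean, supply):
    """P(N > supply) for N ~ Poisson(mean).  With constant bandwidth b
    per connection, total demand is b * N, so this is the probability
    that demand exceeds `supply` connections' worth of capacity."""
    term = math.exp(-mean)
    cdf = 0.0
    for k in range(supply + 1):
        cdf += term
        term *= mean / (k + 1)
    return 1.0 - cdf
```

With a constant arrival rate and exponential durations this recovers the stationary Erlang-type mean λ/µ, a useful sanity check before plugging in a time-varying λ(t).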
Search and Knightian uncertainty
, 2001
Cited by 13 (0 self)
Suppose that “uncertainty” about labor market conditions has increased. Does this change induce an unemployed worker to search longer, or shorter? This paper shows that the answer is drastically different depending on whether the increase in “uncertainty” is an increase in risk or in true uncertainty in the sense of Frank Knight. We show in a general framework that, while an increase in risk (a mean-preserving spread of the wage distribution that the worker thinks she faces) increases the reservation wage, an increase in Knightian uncertainty (a decrease in her confidence about the wage distribution) reduces it. We are grateful to seminar participants at Western Ontario and SUNY-Buffalo for their helpful comments.
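The risk-versus-Knightian comparison can be illustrated in a McCall-style search model, assuming discrete wage offers and a maxmin worker who evaluates offers under the worst distribution in her confidence set. This is an illustration of the comparative statics only, not the paper's general framework; all names and the two-point distributions are hypothetical.

```python
def reservation_wage(distributions, b=0.0, beta=0.9):
    """Reservation wage solving w = b + beta/(1-beta) * min_P E_P[(W-w)^+]
    by bisection (the left side minus the right is increasing in w).
    `distributions` is a list of [(wage, prob), ...] lists: a single
    distribution gives the standard risk case; several implement a
    maxmin (Knightian) worker."""
    c = beta / (1.0 - beta)

    def worst_case_gain(w):
        return min(sum(p * max(x - w, 0.0) for x, p in d)
                   for d in distributions)

    lo = min(min(x for x, _ in d) for d in distributions)
    lo = min(lo, b)
    hi = b + c * max(max(x for x, _ in d) for d in distributions)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid - b - c * worst_case_gain(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A mean-preserving spread of a two-point offer distribution raises the reservation wage (the gain term E[(W − w)^+] is convex in W), while adding a pessimistic distribution to the confidence set lowers it, matching the abstract's comparative statics.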
On the Use of Direct Search Methods for Stochastic Optimization
 Rice University, Department of
, 2000
Cited by 11 (0 self)
We examine the conventional wisdom that commends the use of direct search methods in the presence of random noise. To do so, we introduce new formulations of stochastic optimization and direct search. These formulations suggest a natural strategy for constructing globally convergent direct search algorithms for stochastic optimization by controlling the error rates of the ordering decisions on which direct search depends. This strategy is successfully applied to the class of generalized pattern search methods. However, a great deal of sampling is required to guarantee convergence with probability one.
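The strategy of controlling ordering-decision errors through extra sampling can be sketched with a compass-search variant in which every candidate is evaluated by a batch average and the batch grows as the step shrinks. This is a toy version under those assumptions, not the paper's generalized pattern search with its convergence theory; all names are illustrative.

```python
import random

def noisy_pattern_search(objective, x0, step=1.0, tol=1e-2,
                         batch0=4, max_sweeps=10000, seed=0):
    """Compass (pattern) search on a noisy objective.  Each point is
    scored by averaging a batch of samples, and the batch doubles each
    time the step is halved, so the ordering decisions that drive the
    search stay reliable as the true differences shrink."""
    rng = random.Random(seed)
    x = list(x0)
    batch = batch0

    def estimate(point):
        return sum(objective(point, rng) for _ in range(batch)) / batch

    fx = estimate(x)
    sweeps = 0
    while step > tol and sweeps < max_sweeps:
        sweeps += 1
        improved = False
        for i in range(len(x)):
            for sign in (1.0, -1.0):
                y = list(x)
                y[i] += sign * step
                fy = estimate(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
            batch *= 2                 # more samples at smaller steps
            fx = estimate(x)           # re-estimate at the tighter noise level
    return x

def noisy_quadratic(point, rng):
    """||p||^2 observed through additive Gaussian noise."""
    return sum(v * v for v in point) + 0.01 * rng.gauss(0.0, 1.0)
```

The batch-doubling schedule is the point of the sketch: it mirrors, in miniature, the abstract's observation that a great deal of sampling is the price of reliable ordering decisions.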
Cumulants of the maximum of the Gaussian random walk
 Stochastic Processes and Their Applications 117
, 2007
Cited by 7 (3 self)
Abstract. Let X1, X2,... be independent variables, each having a normal distribution
Optimal exploitation of renewable resources under uncertainty and the extinction of species
 Economic Theory 28
, 2006
Cited by 6 (1 self)
Under a minimal set of assumptions, the paper identifies conditions on the transition function of a Markov process leading to each of three scenarios: extinction, conservation, and the existence of a safe standard of conservation. These conditions are used to obtain restrictions on a framework of optimal exploitation of a renewable resource under which the above three scenarios would occur. The biological growth function is allowed to be non-concave, and is subject to a random environmental shock, thereby making the results suitable for applications in a wide variety of models in renewable resource management.
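A non-concave (depensatory) growth map makes the safe-standard scenario concrete. The sketch below assumes the illustrative growth function f(s) = 3s²/(1 + s²), whose unstable fixed point s* = (3 − √5)/2 ≈ 0.382 plays the role of a safe standard; the paper's Markov framework is far more general, and everything here (the map, the shock, the names) is a hypothetical example.

```python
import random

def simulate(x0, periods, harvest=0.0, shock=None, seed=0):
    """Iterate x_{t+1} = z_t * f(x_t - harvest) with the non-concave
    growth map f(s) = 3 s^2 / (1 + s^2).  Without shocks, stocks above
    the unstable fixed point (3 - sqrt(5))/2 ~ 0.382 persist (converging
    to the stable fixed point (3 + sqrt(5))/2), and stocks below it
    collapse to extinction."""
    rng = random.Random(seed)

    def growth(s):
        return 3.0 * s * s / (1.0 + s * s)

    x = x0
    for _ in range(periods):
        escapement = max(x - harvest, 0.0)
        z = shock(rng) if shock else 1.0
        x = z * growth(escapement)
    return x
```

With a multiplicative shock, e.g. `shock=lambda r: r.uniform(0.7, 1.3)`, a stock near the safe standard can be pushed to either side, which is why the paper states its extinction/conservation conditions on the stochastic transition function rather than on the deterministic map alone.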