Results 1–10 of 163
A survey of max-type recursive distributional equations
 Annals of Applied Probability 15, 2005
Abstract

Cited by 72 (6 self)
In certain problems in a variety of applied probability settings (from probabilistic analysis of algorithms to statistical physics), the central requirement is to solve a recursive distributional equation of the form X =_d g((ξ_i, X_i), i ≥ 1). Here (ξ_i) and g(·) are given and the X_i are independent copies of the unknown distribution X. We survey this area, emphasizing examples where the function g(·) is essentially a “maximum” or “minimum” function. We draw attention to the theoretical question of endogeny: in the associated recursive tree process (X_i), are the X_i measurable functions of the innovations process (ξ_i)?
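Fixed-point equations of this kind can be explored numerically by population dynamics, i.e. Monte Carlo iteration of the distributional map on an empirical pool of samples. A minimal sketch, using the illustrative max-type RDE X =_d max(X_1, X_2) − log 2, for which Gumbel(0,1) is easily checked to be a fixed point; the function names and parameters below are our own, not from the paper:

```python
import math
import random

def population_dynamics(N=100000, iters=5, seed=42):
    """Check by population dynamics (Monte Carlo fixed-point iteration on an
    empirical distribution) that Gumbel(0,1) solves the max-type RDE
    X =_d max(X_1, X_2) - log 2: applying the map to a Gumbel pool should
    leave its statistics unchanged up to sampling noise."""
    rng = random.Random(seed)
    # initialise the pool at the claimed fixed point via inverse-CDF sampling
    # (rng.random() == 0.0 would break the log, but is astronomically unlikely)
    pool = [-math.log(-math.log(rng.random())) for _ in range(N)]
    for _ in range(iters):
        pool = [max(rng.choice(pool), rng.choice(pool)) - math.log(2.0)
                for _ in range(N)]
    return sum(pool) / N   # Gumbel(0,1) mean = Euler-Mascheroni constant
```

If Gumbel(0,1) really is a fixed point, the pool's mean should stay near the Euler–Mascheroni constant (≈ 0.5772) after several iterations of the map.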
Extension of Fill’s perfect rejection sampling algorithm to general chains (extended abstract)
 Pages 37–52 in Monte Carlo Methods, 2000
Abstract

Cited by 44 (14 self)
By developing and applying a broad framework for rejection sampling using auxiliary randomness, we provide an extension of the perfect sampling algorithm of Fill (1998) to general chains on quite general state spaces, and describe how use of bounding processes can ease the computational burden. Along the way, we unearth a simple connection between the Coupling From The Past (CFTP) algorithm originated by Propp and Wilson (1996) and our extension of Fill’s algorithm. Key words and phrases: Fill’s algorithm, Markov chain Monte Carlo, perfect sampling, exact sampling, rejection sampling, interruptibility, coupling from the past, read-once coupling from the past, monotone transition rule, realizable monotonicity, stochastic monotonicity, partially ordered set, coalescence, imputation,
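Fill's algorithm builds on rejection sampling with auxiliary randomness. As background only, here is a minimal generic rejection sampler (not Fill's algorithm itself; the function names and the Beta(2,2) example are illustrative assumptions of ours):

```python
import random

def rejection_sample(rng, target_pdf, proposal_draw, proposal_pdf, M):
    # Accept a proposal y with probability target_pdf(y) / (M * proposal_pdf(y)).
    # Requires the envelope condition target_pdf(y) <= M * proposal_pdf(y) for all y.
    while True:
        y = proposal_draw(rng)
        if rng.random() * M * proposal_pdf(y) <= target_pdf(y):
            return y

def beta22(rng):
    # Example: Beta(2,2), density 6x(1-x) on (0,1), dominated by 1.5 * Uniform(0,1).
    return rejection_sample(rng, lambda x: 6.0 * x * (1.0 - x),
                            lambda r: r.random(), lambda y: 1.0, 1.5)
```

The sample mean of many `beta22` draws should sit near the Beta(2,2) mean of 1/2.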
AIMD, Fairness and Fractal Scaling of TCP Traffic
 In Proceedings of IEEE INFOCOM, 2002
Abstract

Cited by 37 (5 self)
We propose a natural and simple model for the joint throughput evolution of a set of TCP sessions sharing a common tail-drop bottleneck router, via products of random matrices. This model allows one to predict the fluctuations of the throughput of each session as a function of the synchronization rate in the bottleneck router. Several other, more refined properties of the protocol are analyzed, such as the instantaneous imbalance between sessions, the autocorrelation function, and the performance degradation due to synchronization of losses. When aggregating traffic obtained from this model, one obtains, for certain ranges of the parameters, short-time-scale statistical properties that are consistent with a fractal scaling similar to what was identified on real traces using wavelets.
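As a loose illustration of the kind of dynamics described (and emphatically not the paper's random-matrix model), one can simulate additive-increase/multiplicative-decrease sessions sharing a bottleneck with a tunable synchronization rate; every name and parameter below is invented for the sketch:

```python
import random

def aimd_sim(K=2, capacity=100.0, sync=0.5, steps=100000, seed=1):
    """Toy AIMD sketch: each session's rate grows additively by 1 per step;
    when the shared bottleneck of size `capacity` overflows, each session
    halves independently with probability `sync` (the synchronization rate),
    and at least one session always backs off."""
    rng = random.Random(seed)
    rates = [capacity / (2.0 * K)] * K
    totals = [0.0] * K
    for _ in range(steps):
        rates = [r + 1.0 for r in rates]              # additive increase
        while sum(rates) > capacity:                  # loss event at router
            hit = [rng.random() < sync for _ in range(K)]
            if not any(hit):
                hit[rng.randrange(K)] = True          # someone must halve
            rates = [r / 2.0 if h else r for r, h in zip(rates, hit)]
        for i, r in enumerate(rates):
            totals[i] += r
    grand = sum(totals)
    return [t / grand for t in totals], grand / steps  # shares, mean total rate
```

By symmetry the long-run throughput shares should be roughly equal, and the time-averaged total rate stays below the bottleneck capacity.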
Strong invariance principles for dependent random variables
 Annals of Probability, 2007
Abstract

Cited by 35 (6 self)
We establish strong invariance principles for sums of stationary and ergodic processes with nearly optimal bounds. Applications to linear and some nonlinear processes are discussed. Strong laws of large numbers and laws of the iterated logarithm are also obtained under easily verifiable conditions.
How to Couple from the Past Using a Read-Once Source of Randomness
, 1999
Abstract

Cited by 34 (1 self)
We give a new method for generating perfectly random samples from the stationary distribution of a Markov chain. The method is related to coupling from the past (CFTP), but only runs the Markov chain forwards in time, and never restarts it at previous times in the past. The method is also related to an idea known as PASTA (Poisson arrivals see time averages) in the operations research literature. Because the new algorithm can be run using a read-once stream of randomness, we call it read-once CFTP. The memory and time requirements of read-once CFTP are on par with the requirements of the usual form of CFTP, and for a variety of applications the requirements may be noticeably less. Some perfect sampling algorithms for point processes are based on an extension of CFTP known as coupling into and from the past; for completeness, we give a read-once version of coupling into and from the past, but it remains impractical. For these point process applications, we give an alternative...
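Read-once CFTP itself needs more machinery than fits here, but the classical monotone CFTP it extends can be sketched for a toy monotone chain (a lazy random walk on {0, …, K}). The code below is a standard textbook-style sketch, not the authors' algorithm:

```python
import random

def update(x, u, K=10, p=0.5):
    # Monotone random-walk update on {0,...,K}: up with prob p, else down.
    return min(x + 1, K) if u < p else max(x - 1, 0)

def cftp(K=10, p=0.5, seed=0):
    """Monotone coupling-from-the-past (Propp-Wilson): run sandwiching chains
    from the top and bottom states, from further and further in the past,
    reusing the SAME innovation for each time step on every restart."""
    rng = random.Random(seed)
    us = []                            # us[i] drives the step at time -(i+1)
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        lo, hi = 0, K
        for i in reversed(range(T)):   # chronological order: -T, ..., -1
            lo, hi = update(lo, us[i], K, p), update(hi, us[i], K, p)
        if lo == hi:
            return lo                  # exact draw from the stationary law
        T *= 2
```

For p = 0.5 this chain's stationary law is uniform on {0, …, 10} (detailed balance holds with equal weights), so the sample mean over many independent runs should be near 5.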
Gibbs sampling, exponential families and orthogonal polynomials
 Statistical Science, 2008
Abstract

Cited by 27 (8 self)
We give families of examples where sharp rates of convergence to stationarity of the widely used Gibbs sampler are available. The examples involve standard exponential families and their conjugate priors. In each case, the transition operator is explicitly diagonalizable with classical orthogonal polynomials as eigenfunctions. Key words and phrases: Gibbs sampler, running time analyses, exponential families, conjugate priors, location families, orthogonal polynomials, singular value decomposition.
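A minimal sketch of the kind of sampler analyzed: a two-component Gibbs sampler for a Beta–Binomial conjugate pair (the parameters are our own toy choices; the paper's spectral analysis is not reproduced here):

```python
import random

def gibbs_beta_binomial(n=10, a=2.0, b=2.0, iters=5000, seed=0):
    """Two-component Gibbs sampler for the Beta-Binomial conjugate pair:
    alternate x | theta ~ Binomial(n, theta) and theta | x ~ Beta(a+x, b+n-x)."""
    rng = random.Random(seed)
    theta, xs = 0.5, []
    for _ in range(iters):
        x = sum(rng.random() < theta for _ in range(n))   # Binomial(n, theta) draw
        theta = rng.betavariate(a + x, b + n - x)         # conjugate update
        xs.append(x)
    return xs
```

The marginal of x is Beta-Binomial(10, 2, 2), whose mean is n·a/(a+b) = 5, so the post-burn-in sample mean should settle near 5.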
Ergodic Theorems for Markov chains represented by Iterated Function Systems
 Bull. Polish Acad. Sci. Math., 1998
Abstract

Cited by 26 (2 self)
We consider Markov chains represented in the form X_{n+1} = f(X_n, I_n), where {I_n} is a sequence of independent, identically distributed (i.i.d.) random variables, and where f is a measurable function. Any Markov chain {X_n} on a Polish state space may be represented in this form, i.e., can be considered as arising from an iterated function system (IFS). A distributional ergodic theorem, including rates of convergence in the Kantorovich distance, is proved for Markov chains under the condition that an IFS representation is "stochastically contractive" and "stochastically bounded". We apply this result to prove our main theorem giving upper bounds for distances between invariant probability measures for iterated function systems. We also give some examples indicating how ergodic theorems for Markov chains may be proved by finding contractive IFS representations. These ideas are applied to some Markov chains arising from iterated function systems with place-dependent probabilities.
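The IFS representation X_{n+1} = f(X_n, I_n) can be demonstrated with backward iteration (Letac's principle): for a contractive map, composing the updates in reverse chronological order converges almost surely to a draw from the invariant law, regardless of the starting point. A sketch for the affine map f(x, I) = a·x + I with |a| < 1 (all parameters are illustrative, not from the paper):

```python
import random

def backward_sample(a=0.5, n=60, seed=3, x0=0.0):
    """Backward iteration of the affine IFS f(x, I) = a*x + I (an AR(1) map):
    Y_n = f_{I_1}(f_{I_2}(...f_{I_n}(x0)...)) converges a.s. when |a| < 1,
    and the limit is a draw from the invariant law, whatever x0 is."""
    rng = random.Random(seed)
    innovations = [rng.gauss(0.0, 1.0) for _ in range(n)]  # I_1, ..., I_n
    x = x0
    for eps in reversed(innovations):   # apply f_{I_n} first, f_{I_1} last
        x = a * x + eps
    return x
```

Contractivity makes the dependence on the initial point decay like a^n, so two runs with the same innovations but wildly different starting points coincide to machine precision.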
An overview of some stochastic stability methods
 J. Oper. Res. Soc. Japan
Abstract

Cited by 22 (2 self)
This paper presents an overview of stochastic stability methods, mostly motivated by (but not limited to) stochastic network applications. We work with stochastic recursive sequences and, in particular, Markov chains on a general Polish state space. We discuss, and frequently compare, methods based on (i) Lyapunov functions, (ii) fluid limits, (iii) explicit coupling (renovating events and Harris chains), and (iv) monotonicity. We also discuss existence of stationary solutions and instability methods. The paper focuses on methods and uses examples only to exemplify the theory. Proofs are given insofar as they contain some new, unpublished elements, or are necessary for the logical reading of this exposition.
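Method (i), Lyapunov functions, can be illustrated with a one-step drift estimate for a toy queue; the model and names below are our own assumptions, not from the paper:

```python
import random

def drift_check(p=0.3, x=10, trials=100000, seed=7):
    """Monte Carlo estimate of the one-step Lyapunov drift E[V(X') - V(x)]
    with V(x) = x, for the toy queue X' = max(x + A - 1, 0), A ~ Bernoulli(p).
    Away from the boundary the exact drift is p - 1 < 0, which is the
    negative-drift condition behind Foster-Lyapunov stability."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        arrival = 1 if rng.random() < p else 0
        total += max(x + arrival - 1, 0) - x
    return total / trials
```

With p = 0.3 and x well above 0, the estimate should sit near the exact drift p − 1 = −0.7.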
Growth and Decay of Random Fibonacci Sequences
 Proceedings of the Royal Society of London, Series A: Mathematical, Physical and Engineering Sciences, 1999
Abstract

Cited by 21 (2 self)
Introduction. In a remarkable recent paper, Viswanath (1998) has considered the large-n behaviour of solutions to the ‘random Fibonacci recurrence’ x_{n+1} = ±x_n ± x_{n−1}, (1.1) where the signs are chosen independently and with equal probabilities, and x_0 = x_1 = 1. Computer experiments, as in figure 1, show exponential growth with n. The problem of the large-n behaviour of (1.1) has been mentioned at least since 1963, when Furstenberg (1963) established exponential growth with probability 1, but Viswanath’s contribution represents an intriguing new development. By an ingenious application of a Stern–Brocot tree (Graham et al. 1994), he proved that solutions to (1.1) satisfy lim_{n→∞} |x_n|^{1/n} ...
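The exponential growth in (1.1) is easy to reproduce numerically. The sketch below estimates |x_n|^{1/n} by simulation, with periodic rescaling to avoid floating-point overflow (the function name and parameters are ours):

```python
import math
import random

def random_fib_growth(n=100000, seed=0):
    """Estimate the a.s. exponential growth rate lim |x_n|^(1/n) of the
    random Fibonacci recurrence x_{n+1} = +/- x_n +/- x_{n-1}."""
    rng = random.Random(seed)
    a, b = 1.0, 1.0          # x_{n-1}, x_n with x_0 = x_1 = 1
    log_scale = 0.0          # accumulated log of rescalings (avoids overflow)
    for _ in range(n):
        a, b = b, rng.choice((-1.0, 1.0)) * b + rng.choice((-1.0, 1.0)) * a
        m = max(abs(a), abs(b))
        if m > 1e100:
            a, b = a / m, b / m
            log_scale += math.log(m)
    # a and b are never both zero, so the max is safe to take a log of
    return math.exp((log_scale + math.log(max(abs(a), abs(b)))) / n)
```

For large n the estimate should land near Viswanath's growth constant, comfortably above 1, confirming the exponential growth discussed above.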
On the exact simulation of functionals of stationary Markov chains
 Linear Algebra and its Applications 386:285–310, 2004
Abstract

Cited by 20 (8 self)
In the performance evaluation domain, simulation is an alternative when numerical analysis fails. To avoid the burn-in time problem, this paper presents an adaptation of the perfect simulation algorithm [10] to finite ergodic Markov chains with arbitrary structure. Simulation algorithms are deduced that provide samples of functionals of the steady state without computing the state coupling, which speeds up the algorithm by a significant factor. Based on a sparse representation of the Markov chain, the alias technique greatly improves the complexity of the simulation. Moreover, with small adaptations, it yields a transition function algorithm that ensures coupling. Key words: Markov chain simulation, perfect simulation, steady-state analysis.
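The alias technique mentioned here is presumably Walker's alias method, which turns a discrete distribution (such as a row of a sparse transition matrix) into O(1) sampling after O(n) preprocessing. A standard sketch, not the paper's code:

```python
import random

def build_alias(probs):
    """Walker's alias tables for O(1) sampling from a discrete distribution."""
    n = len(probs)
    scaled = [p * n for p in probs]
    alias = [0] * n
    cutoff = [0.0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        cutoff[s] = scaled[s]            # keep column s with prob scaled[s]
        alias[s] = l                     # otherwise redirect to column l
        scaled[l] -= 1.0 - scaled[s]     # l donated mass to fill column s
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:              # leftovers are full columns
        cutoff[i] = 1.0
    return cutoff, alias

def alias_draw(cutoff, alias, rng):
    """One O(1) draw: pick a column uniformly, then keep it or take its alias."""
    i = rng.randrange(len(cutoff))
    return i if rng.random() < cutoff[i] else alias[i]
```

Empirical frequencies over many draws should match the input probabilities, which is the usual sanity check for an alias-table construction.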