Results 1–10 of 232
A survey of max-type recursive distributional equations
 Annals of Applied Probability 15
, 2005
Cited by 86 (6 self)
In certain problems in a variety of applied probability settings (from probabilistic analysis of algorithms to statistical physics), the central requirement is to solve a recursive distributional equation of the form X =d g((ξi, Xi), i ≥ 1). Here (ξi) and g(·) are given and the Xi are independent copies of the unknown distribution X. We survey this area, emphasizing examples where the function g(·) is essentially a “maximum” or “minimum” function. We draw attention to the theoretical question of endogeny: in the associated recursive tree process Xi, are the Xi measurable functions of the innovations process (ξi)?
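The fixed-point iteration behind such equations can be conveyed with a small "population dynamics" Monte Carlo sketch: maintain an empirical sample of X, and resample each element through the map g. The specific max-type map and Uniform innovations below are illustrative assumptions, not examples taken from the survey.

```python
import random

def rde_iterate(pop, g, num_iter=20, branching=2):
    """One distributional fixed-point iteration per pass: replace each element
    by g((xi_i, X_i), i = 1..branching), with the X_i drawn i.i.d. from the
    current empirical distribution and xi_i fresh Uniform(0,1) innovations."""
    for _ in range(num_iter):
        pop = [
            g([random.random() for _ in range(branching)],  # innovations xi_i
              random.choices(pop, k=branching))             # i.i.d. copies X_i
            for _ in range(len(pop))
        ]
    return pop

# toy max-type map g = max_i xi_i * X_i; started from the point mass at 1,
# the iteration contracts toward the degenerate fixed point at 0
random.seed(0)
pop = rde_iterate([1.0] * 500, lambda xis, xs: max(a * x for a, x in zip(xis, xs)))
```

The same skeleton works for any g; only the lambda changes.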
Strong invariance principles for dependent random variables
 Annals of Probability
, 2007
Cited by 65 (9 self)
We establish strong invariance principles for sums of stationary and ergodic processes with nearly optimal bounds. Applications to linear and some nonlinear processes are discussed. Strong laws of large numbers and laws of the iterated logarithm are also obtained under easily verifiable conditions.
Extension of Fill’s perfect rejection sampling algorithm to general chains (extended abstract)
 Pages 37–52 in Monte Carlo Methods
, 2000
Cited by 47 (14 self)
By developing and applying a broad framework for rejection sampling using auxiliary randomness, we provide an extension of the perfect sampling algorithm of Fill (1998) to general chains on quite general state spaces, and describe how use of bounding processes can ease computational burden. Along the way, we unearth a simple connection between the Coupling From The Past (CFTP) algorithm originated by Propp and Wilson (1996) and our extension of Fill’s algorithm. Key words and phrases. Fill’s algorithm, Markov chain Monte Carlo, perfect sampling, exact sampling, rejection sampling, interruptibility, coupling from the past, read-once coupling from the past, monotone transition rule, realizable monotonicity, stochastic monotonicity, partially ordered set, coalescence, imputation, ...
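Fill's algorithm itself is involved, but the acceptance/rejection idea underneath it can be illustrated with ordinary rejection sampling from a bounded density. The Beta(2,2) target and the dominating constant below are arbitrary illustrative choices, not part of the paper's framework.

```python
import random

def rejection_sample(target_pdf, bound, rng):
    """Draw from target_pdf on [0, 1] via Uniform(0, 1) proposals.
    bound must dominate target_pdf everywhere on [0, 1]."""
    while True:
        x = rng.random()                         # propose uniformly
        if rng.random() * bound <= target_pdf(x):
            return x                             # accept with prob pdf(x)/bound

# toy target: Beta(2,2) density 6x(1-x), which never exceeds 1.5
rng = random.Random(0)
beta22 = lambda x: 6.0 * x * (1.0 - x)
samples = [rejection_sample(beta22, 1.5, rng) for _ in range(2000)]
```

The acceptance rate here is 1/bound = 2/3; a tighter bound means fewer wasted proposals.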
Gibbs sampling, exponential families and orthogonal polynomials, with discussion.
 Statist. Sci.,
, 2008
Cited by 42 (12 self)
We give families of examples where sharp rates of convergence to stationarity of the widely used Gibbs sampler are available. The examples involve standard exponential families and their conjugate priors. In each case, the transition operator is explicitly diagonalizable with classical orthogonal polynomials as eigenfunctions.
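A standard instance of a Gibbs sampler on an exponential family with its conjugate prior is the Binomial/Beta pair; the sketch below alternates the two full conditionals. The parameter values are arbitrary choices for illustration, not taken from the paper.

```python
import random

def gibbs_beta_binomial(n=10, a=2.0, b=2.0, n_steps=5000, seed=0):
    """Two-component Gibbs sampler for x | theta ~ Binomial(n, theta) with
    theta ~ Beta(a, b): alternate x | theta and theta | x (conjugacy makes
    the second conditional Beta(x + a, n - x + b))."""
    rng = random.Random(seed)
    theta, xs = 0.5, []
    for _ in range(n_steps):
        x = sum(rng.random() < theta for _ in range(n))  # x | theta
        theta = rng.betavariate(x + a, n - x + b)        # theta | x
        xs.append(x)
    return xs

xs = gibbs_beta_binomial()
```

The stationary marginal of x is Beta-Binomial(n, a, b), with mean n·a/(a+b) = 5 for these values.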
How to Couple from the Past Using a Read-Once Source of Randomness
, 1999
Cited by 40 (1 self)
We give a new method for generating perfectly random samples from the stationary distribution of a Markov chain. The method is related to coupling from the past (CFTP), but only runs the Markov chain forwards in time, and never restarts it at previous times in the past. The method is also related to an idea known as PASTA (Poisson arrivals see time averages) in the operations research literature. Because the new algorithm can be run using a read-once stream of randomness, we call it read-once CFTP. The memory and time requirements of read-once CFTP are on par with the requirements of the usual form of CFTP, and for a variety of applications the requirements may be noticeably less. Some perfect sampling algorithms for point processes are based on an extension of CFTP known as coupling into and from the past; for completeness, we give a read-once version of coupling into and from the past, but it remains impractical. For these point process applications, we give an alternative...
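A minimal sketch of the read-once idea, as it is commonly described, on a chain tiny enough that every state can be tracked explicitly: generate i.i.d. blocks of updates, wait for the first block whose composed map is constant (coalescent), then keep running forwards and output the state held just before the next coalescent block. The 5-state lazy reflecting walk and the block length are toy assumptions, not the paper's examples.

```python
import random

STATES = range(5)

def step(s, u):
    # lazy reflecting walk on {0,...,4}: down if u < 1/3, up if u > 2/3
    if u < 1 / 3:
        return max(s - 1, 0)
    if u > 2 / 3:
        return min(s + 1, 4)
    return s

def random_block(length=30):
    """Compose `length` i.i.d. updates into one map on the state space."""
    us = [random.random() for _ in range(length)]
    def apply_block(s):
        for u in us:
            s = step(s, u)
        return s
    return apply_block

def read_once_cftp():
    # phase 1: wait for a coalescent block and take its common image
    while True:
        block = random_block()
        images = {block(s) for s in STATES}
        if len(images) == 1:
            state = images.pop()
            break
    # phase 2: run forwards; output the state just before the next coalescent block
    while True:
        block = random_block()
        if len({block(s) for s in STATES}) == 1:
            return state
        state = block(state)

random.seed(2)
samples = [read_once_cftp() for _ in range(100)]
```

Each call reads its randomness strictly forwards, never revisiting past values — the defining property of the read-once scheme.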
AIMD, Fairness and Fractal Scaling of TCP Traffic
 in Proceedings of IEEE INFOCOM
, 2002
Cited by 36 (4 self)
We propose a natural and simple model for the joint throughput evolution of a set of TCP sessions sharing a common tail-drop bottleneck router, via products of random matrices. This model allows one to predict the fluctuations of the throughput of each session as a function of the synchronization rate in the bottleneck router; several other, more refined properties of the protocol are analyzed, such as the instantaneous imbalance between sessions, the autocorrelation function, and the performance degradation due to synchronization of losses. When aggregating traffic obtained from this model, one obtains, for certain ranges of the parameters, short-time-scale statistical properties that are consistent with a fractal scaling similar to what was identified on real traces using wavelets.
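The flavor of the model can be conveyed with a toy discrete-event simulation (an illustrative simplification, not the paper's exact random-matrix formulation): rates grow additively until the link fills, then at the congestion event each session halves its rate with a probability playing the role of the synchronization rate.

```python
import random

def aimd_sim(n_sessions=3, capacity=1.0, sync_prob=0.8, n_events=1000, seed=1):
    """Toy AIMD dynamics: additive increase up to the link capacity, then
    multiplicative decrease (halving) of each session w.p. sync_prob."""
    rng = random.Random(seed)
    x = [capacity / n_sessions] * n_sessions      # per-session throughputs
    history = []
    for _ in range(n_events):
        slack = capacity - sum(x)
        x = [xi + slack / n_sessions for xi in x]  # additive increase fills link
        halved = [rng.random() < sync_prob for _ in range(n_sessions)]
        if not any(halved):                        # congestion: someone backs off
            halved[rng.randrange(n_sessions)] = True
        x = [xi / 2 if h else xi for xi, h in zip(x, halved)]
        history.append(list(x))
    return history

history = aimd_sim()
```

With sync_prob near 1 the sessions halve in lockstep (synchronized losses); lowering it lets the rates desynchronize and the imbalance between sessions fluctuate.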
An overview of some stochastic stability methods
 J. Oper. Res. Soc. Japan
Cited by 36 (5 self)
This paper presents an overview of stochastic stability methods, mostly motivated by (but not limited to) stochastic network applications. We work with stochastic recursive sequences and, in particular, Markov chains on a general Polish state space. We discuss, and frequently compare, methods based on (i) Lyapunov functions, (ii) fluid limits, (iii) explicit coupling (renovating events and Harris chains), and (iv) monotonicity. We also discuss existence of stationary solutions and instability methods. The paper focuses on methods and uses examples only to illustrate the theory. Proofs are given insofar as they contain new, unpublished elements or are necessary for the logical flow of the exposition.
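Method (i) is the easiest to illustrate concretely: for a reflected random walk, a linear Lyapunov function exhibits the uniform negative drift that a Foster–Lyapunov criterion turns into positive recurrence. The chain and the function V below are textbook assumptions, not examples from the paper.

```python
def drift(x, p=0.4):
    """One-step Lyapunov drift E[V(X1) | X0 = x] - V(x), with V(x) = x, for
    the reflected walk on {0, 1, 2, ...}: up w.p. p, down w.p. 1 - p
    (a down-move from 0 stays at 0)."""
    up = x + 1
    down = max(x - 1, 0)
    return p * up + (1 - p) * down - x

# Foster-Lyapunov check: drift = 2p - 1 = -0.2 for every x >= 1 when p = 0.4,
# i.e. a uniform negative drift off a finite set, which yields positive recurrence
drifts = [drift(x) for x in range(1, 50)]
```

Only at the boundary state 0 is the drift positive, which is exactly the "outside a finite set" exception the criterion allows.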
Ergodic Theorems for Markov chains represented by Iterated Function Systems
 Bull. Polish Acad. Sci. Math.
, 1998
Cited by 29 (2 self)
We consider Markov chains represented in the form Xn+1 = f(Xn, In), where {In} is a sequence of independent, identically distributed (i.i.d.) random variables and f is a measurable function. Any Markov chain {Xn} on a Polish state space may be represented in this form, i.e., it can be considered as arising from an iterated function system (IFS). A distributional ergodic theorem, including rates of convergence in the Kantorovich distance, is proved for Markov chains under the condition that an IFS representation is "stochastically contractive" and "stochastically bounded". We apply this result to prove our main theorem, giving upper bounds for distances between invariant probability measures for iterated function systems. We also give some examples indicating how ergodic theorems for Markov chains may be proved by finding contractive IFS representations. These ideas are applied to some Markov chains arising from iterated function systems with place-dependent probabilities.
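The contraction condition is easy to see in a toy affine IFS: two copies of the chain driven by the same innovations contract at the Lipschitz rate regardless of their starting points, which is what makes the coupling argument work. The specific map f below is an assumed example, not one from the paper.

```python
import random

def f(x, i):
    # affine IFS map with Lipschitz constant 0.5 < 1 in x
    return 0.5 * x + i

def coupled_distance(x0, y0, n=40, seed=0):
    """Run X_{n+1} = f(X_n, I_n) from two starting points with shared
    innovations I_n; the gap shrinks by the Lipschitz factor each step."""
    rng = random.Random(seed)
    x, y = x0, y0
    for _ in range(n):
        i = rng.random()              # common innovation for both copies
        x, y = f(x, i), f(y, i)
    return abs(x - y)
```

After n steps the gap is 0.5**n times the initial gap, so the chain forgets its initial condition geometrically fast — the mechanism behind the Kantorovich-distance rates.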
On distributional properties of perpetuities
, 2008
Cited by 26 (5 self)
We study probability distributions of convergent random series of a special structure, called perpetuities. By giving a new argument, we prove that such distributions are of pure type: degenerate, absolutely continuous, or continuously singular. We further provide necessary and sufficient criteria for the finiteness of p-moments, p > 0, as well as exponential moments. In particular, a formula for the abscissa of convergence of the moment generating function is provided. The results are illustrated with a number of examples at the end of the article.
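A perpetuity is the a.s. limit of the series B1 + A1·B2 + A1·A2·B3 + …, which converges when E[log A] < 0; a truncated version is easy to sample. The Uniform choices for the (A, B) pairs below are illustrative assumptions, not distributions studied in the paper.

```python
import random

def perpetuity_sample(n_terms=60, rng=random):
    """Truncated perpetuity: sum over n >= 0 of (A_1 ... A_n) * B_{n+1}."""
    total, prod = 0.0, 1.0
    for _ in range(n_terms):
        a = 0.5 * rng.random()     # A_n ~ Uniform(0, 0.5), so E[log A] < 0
        b = rng.random()           # B_n ~ Uniform(0, 1)
        total += prod * b          # add (A_1 ... A_n) * B_{n+1}
        prod *= a                  # extend the running product of A's
    return total

rng = random.Random(1)
samples = [perpetuity_sample(rng=rng) for _ in range(500)]
```

Equivalently, the limit X solves the fixed-point equation X =d A·X + B with (A, B) independent of X, which is the form the moment criteria are stated for.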
Opinion Fluctuations and Disagreement in Social Networks
 Submitted to the Annals of Applied Probability
, 2010
Cited by 26 (5 self)
We study a stochastic gossip model of continuous opinion dynamics in a society consisting of two types of agents: regular agents, who update their beliefs according to information that they receive from their social neighbors; and stubborn agents, who never update their opinions and might represent leaders, political parties or media sources attempting to influence the beliefs in the rest of the society. When the society contains stubborn agents with different opinions, opinion dynamics never lead to a consensus (among the regular agents). Instead, beliefs in the society almost surely fail to converge, and the belief of each regular agent converges in law to a nondegenerate random variable. The model thus generates long-run disagreement and continuous opinion fluctuations. The structure of the social network and the location of stubborn agents within it shape opinion dynamics. When the society is “highly fluid”, meaning that the mixing time of the random walk on the graph describing the social network is small relative to (the inverse of) the relative size of the linkages to stubborn agents, the ergodic beliefs of most of the agents concentrate around a certain common value. We also show that under additional conditions, the ergodic beliefs distribution becomes “approximately chaotic”, meaning that the variance of the aggregate belief of the society vanishes in the large population limit while individual opinions still fluctuate significantly.
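The gossip mechanism is straightforward to simulate. In this toy sketch — a path graph with stubborn endpoints pinned at opinions 0 and 1, all illustrative assumptions rather than the paper's setup — a regular agent's belief keeps fluctuating instead of settling to a consensus value.

```python
import random

def gossip(n_agents=6, n_steps=2000, seed=3):
    """Pairwise gossip on a path graph. Agents 0 and n-1 are stubborn
    (opinions pinned at 0 and 1); on contact, regular agents move to the
    average of the pair. Returns the belief trajectory of a middle agent."""
    rng = random.Random(seed)
    beliefs = [0.5] * n_agents
    beliefs[0], beliefs[-1] = 0.0, 1.0
    stubborn = {0, n_agents - 1}
    trajectory = []
    for _ in range(n_steps):
        i = rng.randrange(n_agents - 1)   # pick a random edge (i, i+1)
        j = i + 1
        avg = (beliefs[i] + beliefs[j]) / 2
        if i not in stubborn:
            beliefs[i] = avg
        if j not in stubborn:
            beliefs[j] = avg
        trajectory.append(beliefs[n_agents // 2])
    return trajectory

traj = gossip()
```

Because the two stubborn endpoints disagree forever, the interior beliefs never coalesce; the trajectory wanders persistently between 0 and 1, matching the no-consensus behavior described above.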