Results 1–10 of 60
Iterated random functions
SIAM Review, 1999
Cited by 131 (1 self)
Abstract. Iterated random functions are used to draw pictures or simulate large Ising models, among other applications. They offer a method for studying the steady state distribution of a Markov chain, and give useful bounds on rates of convergence in a variety of examples. The present paper surveys the field and presents some new examples. There is a simple unifying idea: the iterates of random Lipschitz functions converge if the functions are contracting on the average. 1. Introduction. The
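The survey's unifying idea can be sketched with a toy iterated function system (the two affine maps below are illustrative, not from the paper): each map is a contraction, so iterates forget the starting point and the chain converges to a unique stationary distribution.

```python
import random

# Toy iterated random function system (hypothetical parameters).
# Each step applies a randomly chosen affine map x -> a*x + b.
# All maps here have |a| < 1, so the system is contracting on average
# and the iterates converge in distribution to a stationary law.

MAPS = [(0.5, 1.0), (0.3, -2.0)]  # (slope a, intercept b) pairs, assumed

def iterate(x0, n, rng):
    """Forward-iterate n randomly chosen maps starting from x0."""
    x = x0
    for _ in range(n):
        a, b = rng.choice(MAPS)
        x = a * x + b
    return x

rng = random.Random(0)
# After many iterations the starting point is forgotten; these samples
# approximate draws from the stationary distribution of the chain.
samples = [iterate(0.0, 200, rng) for _ in range(1000)]
```

Because every slope satisfies |a| ≤ 0.5 and |b| ≤ 2, the stationary law is supported in [−4, 4]; two copies driven by the same random map choices contract together geometrically fast.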
Pathological Outcomes of Observational Learning
ECONOMETRICA, 1999
Cited by 51 (2 self)
This paper explores how Bayes-rational individuals learn sequentially from the discrete actions of others. Unlike earlier informational herding papers, we admit heterogeneous preferences. Not only may type-specific `herds' eventually arise, but a new robust possibility emerges: confounded learning. Beliefs may converge to a limit point where history offers no decisive lessons for anyone, and each type's actions forever nontrivially split between two actions. To verify that our identified limit outcomes do arise, we exploit the Markov-martingale character of beliefs. Learning dynamics are stochastically stable near a fixed point in many Bayesian learning models like this one.
Finding Chaos in Noisy Systems
1991
Cited by 50 (1 self)
In the past twenty years there has been much interest in the physical and biological sciences in nonlinear dynamical systems that appear to have random, unpredictable behavior. One important parameter of a dynamical system is the dominant Lyapunov exponent (LE). When the behavior of the system is compared for two similar initial conditions, this exponent is related to the rate at which the subsequent trajectories diverge. A bounded system with a positive LE is one operational definition of chaotic behavior. Most methods for determining the LE have assumed thousands of observations generated from carefully controlled physical experiments. Less attention has been given to estimating the LE for biological and economic systems that are subjected to random perturbations and observed over a limited amount of time. Using nonparametric regression techniques (Neural Networks and Thin Plate Splines), it is possible to consistently estimate the LE. The properties of these methods have been studied using simulated data, and the methods are applied to a biological time series: marten fur returns for the Hudson Bay Company (1820–1900). Based on a nonparametric analysis there is little evidence for low-dimensional chaos in these data. Although these methods appear to work well for systems perturbed by small amounts of noise, finding chaos in a system with a significant stochastic component may be difficult.
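To make the dominant Lyapunov exponent concrete, here is a minimal illustration for a map that is fully known, the logistic map; this is not the paper's nonparametric estimator, which must work from noisy data without knowing the map. When the map is known, the LE is the long-run average of log|f′(x)| along a trajectory.

```python
import math

# Dominant Lyapunov exponent (LE) of the logistic map x -> r*x*(1-x),
# estimated as the trajectory average of log|f'(x)| = log|r*(1-2x)|.
# (Illustrative only: NOT the paper's nonparametric regression estimator.)

def lyapunov_logistic(r, x0=0.2, n=200_000, burn_in=1000):
    x = x0
    for _ in range(burn_in):                      # discard transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))   # log of local stretch rate
        x = r * x * (1 - x)
    return total / n

# r = 4.0 is fully chaotic: the exact LE is log(2) > 0.
# r = 3.2 settles on a stable 2-cycle: the LE is negative.
```

A positive LE (as at r = 4) is exactly the operational definition of chaos mentioned in the abstract; the estimator's difficulty in the paper's setting is that f is unknown and the data are noisy.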
Extension of Fill’s perfect rejection sampling algorithm to general chains (extended abstract)
Pages 37–52 in Monte Carlo Methods, 2000
Cited by 42 (13 self)
By developing and applying a broad framework for rejection sampling using auxiliary randomness, we provide an extension of the perfect sampling algorithm of Fill (1998) to general chains on quite general state spaces, and describe how use of bounding processes can ease computational burden. Along the way, we unearth a simple connection between the Coupling From The Past (CFTP) algorithm originated by Propp and Wilson (1996) and our extension of Fill’s algorithm. Key words and phrases. Fill’s algorithm, Markov chain Monte Carlo, perfect sampling, exact sampling, rejection sampling, interruptibility, coupling from the past, read-once coupling from the past, monotone transition rule, realizable monotonicity, stochastic monotonicity, partially ordered set, coalescence, imputation,
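The paper's framework generalizes classical rejection sampling; a minimal sketch of that primitive (not Fill's algorithm itself, with a made-up target density) may help fix ideas:

```python
import random

# Classical rejection sampling -- the primitive the paper's framework
# generalizes; this is NOT Fill's perfect sampling algorithm itself.
# Target: density proportional to p(x) = x**2 on [0, 1] (i.e. 3*x**2
# normalized); proposal: Uniform(0, 1); envelope constant M = 1 since
# p(x) <= 1 on [0, 1].

def p(x):
    return x * x                   # unnormalized target density

def rejection_sample(rng):
    while True:
        x = rng.random()           # draw from the Uniform(0,1) proposal
        if rng.random() <= p(x):   # accept with probability p(x)/M, M = 1
            return x

rng = random.Random(3)
samples = [rejection_sample(rng) for _ in range(5000)]
```

Fill's algorithm replaces the explicit envelope with auxiliary randomness from a time-reversed chain, but the accept/reject skeleton is the same.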
Perfect Simulation and Backward Coupling
 Comm. Statist. Stochastic Models
Cited by 30 (2 self)
Algorithms for perfect or exact simulation of random samples from the invariant measure of a Markov chain have received considerable recent attention following the introduction of the "coupling-from-the-past" (CFTP) technique of Propp and Wilson. Here we place such algorithms in the context of backward coupling of stochastically recursive sequences. We show that although general backward couplings can be constructed for chains with finite mean forward coupling times, and can even be thought of as extending the classical "Loynes schemes" from queueing theory, successful "vertical" CFTP algorithms such as those of Propp and Wilson can be constructed if and only if the chain is uniformly geometrically ergodic. We also relate the convergence moments for backward coupling methods to those of forward coupling times: the former typically lose at most one moment compared to the latter.
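A minimal sketch of "vertical" CFTP for a small monotone chain (the birth-death chain on {0,...,4} below is a made-up example, not from the paper): a single uniform drives a monotone update, so tracking only the minimal and maximal states suffices, and coalescence by time 0 yields an exact stationary draw.

```python
import random

# Propp-Wilson coupling-from-the-past for a toy birth-death chain on
# {0,...,4} (hypothetical example chain).  The update is monotone in x
# for fixed u, so only the chains from the bottom and top states matter.

N_STATES = 5

def update(x, u):
    """Monotone random-function update: up if u < 0.4, down if u > 0.6."""
    if u < 0.4:
        return min(x + 1, N_STATES - 1)
    if u > 0.6:
        return max(x - 1, 0)
    return x

def cftp(rng):
    """Extend the randomness further into the past (doubling t) until the
    minimal and maximal chains coalesce by time 0; REUSE old randomness."""
    us = []                              # us[k] drives the step -(k+1) -> -k
    t = 1
    while True:
        while len(us) < t:
            us.append(rng.random())
        lo, hi = 0, N_STATES - 1
        for k in range(t - 1, -1, -1):   # apply maps from time -t up to 0
            lo, hi = update(lo, us[k]), update(hi, us[k])
        if lo == hi:
            return lo                    # exact draw from the invariant law
        t *= 2

rng = random.Random(42)
samples = [cftp(rng) for _ in range(2000)]
```

For this chain, detailed balance makes the invariant measure uniform on the five states, so the exact samples should average about 2. The "if and only if" result in the abstract says such vertical schemes succeed precisely for uniformly ergodic chains; this finite chain trivially qualifies.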
Simulating The Invariant Measures Of Markov Chains Using Backward Coupling At Regeneration Times
Prob. Eng. Inf. Sci., 1998
Cited by 17 (9 self)
We develop an algorithm for simulating approximate random samples from the invariant measure of a Markov chain using backward coupling of embedded regeneration times. Related methods have been used effectively for finite chains and for stochastically monotone chains: here we propose a method of implementation which avoids these restrictions by using a "cycle-length" truncation. We show that the coupling times have good theoretical properties and describe benefits and difficulties of implementing the methods in practice.

1 Introduction

There has been considerable recent work on the development and application of algorithms that will enable the simulation of the invariant measure π of a Markov chain, either exactly (that is, by drawing a random sample known to be from π) or approximately, but with computable order of accuracy. These were sparked by the seminal paper of Propp and Wilson [18], and several variations and extensions of this idea have appeared in the literature including rece...
Recent Results About Stable Ergodicity
In Smooth ergodic theory and its applications, 2000
Cited by 16 (3 self)
this paper, has been directed toward extending their results beyond Axiom A.
Ergodic Theorems for Markov chains represented by Iterated Function Systems
BULL. POLISH ACAD. SCI. MATH, 1998
Cited by 16 (2 self)
We consider Markov chains represented in the form X_{n+1} = f(X_n, I_n), where {I_n} is a sequence of independent, identically distributed (i.i.d.) random variables, and where f is a measurable function. Any Markov chain {X_n} on a Polish state space may be represented in this form, i.e., can be considered as arising from an iterated function system (IFS). A distributional ergodic theorem, including rates of convergence in the Kantorovich distance, is proved for Markov chains under the condition that an IFS representation is "stochastically contractive" and "stochastically bounded". We apply this result to prove our main theorem giving upper bounds for distances between invariant probability measures for iterated function systems. We also give some examples indicating how ergodic theorems for Markov chains may be proved by finding contractive IFS representations. These ideas are applied to some Markov chains arising from iterated function systems with place-dependent probabilities.
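The IFS representation X_{n+1} = f(X_n, I_n) can be illustrated with a toy AR(1) chain (the parameters below are illustrative, not from the paper): the map x → A·x + i is Lipschitz in x with constant |A| < 1, so two copies driven by the same innovations couple geometrically fast, which is the mechanism behind Kantorovich-distance convergence rates.

```python
import random

# IFS representation X_{n+1} = f(X_n, I_n) for a toy AR(1) chain
# (hypothetical parameters): f(x, i) = A*x + i with |A| < 1 and i.i.d.
# Gaussian innovations.  The representation is stochastically contractive
# with Lipschitz constant |A|.

A = 0.8  # assumed contraction factor, |A| < 1

def f(x, i):
    return A * x + i

rng = random.Random(0)
innovations = [rng.gauss(0.0, 1.0) for _ in range(60)]

# Two copies from very different starting points, driven by the SAME
# innovations: the difference shrinks by a factor A at every step.
xa, xb = 10.0, -10.0
for i in innovations:
    xa, xb = f(xa, i), f(xb, i)
# |xa - xb| = A**60 * |10 - (-10)|, i.e. geometric forgetting of the start.
```

This pathwise contraction of two coupled copies is exactly what "stochastically contractive" buys in general: the distributional distance between chains started anywhere decays at a geometric rate.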
Stochastic Approximation for Nonexpansive Maps: Application to Q-Learning Algorithms
2002
Cited by 15 (6 self)
We discuss synchronous and asynchronous iterations of the form x_{k+1} = x_k + γ(k)(h(x_k) + w_k), where h is a suitable map and {w_k} is a deterministic or stochastic sequence satisfying suitable conditions. In particular, in the stochastic case, these are stochastic approximation iterations that can be analyzed using the ODE approach based either on Kushner and Clark’s lemma for the synchronous case or on Borkar’s theorem for the asynchronous case. However, the analysis requires that the iterates {x_k} be bounded, a fact which is usually hard to prove. We develop a novel framework for proving boundedness in the deterministic framework, which is also applicable to the stochastic case when the deterministic hypotheses can be verified in the almost sure sense. This is based on scaling ideas and on the properties of Lyapunov functions. We then combine the boundedness property with Borkar’s stability analysis of ODEs involving nonexpansive mappings to prove convergence (with probability 1 in the stochastic case). We also apply our convergence analysis to Q-learning algorithms for stochastic shortest path problems and are able to relax some of the assumptions of the currently available results.
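A minimal instance of the iteration x_{k+1} = x_k + γ(k)(h(x_k) + w_k) can be run with toy choices (the map h, step sizes, and noise below are illustrative, not the paper's): with γ(k) = 1/(k+1) the Robbins-Monro conditions Σγ(k) = ∞ and Σγ(k)² < ∞ hold, so the iterates converge to the root of h with probability 1.

```python
import random

# Toy stochastic approximation iteration
#   x_{k+1} = x_k + gamma(k) * (h(x_k) + w_k)
# with illustrative choices: h(x) = 2.0 - x (root at x* = 2),
# gamma(k) = 1/(k+1), and bounded i.i.d. zero-mean noise w_k.

def h(x):
    return 2.0 - x

def run_sa(x0, n, rng):
    x = x0
    for k in range(n):
        gamma = 1.0 / (k + 1)
        w = rng.uniform(-0.5, 0.5)      # zero-mean noise
        x = x + gamma * (h(x) + w)
    return x

x_final = run_sa(0.0, 50_000, random.Random(1))
# x_k converges to the root x* = 2 of h with probability 1.
```

For this linear h the recursion reduces to a running average of the noisy targets 2 + w_k, which makes the almost-sure convergence transparent; the paper's contribution is handling the much harder case where h is merely nonexpansive and boundedness of {x_k} must be established separately.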
A cascade decomposition theory with applications to Markov and exchangeable cascades
Trans. Amer. Math. Soc., 1996
Cited by 13 (2 self)
Abstract. A multiplicative random cascade refers to a positive T-martingale in the sense of Kahane on the ultrametric space T = {0, 1, ..., b−1}^N. A new approach to the study of multiplicative cascades is introduced. The methods apply broadly to the problems of: (i) nondegeneracy criterion, (ii) dimension spectra of carrying sets, and (iii) divergence of moments criterion. Specific applications are given to cascades generated by Markov and exchangeable processes, as well as to homogeneous independent cascades.

1. Positive T-martingales

Positive T-martingales were introduced by Jean-Pierre Kahane as the general framework for independent multiplicative cascades and random coverings. Although originating in statistical theories of turbulence, the general framework also includes certain spin-glass and random polymer models as well as various other spatial distributions of interest in both probability theory and the physical sciences. For basic definitions, let T be a compact metric space with Borel sigma-field B, and let (Ω, F, P) be a probability space together with an increasing sequence F_n, n = 1, 2, ..., of sub-sigma-fields of F. A positive T-martingale is a sequence {Q_n} of B × F-measurable nonnegative functions on T × Ω such that (i) for each t ∈ T, {Q_n(t, ·) : n = 0, 1, ...} is a martingale adapted to F_n, n = 0, 1, ...; (ii) for P-a.s. ω ∈ Ω, {Q_n(·, ω) : n = 0, 1, ...} is a sequence of Borel measurable nonnegative real-valued functions on T. Let M^+(T) denote the space of positive Borel measures on T and suppose that {Q_n(t)} is a positive T-martingale. For σ ∈ M^+(T) such that q(t) := E Q_n(t) ∈ L^1(σ), let σ_n ≡ Q_n σ denote the random measure defined by Q_n σ << σ and dQ_n σ/dσ (t) := Q_n(t), t ∈ T. Then, essentially by the martingale convergence theorem, one obtains a random Borel measure σ_∞ ≡ Q_∞ σ such that for f ∈ C(T),

(1.1)  lim_{n→∞} ∫ f(t) Q_n(t, ω) σ(dt) = ∫ f(t) Q_∞ σ(dt, ω) a.s.