Results 1–10 of 59

Iterated random functions
SIAM Review, 1999
Cited by 131 (1 self)
Abstract: Iterated random functions are used to draw pictures or simulate large Ising models, among other applications. They offer a method for studying the steady state distribution of a Markov chain, and give useful bounds on rates of convergence in a variety of examples. The present paper surveys the field and presents some new examples. There is a simple unifying idea: the iterates of random Lipschitz functions converge if the functions are contracting on the average.
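The "contracting on the average" idea can be illustrated with a minimal sketch. The two affine maps and their parameters below are hypothetical choices, not taken from the paper; the point is that backward iterates F_n(x) = f_1 ∘ ··· ∘ f_n(x) converge to a limit that does not depend on the starting point.

```python
import random

# Sketch: an iterated function system of random affine maps x -> a*x + b.
# With slopes 0.5 and -0.3 the maps are contracting (here even uniformly,
# a special case of "contracting on average"), so backward iterates
# F_n(x) = f_1(f_2(...f_n(x))) converge a.s. to a limit independent of x.
MAPS = [(0.5, 1.0), (-0.3, 2.0)]  # hypothetical (slope a, offset b) pairs

def backward_iterate(n, x0, seed=0):
    rng = random.Random(seed)
    fs = [rng.choice(MAPS) for _ in range(n)]  # f_1, ..., f_n
    x = x0
    for a, b in reversed(fs):  # apply f_n first, f_1 last
        x = a * x + b
    return x

# The backward limit forgets the initial condition:
print(abs(backward_iterate(60, 0.0) - backward_iterate(60, 9.0)) < 1e-6)  # True
```

With a common seed the two runs see the same maps, and the gap between them shrinks by the product of the slopes, here at most 9 · 0.5^60.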
Pathological Outcomes of Observational Learning
Econometrica, 1999
Cited by 51 (2 self)
Abstract: This paper explores how Bayes-rational individuals learn sequentially from the discrete actions of others. Unlike earlier informational herding papers, we admit heterogeneous preferences. Not only may type-specific `herds' eventually arise, but a new robust possibility emerges: confounded learning. Beliefs may converge to a limit point where history offers no decisive lessons for anyone, and each type's actions forever nontrivially split between two actions. To verify that our identified limit outcomes do arise, we exploit the Markov-martingale character of beliefs. Learning dynamics are stochastically stable near a fixed point in many Bayesian learning models like this one.
Finding Chaos in Noisy Systems
1991
Cited by 49 (1 self)
Abstract: In the past twenty years there has been much interest in the physical and biological sciences in nonlinear dynamical systems that appear to have random, unpredictable behavior. One important parameter of a dynamical system is the dominant Lyapunov exponent (LE). When the behavior of the system is compared for two similar initial conditions, this exponent is related to the rate at which the subsequent trajectories diverge. A bounded system with a positive LE is one operational definition of chaotic behavior. Most methods for determining the LE have assumed thousands of observations generated from carefully controlled physical experiments. Less attention has been given to estimating the LE for biological and economic systems that are subjected to random perturbations and observed over a limited amount of time. Using nonparametric regression techniques (neural networks and thin plate splines) it is possible to consistently estimate the LE. The properties of these methods have been studied using simulated data and are applied to a biological time series: marten fur returns for the Hudson's Bay Company (1820–1900). Based on a nonparametric analysis there is little evidence for low-dimensional chaos in these data. Although these methods appear to work well for systems perturbed by small amounts of noise, finding chaos in a system with a significant stochastic component may be difficult.
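For a noise-free system the dominant-LE computation reduces to averaging log|f′(x)| along a trajectory. The sketch below assumes the logistic map at r = 4, whose exponent is known analytically to be ln 2 ≈ 0.693; it does not reproduce the paper's nonparametric regression estimators.

```python
import math

# Sketch: estimate the dominant Lyapunov exponent of the logistic map
# x -> r*x*(1-x) by averaging log|f'(x)| = log|r*(1-2x)| along an orbit.
# A positive estimate flags sensitive dependence on initial conditions.
def lyapunov_logistic(r=4.0, x0=0.3, n=100_000, burn_in=1_000):
    x = x0
    for _ in range(burn_in):  # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))  # log of the local stretch
        x = r * x * (1 - x)
    return total / n

print(lyapunov_logistic())  # positive, close to ln 2
```

With observational noise this direct derivative average is unavailable, which is exactly why the paper fits the map nonparametrically first.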
Extension of Fill's perfect rejection sampling algorithm to general chains (extended abstract)
Pages 37–52 in Monte Carlo Methods, 2000
Cited by 41 (12 self)
Abstract: By developing and applying a broad framework for rejection sampling using auxiliary randomness, we provide an extension of the perfect sampling algorithm of Fill (1998) to general chains on quite general state spaces, and describe how the use of bounding processes can ease the computational burden. Along the way, we unearth a simple connection between the coupling-from-the-past (CFTP) algorithm originated by Propp and Wilson (1996) and our extension of Fill's algorithm.
Key words and phrases: Fill's algorithm, Markov chain Monte Carlo, perfect sampling, exact sampling, rejection sampling, interruptibility, coupling from the past, read-once coupling from the past, monotone transition rule, realizable monotonicity, stochastic monotonicity, partially ordered set, coalescence, imputation, ...
Perfect Simulation and Backward Coupling
Comm. Statist. Stochastic Models
Cited by 30 (2 self)
Abstract: Algorithms for perfect or exact simulation of random samples from the invariant measure of a Markov chain have received considerable recent attention following the introduction of the coupling-from-the-past (CFTP) technique of Propp and Wilson. Here we place such algorithms in the context of backward coupling of stochastically recursive sequences. We show that although general backward couplings can be constructed for chains with finite mean forward coupling times, and can even be thought of as extending the classical "Loynes schemes" from queueing theory, successful "vertical" CFTP algorithms such as those of Propp and Wilson can be constructed if and only if the chain is uniformly geometrically ergodic. We also relate the convergence moments for backward coupling methods to those of forward coupling times: the former typically lose at most one moment compared to the latter.
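The CFTP construction for a stochastically monotone chain fits in a few lines. The birth–death chain and its parameters below are hypothetical stand-ins, not from the paper; the essential ingredients are a monotone transition rule driven by common randomness and the rule that earlier randomness is reused, never redrawn.

```python
import random

# Sketch: monotone coupling-from-the-past (Propp-Wilson) on a toy
# birth-death chain on {0, ..., N}.  One uniform drives every state
# (a monotone transition rule), so it suffices to track the minimal
# and maximal states; when they coalesce by time 0, the common value
# is an exact draw from the invariant measure.
N, P_UP = 4, 0.5

def step(x, u):
    return min(x + 1, N) if u < P_UP else max(x - 1, 0)

def cftp(seed=0):
    rng = random.Random(seed)
    us = []          # us[t] drives the step taken at time -(t+1); reused as T doubles
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        lo, hi = 0, N
        for t in range(T - 1, -1, -1):  # run from time -T up to time 0
            lo, hi = step(lo, us[t]), step(hi, us[t])
        if lo == hi:
            return lo                   # exact stationary sample
        T *= 2

print(cftp())
```

Note the doubling of T reuses the stored uniforms for times −1, …, −T and only draws fresh ones for earlier times; redrawing them would bias the output.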
Simulating The Invariant Measures Of Markov Chains Using Backward Coupling At Regeneration Times
Prob. Eng. Inf. Sci., 1998
Cited by 17 (9 self)
Abstract: We develop an algorithm for simulating approximate random samples from the invariant measure of a Markov chain using backward coupling of embedded regeneration times. Related methods have been used effectively for finite chains and for stochastically monotone chains; here we propose a method of implementation which avoids these restrictions by using a "cycle-length" truncation. We show that the coupling times have good theoretical properties and describe benefits and difficulties of implementing the methods in practice.
1 Introduction. There has been considerable recent work on the development and application of algorithms that enable the simulation of the invariant measure π of a Markov chain, either exactly (that is, by drawing a random sample known to be from π) or approximately, but with computable order of accuracy. These were sparked by the seminal paper of Propp and Wilson [18], and several variations and extensions of this idea have appeared in the literature, including rece...
Recent Results About Stable Ergodicity
In Smooth ergodic theory and its applications, 2000
Cited by 16 (3 self)
Abstract: "... this paper, has been directed toward extending their results beyond Axiom A."
Ergodic Theorems for Markov Chains Represented by Iterated Function Systems
Bull. Polish Acad. Sci. Math., 1998
Cited by 16 (2 self)
Abstract: We consider Markov chains represented in the form X_{n+1} = f(X_n, I_n), where {I_n} is a sequence of independent, identically distributed (i.i.d.) random variables, and where f is a measurable function. Any Markov chain {X_n} on a Polish state space may be represented in this form, i.e. can be considered as arising from an iterated function system (IFS). A distributional ergodic theorem, including rates of convergence in the Kantorovich distance, is proved for Markov chains under the condition that an IFS representation is "stochastically contractive" and "stochastically bounded". We apply this result to prove our main theorem giving upper bounds for distances between invariant probability measures for iterated function systems. We also give some examples indicating how ergodic theorems for Markov chains may be proved by finding contractive IFS representations. These ideas are applied to some Markov chains arising from iterated function systems with place-dependent probabilities.
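An IFS representation X_{n+1} = f(X_n, I_n) is easy to exhibit concretely. The sketch below assumes a hypothetical 3-state chain and the standard inverse-CDF construction of f (for finite chains: pick the next state by inverting the CDF of the current transition row against a uniform driver).

```python
import random

# Sketch: a finite-state Markov chain as an iterated function system
# X_{n+1} = f(X_n, I_n) with I_n i.i.d. Uniform(0,1).  f(x, u) selects
# the next state by inverting the CDF of row x of the transition matrix.
# P is a hypothetical 3-state transition matrix, not from the paper.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

def f(x, u):
    cum = 0.0
    for y, p in enumerate(P[x]):
        cum += p
        if u < cum:
            return y
    return len(P[x]) - 1  # guard against floating-point round-off

def run_chain(x0, n, seed=0):
    rng = random.Random(seed)
    x = x0
    for _ in range(n):
        x = f(x, rng.random())  # same f, fresh i.i.d. driver each step
    return x

print(run_chain(0, 50))
```

Feeding the same driver sequence to chains started at different states gives a coupling: here any step with u < 0.1 sends every state to 0 at once, so copies from different starts typically merge quickly.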
Stochastic Approximation for Nonexpansive Maps: Application to Q-Learning Algorithms
2002
Cited by 13 (5 self)
Abstract: We discuss synchronous and asynchronous iterations of the form x_{k+1} = x_k + γ(k)(h(x_k) + w_k), where h is a suitable map and {w_k} is a deterministic or stochastic sequence satisfying suitable conditions. In particular, in the stochastic case, these are stochastic approximation iterations that can be analyzed using the ODE approach, based either on Kushner and Clark's lemma for the synchronous case or on Borkar's theorem for the asynchronous case. However, the analysis requires that the iterates {x_k} be bounded, a fact which is usually hard to prove. We develop a novel framework for proving boundedness in the deterministic framework, which is also applicable to the stochastic case when the deterministic hypotheses can be verified in the almost sure sense. This is based on scaling ideas and on the properties of Lyapunov functions. We then combine the boundedness property with Borkar's stability analysis of ODEs involving nonexpansive mappings to prove convergence (with probability 1 in the stochastic case). We also apply our convergence analysis to Q-learning algorithms for stochastic shortest path problems and are able to relax some of the assumptions of the currently available results.
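A toy scalar instance of the iteration x_{k+1} = x_k + γ(k)(h(x_k) + w_k) makes the convergence behavior tangible. The target m = 2.0, the uniform noise, and the step schedule γ(k) = 1/(k+1) are illustrative assumptions, not the paper's Q-learning setting.

```python
import random

# Sketch: a scalar Robbins-Monro iteration with h(x) = m - x (unique
# fixed point m) and i.i.d. zero-mean noise w_k.  With steps
# γ(k) = 1/(k+1) the iterates stay bounded and converge to m w.p. 1;
# m = 2.0 and the uniform noise are hypothetical choices.
def robbins_monro(m=2.0, n=200_000, seed=0):
    rng = random.Random(seed)
    x = 0.0
    for k in range(n):
        w = rng.uniform(-1.0, 1.0)                 # zero-mean noise
        x += (1.0 / (k + 1)) * ((m - x) + w)       # x_{k+1} = x_k + γ(k)(h(x_k)+w_k)
    return x

print(robbins_monro())  # close to 2.0
```

For this particular h and step size the recursion reduces to a running average of the noisy targets m + w_k, which is why the noise washes out; proving boundedness for general nonexpansive h is the hard part the paper addresses.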
Locally Contractive Iterated Function Systems
1998
Cited by 12 (1 self)
Abstract: An iterated function system on X ⊆ R^d is defined by successively iterating an i.i.d. sequence of random Lipschitz functions from X to X. This paper shows how F_n = f_1 ∘ ··· ∘ f_n may converge even in the absence of the strong contraction conditions (for instance, Lipschitz constant smaller than 1 on average) which earlier work has required. Instead, it is posited that there be a region of contraction which compensates for the non-contractive or even expansive part of the functions. Applications to self-modifying random walks and to random logistic maps are given.
1. Introduction. A metric space (X, ρ), together with a probability measure μ on the set F of maps from X to itself, defines an iterated function system. Consider a sequence f_1, f_2, ... of i.i.d. F-valued random variables distributed according to μ, and form the following two compositions: F_n(x) = f_1 ∘ f_2 ∘ ··· ∘ f_n(x), F̃_n(x) = f_n ∘ f_{n-1} ∘ ...
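The two compositions F_n (backward) and F̃_n (forward) behave very differently pointwise, which a short sketch with hypothetical affine maps can show: backward iterates settle to a limit, while forward iterates keep moving (only their distribution converges).

```python
import random

# Sketch: backward composition F_n = f_1 ∘ ... ∘ f_n versus forward
# composition F̃_n = f_n ∘ ... ∘ f_1 for one i.i.d. draw of affine maps.
# The maps are hypothetical and contracting on average.
MAPS = [(0.6, 1.0), (-0.4, -2.0)]  # (slope a, offset b)

def compose(x, fs):
    for a, b in fs:  # first element of fs is applied first
        x = a * x + b
    return x

rng = random.Random(1)
fs = [rng.choice(MAPS) for _ in range(80)]  # f_1, ..., f_80

# F_n applies f_n first and f_1 last; F̃_n applies f_1 first.
backward = [compose(0.0, list(reversed(fs[:n]))) for n in (40, 60, 80)]
forward = [compose(0.0, fs[:n]) for n in (40, 60, 80)]

print(max(backward) - min(backward) < 1e-6)  # True: backward has converged
print([round(v, 3) for v in forward])        # typically still fluctuating
```

Extending a backward composition prepends new maps *inside* the long contracting prefix f_1 ∘ ··· ∘ f_n, so successive values differ by at most the product of the Lipschitz constants; extending a forward composition applies a fresh map on the outside, so the value keeps jumping.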