Results 1–10 of 111
Iterated random functions
 SIAM Review
, 1999
Abstract

Cited by 223 (2 self)
Abstract. Iterated random functions are used to draw pictures or simulate large Ising models, among other applications. They offer a method for studying the steady-state distribution of a Markov chain, and give useful bounds on rates of convergence in a variety of examples. The present paper surveys the field and presents some new examples. There is a simple unifying idea: the iterates of random Lipschitz functions converge if the functions are contracting on the average.
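The unifying idea in this abstract can be illustrated with a minimal random affine recursion; the function name and parameter values below are illustrative, not from the paper. Two trajectories driven by the same random maps coalesce because every map is a contraction:

```python
import random

def coalesce_demo(n_steps=200, seed=1):
    """Run two copies of X_{k+1} = A_k * X_k + B_k from far-apart starting
    points, driven by the SAME random pairs (A_k, B_k).  Both possible
    slopes are < 1, so the maps are contracting and the two trajectories
    coalesce, illustrating convergence of the iterates in distribution."""
    rng = random.Random(seed)
    x, y = -50.0, 50.0
    for _ in range(n_steps):
        a = rng.choice([0.3, 0.8])   # random Lipschitz constant, both < 1
        b = rng.gauss(0.0, 1.0)      # random shift
        x, y = a * x + b, a * y + b  # apply the same random map to both chains
    return x, y
```

After 200 steps the initial gap of 100 has shrunk by at least a factor of 0.8^200, so the two values agree to well below floating-point noise.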
What are SRB measures, and which dynamical systems have them?
Abstract

Cited by 130 (12 self)
This is a slightly expanded version of the text of a lecture I gave at a conference at Rutgers University in honor of David Ruelle and Yasha Sinai. In this lecture I reported on some of the main results surrounding an invariant measure introduced by Sinai, Ruelle and Bowen in the 1970s. SRB measures, as these objects are called, play an important role in the ergodic theory of dissipative dynamical systems with chaotic behavior. Roughly speaking:
• SRB measures are the invariant measures most compatible with volume when volume is not preserved;
• they provide a mechanism for explaining how local instability on attractors can produce coherent statistics for orbits starting from large sets in the basin.
An outline of this paper is as follows. The original work of Sinai, Ruelle and Bowen was carried out in the context of Anosov and Axiom A systems. For these dynamical systems they identified and constructed an invariant measure which is uniquely important from several different points of view. These pioneering works are reviewed in Section 1. Subsequently, a nonuniform, almost-everywhere notion of hyperbolicity expressed in terms of Lyapunov exponents was developed. This notion provided a new framework for the ideas in the last paragraph. While not all of the previous characterizations are equivalent in this broader setting, the central ideas have remained intact, leading to a more general notion of SRB measures. This is discussed in Section 2. The extension above opened the door to the possibility that the dynamics on many attractors are described by SRB measures. Determining whether this is (or is not) the case, however, let alone proving it, has turned out to be very challenging. No genuinely nonuniformly hyperbolic examples were known until the early 1990s, when SRB measures were constructed for certain Hénon maps. Today we still do not have a good understanding of which dynamical systems admit SRB measures, but some progress has been made; a sample of it is reported in Section 3.
Pathological Outcomes of Observational Learning
 ECONOMETRICA
, 1999
Abstract

Cited by 112 (3 self)
This paper explores how Bayes-rational individuals learn sequentially from the discrete actions of others. Unlike earlier informational herding papers, we admit heterogeneous preferences. Not only may type-specific 'herds' eventually arise, but a new robust possibility emerges: confounded learning. Beliefs may converge to a limit point where history offers no decisive lessons for anyone, and each type's actions forever nontrivially split between two actions. To verify that our identified limit outcomes do arise, we exploit the Markov-martingale character of beliefs. Learning dynamics are stochastically stable near a fixed point in many Bayesian learning models like this one.
Finding Chaos in Noisy Systems
, 1991
Abstract

Cited by 70 (2 self)
In the past twenty years there has been much interest in the physical and biological sciences in nonlinear dynamical systems that appear to have random, unpredictable behavior. One important parameter of a dynamical system is the dominant Lyapunov exponent (LE). When the behavior of the system is compared for two similar initial conditions, this exponent is related to the rate at which the subsequent trajectories diverge. A bounded system with a positive LE is one operational definition of chaotic behavior. Most methods for determining the LE have assumed thousands of observations generated from carefully controlled physical experiments. Less attention has been given to estimating the LE for biological and economic systems that are subjected to random perturbations and observed over a limited amount of time. Using nonparametric regression techniques (neural networks and thin-plate splines) it is possible to consistently estimate the LE. The properties of these methods have been studied using simulated data and are applied to a biological time series: marten fur returns for the Hudson Bay Company (1820–1900). Based on a nonparametric analysis there is little evidence for low-dimensional chaos in these data. Although these methods appear to work well for systems perturbed by small amounts of noise, finding chaos in a system with a significant stochastic component may be difficult.
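As a toy version of the divergence-rate idea, the dominant LE of a fully observed, noise-free map can be estimated by averaging log-derivatives along a trajectory. This is a sketch for the logistic map (whose LE at r = 4 is known to be ln 2), not the paper's nonparametric-regression estimator:

```python
import math

def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x)."""
    return r * x * (1.0 - x)

def dominant_le(x0=0.2, r=4.0, n=100_000, burn_in=1_000):
    """Estimate the dominant Lyapunov exponent as the trajectory average of
    log|f'(x)|, where f'(x) = r*(1 - 2x) for the logistic map."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = logistic(x, r)
    total = 0.0
    for _ in range(n):
        # guard against the measure-zero event x == 0.5 (zero derivative)
        total += math.log(max(abs(r * (1.0 - 2.0 * x)), 1e-300))
        x = logistic(x, r)
    return total / n
```

A positive estimate (here approximately ln 2 ≈ 0.693) is the operational signature of chaos the abstract mentions; the paper's harder problem is getting such an estimate from short, noisy observational data.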
Extension of Fill’s perfect rejection sampling algorithm to general chains (extended abstract)
 Pages 37–52 in Monte Carlo Methods
, 2000
Abstract

Cited by 47 (14 self)
By developing and applying a broad framework for rejection sampling using auxiliary randomness, we provide an extension of the perfect sampling algorithm of Fill (1998) to general chains on quite general state spaces, and describe how the use of bounding processes can ease the computational burden. Along the way, we unearth a simple connection between the Coupling From The Past (CFTP) algorithm originated by Propp and Wilson (1996) and our extension of Fill’s algorithm. Key words and phrases. Fill’s algorithm, Markov chain Monte Carlo, perfect sampling, exact sampling, rejection sampling, interruptibility, coupling from the past, read-once coupling from the past, monotone transition rule, realizable monotonicity, stochastic monotonicity, partially ordered set, coalescence, imputation.
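The CFTP side of this connection can be illustrated for a monotone chain, where it suffices to couple the top and bottom states. This sketch uses a reflected random walk (an illustrative chain, not one from the paper) and follows the Propp–Wilson recipe of reusing fixed randomness while doubling the backward time horizon:

```python
import random

def monotone_update(x, u, m=10, p=0.5):
    """One monotone transition of a reflected random walk on {0, ..., m}:
    step up when u < p, otherwise step down (sticking at the boundaries)."""
    return min(x + 1, m) if u < p else max(x - 1, 0)

def cftp(m=10, p=0.5, seed=0):
    """Propp-Wilson coupling from the past for a monotone chain.  Returns an
    exact draw from the stationary distribution, which for this symmetric
    reflected walk is uniform on {0, ..., m}."""
    rng = random.Random(seed)
    us = []          # us[k] is the fixed randomness used at time -(k+1)
    t_back = 1
    while True:
        while len(us) < t_back:
            us.append(rng.random())
        top, bottom = m, 0
        # run the two extreme states from time -t_back up to time 0,
        # reusing the SAME randomness at each time point
        for t in range(t_back, 0, -1):
            u = us[t - 1]
            top = monotone_update(top, u, m, p)
            bottom = monotone_update(bottom, u, m, p)
        if top == bottom:    # by monotonicity, every state has coalesced
            return top
        t_back *= 2          # not coalesced: restart from further in the past
```

Fill's algorithm, by contrast, is interruptible (rejection-based), which is the distinction the extended framework of the paper unifies.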
Perfect simulation and backward coupling
 Comm. Statist. Stochastic Models
, 1998
Recent Results About Stable Ergodicity
 In Smooth ergodic theory and its applications
, 2000
Abstract

Cited by 33 (9 self)
… this paper, has been directed toward extending their results beyond Axiom A.
Ergodic Theorems for Markov chains represented by Iterated Function Systems
 BULL. POLISH ACAD. SCI. MATH
, 1998
Abstract

Cited by 28 (2 self)
We consider Markov chains represented in the form X_{n+1} = f(X_n, I_n), where {I_n} is a sequence of independent, identically distributed (i.i.d.) random variables, and where f is a measurable function. Any Markov chain {X_n} on a Polish state space may be represented in this form, i.e., can be considered as arising from an iterated function system (IFS). A distributional ergodic theorem, including rates of convergence in the Kantorovich distance, is proved for Markov chains under the condition that an IFS representation is "stochastically contractive" and "stochastically bounded". We apply this result to prove our main theorem giving upper bounds for distances between invariant probability measures for iterated function systems. We also give some examples indicating how ergodic theorems for Markov chains may be proved by finding contractive IFS representations. These ideas are applied to some Markov chains arising from iterated function systems with place-dependent probabilities.
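The representation X_{n+1} = f(X_n, I_n) and the contraction condition can be made concrete with an AR(1) chain; this is a sketch under illustrative parameters, not the paper's construction. Coupling two chains through the same inputs exhibits exactly the geometric rate that the Kantorovich-distance theorem bounds:

```python
import random

A = 0.5  # common Lipschitz constant of the maps f(., i); A < 1 gives contraction

def f(x, i):
    """The AR(1) chain X_{n+1} = A*X_n + I_n written as an IFS map f(X_n, I_n)."""
    return A * x + i

def coupled_distance(x0, y0, n, seed=0):
    """Drive two chains from different starting points with the SAME i.i.d.
    inputs I_n.  The gap contracts by the factor A at every step, so it
    equals A**n * |x0 - y0|: the geometric rate of convergence toward the
    unique invariant measure."""
    rng = random.Random(seed)
    x, y = x0, y0
    for _ in range(n):
        i = rng.gauss(0.0, 1.0)      # one shared i.i.d. input I_n
        x, y = f(x, i), f(y, i)
    return abs(x - y)
```

Since the noise cancels in the difference, the coupled gap is deterministic here; "stochastically contractive" IFS representations only require such contraction on average.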
Equilibria in financial markets with heterogeneous agents: A probabilistic perspective
 J. Math. Econ
, 2004
Abstract

Cited by 28 (3 self)
We analyse financial market models in which agents form their demand for an asset on the basis of their forecasts of future prices and where their forecasting rules may change over time, as a result of the influence of other traders. Agents will switch from one rule to another stochastically, and the price and profits process will reflect these switches. Among the possible rules are “chartist” or extrapolatory rules. Prices can exhibit transient behaviour when chartists predominate. However, if the probability that an agent will switch to being a “chartist” is not too high then the process does not explode. There are occasional bubbles but they inevitably burst. In fact, we prove that the limit distribution of the price process exists and is unique. This limit distribution may be thought of as the appropriate equilibrium notion for such markets. A number of characteristics of financial time series can be captured by this sort of model. In particular, the presence of chartists fattens the tails of the stationary distribution. JEL subject classification: C62, D84, G12. Key words: financial markets, stochastic price processes, limit distributions, forecasting rules
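A toy random-coefficient price recursion captures the mechanism described here (all names and parameter values are illustrative, not the paper's model): chartists extrapolate with a multiplier above one, producing transient bubbles, while a negative average log-multiplier keeps the process from exploding and yields a unique, fat-tailed limit distribution:

```python
import random

def simulate_price(n=20_000, p_chartist=0.05, seed=0):
    """Toy random-coefficient price process.  Each period the marginal trader
    is a chartist with probability p_chartist (multiplier 1.2 > 1: bubbles
    inflate) and a mean-reverting fundamentalist otherwise (multiplier 0.7).
    Since E[log multiplier] = 0.05*log(1.2) + 0.95*log(0.7) < 0, the process
    is stochastically stable with a unique stationary distribution, whose
    tails the rare expansive steps fatten."""
    rng = random.Random(seed)
    p = 0.0
    path = []
    for _ in range(n):
        a = 1.2 if rng.random() < p_chartist else 0.7
        p = a * p + rng.gauss(0.0, 1.0)   # bubbles grow while a > 1 recurs, then burst
        path.append(p)
    return path
```

Processes of this random-coefficient form are known to have power-law stationary tails, which is one way the "fattened tails" in the abstract can arise.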
Stationary Markov Equilibria
 Econometrica
, 1994
Abstract

Cited by 23 (0 self)
We establish conditions which (in various settings) guarantee the existence of equilibria described by ergodic Markov processes with a Borel state space S. Let 𝒫(S) denote the probability measures on S, and let s ↦ G(s) ⊂ 𝒫(S) be a (possibly empty-valued) correspondence with closed graph characterizing intertemporal consistency, as prescribed by some particular model. A nonempty measurable set J ⊂ S is self-justified if G(s) ∩ 𝒫(J) is not empty for all s ∈ J. A time-homogeneous Markov equilibrium (THME) for G is a self-justified set J and a measurable selection Π : J → 𝒫(J) from the restriction of G to J. The paper gives sufficient conditions for the existence of compact self-justified sets, and applies the theorem: if G is convex-valued and has a compact self-justified set, then G has a THME with an ergodic measure. The applications are (i) stochastic overlapping generations equilibria, (ii) an extension of the Lucas (1978) asset market equilibrium model to the case of heterogeneous agents, and (iii) equilibria for discounted stochastic games with uncountable state spaces.