Results 11-20 of 158
How noise matters
 Games and Economic Behavior
, 2003
Abstract

Cited by 27 (1 self)
Recent advances in evolutionary game theory have employed stochastic processes of noise in decision-making to select in favor of certain equilibria in coordination games. Noisy decision-making is justified on bounded rationality grounds, and consequently the sources of noise are left unmodelled. This methodological approach can only be successful if the results do not depend too much on the nature of the noise process. This paper investigates invariance to noise of the equilibrium selection results, both for the random matching paradigm that has characterized much of the recent literature and for a larger class of two-strategy population games where payoffs may vary nonlinearly with the distribution of strategies among the population. Several parametrizations of noise reduction are investigated. The results show that a symmetry property of the noise process and (in the case of nonlinear payoffs) bounds on the asymmetry of the payoff functions suffice to preserve the selection results of the evolutionary literature. JEL Classification: C78
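One standard way such noisy decision-making is modelled in this literature is the perturbed best-response (logit) rule. The sketch below is a generic illustration of that rule, not the specific noise process studied in the paper; the function name and the noise parameter `eta` are our own:

```python
import math

def logit_choice(u_a, u_b, eta):
    """Logit (perturbed best-response) choice probability of action a.

    As eta -> 0 the rule approaches exact best response; larger eta
    means noisier decisions. This is a common parametrization of noise
    reduction, not the paper's own.
    """
    za = math.exp(u_a / eta)
    zb = math.exp(u_b / eta)
    return za / (za + zb)

# Equal payoffs give a fifty-fifty choice; a payoff advantage with
# little noise makes the better action almost certain.
```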
Beta-coalescents and continuous stable random trees
, 2006
Abstract

Cited by 23 (8 self)
Coalescents with multiple collisions, also known as Λ-coalescents, were introduced by Pitman and Sagitov in 1999. These processes describe the evolution of particles that undergo stochastic coagulation in such a way that several blocks can merge at the same time to form a single block. In the case that the measure Λ is the Beta(2 − α, α) distribution, they are also known to describe the genealogies of large populations where a single individual can produce a large number of offspring. Here we use a recent result of Birkner et al. to prove that Beta-coalescents can be embedded in continuous stable random trees, about which much is known due to recent progress of Duquesne and Le Gall. Our proof is based on a construction of the Donnelly-Kurtz lookdown process using continuous random trees, which is of independent interest. This produces a number of results concerning the small-time behavior of Beta-coalescents. Most notably, we recover an almost sure limit theorem of the authors for the number of blocks at small times, and give the multifractal spectrum corresponding to the emergence of blocks with atypical size. Also, we are able to find exact asymptotics for sampling formulae corresponding to the site frequency spectrum and allele frequency spectrum associated with mutations in the context of population genetics.
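For reference, Pitman's formula gives the merger rates of a Λ-coalescent: with b blocks present, each fixed k-tuple of blocks merges at rate

```latex
\lambda_{b,k} \;=\; \int_0^1 x^{k-2}(1-x)^{b-k}\,\Lambda(\mathrm{d}x),
\qquad 2 \le k \le b,
```

and in the Beta case the measure is the usual Beta density,

```latex
\Lambda(\mathrm{d}x) \;=\; \frac{x^{1-\alpha}(1-x)^{\alpha-1}}{B(2-\alpha,\alpha)}\,\mathrm{d}x,
\qquad \alpha \in (0,2).
```

The embedding into stable random trees discussed in the abstract concerns the regime α ∈ (1, 2).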
Some Probabilistic Aspects Of Set Partitions
 American Mathematical Monthly
, 1996
Abstract

Cited by 22 (2 self)
In this paper, Section 1.2 offers an elementary combinatorial proof of Dobinski's formula which seems simpler than other proofs in the literature (Rota [35]; Berge [5], p. 44; Comtet [9], p. 211). This argument involves identities whose probabilistic interpretations are brought out later in the paper.
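Dobinski's formula states that the n-th Bell number (the number of partitions of an n-element set) satisfies B_n = e^{-1} Σ_{k≥0} k^n / k!. A quick numerical check by truncating the series (our own sketch, not the paper's combinatorial proof):

```python
import math

def bell_dobinski(n, terms=100):
    """Approximate the n-th Bell number via Dobinski's formula:
    B_n = (1/e) * sum_{k>=0} k^n / k!.

    The factorial in the denominator makes the truncated tail
    negligible for small n, so rounding recovers the exact integer.
    """
    return sum(k**n / math.factorial(k) for k in range(terms)) / math.e

# The first Bell numbers are 1, 1, 2, 5, 15, 52, 203.
```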
ERGODIC THEORY of DIFFERENTIABLE DYNAMICAL SYSTEMS
, 1995
Abstract

Cited by 20 (2 self)
Ergodic Theory. This section contains most of the ergodic theory background needed for these notes. A suitable reference for Sections 1.1 and 1.2 is [Wa]. I also like [S4]. For Section 1.3 see [Ro1] and [Ro2]. 1.1 Basic notions. Let (X, B, μ) be a probability space, i.e. X is a set, B is a σ-algebra of subsets of X, and μ is a measure on (X, B) such that μ(X) = 1. If (X_i, B_i, μ_i), i = 1, 2, are probability spaces, a mapping T : X_1 → X_2 is called measurable if for every A ∈ B_2 we have T⁻¹A ∈ B_1. A measurable mapping T is said to be measure preserving if for every A ∈ B_2, μ_1(T⁻¹A) = μ_2(A). We say that T is an invertible measure preserving transformation if T is bijective and both T and T⁻¹ are measure preserving. We use the notation T : (X, B, μ) → (X, B, μ) to denote a measure preserving transformation (henceforth abbreviated as mpt) of a probability space to itself. Mention of B is suppressed when the ...
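The measure-preserving condition μ(T⁻¹A) = μ(A) can be checked exhaustively on a toy finite probability space. The example below (a rotation of four points under the uniform measure) is our own illustration, not taken from the notes:

```python
from fractions import Fraction
from itertools import chain, combinations

# X = {0, 1, 2, 3} with the uniform measure; T rotates the points.
X = list(range(4))
mu = {x: Fraction(1, 4) for x in X}

def T(x):
    return (x + 1) % 4

def measure(A):
    return sum(mu[x] for x in A)

def preimage(A):
    # T^{-1} A = {x : T(x) in A}
    return {x for x in X if T(x) in A}

def is_measure_preserving():
    # Verify mu(T^{-1} A) == mu(A) for every subset A of X.
    subsets = chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))
    return all(measure(preimage(set(A))) == measure(set(A)) for A in subsets)
```

Since T is a bijection and the measure is uniform, every subset pulls back to a subset of the same size, so the check succeeds.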
On the Dynamics and Performance of Stochastic Fluid Systems
, 1998
Abstract

Cited by 19 (5 self)
A (generalized) stochastic fluid system Q is defined as the one-dimensional Skorokhod reflection of a finite variation process X (with possibly discontinuous paths). We write X as the (not necessarily minimal) difference of two positive measures, A, B, and prove an alternative "integral representation" for Q. This representation forms the basis for deriving a "Little's law" for an appropriately constructed stationary version of Q. For the special case where B is the Lebesgue measure, a distributional version of Little's law is derived. This is done both at the arrival and departure points of the system. The latter result necessitates the consideration of a "dual process" to Q. Examples of models for X, including finite variation Lévy processes with countably many jumps on finite intervals, are given in order to illustrate the ideas and point out potential applications in performance evaluation. Keywords: RANDOM MEASURES, STATIONARY PROCESSES, PALM PROBABILITIES, QUEUEING THEORY, LIT...
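In discrete time, the one-dimensional Skorokhod reflection reduces to the familiar Lindley recursion: the reflected path stays nonnegative by adding just enough pushing at zero. The sketch below illustrates that map only; it is not the paper's generalized construction, and the function name is ours:

```python
def reflect(increments, q0=0.0):
    """Discrete one-dimensional Skorokhod reflection (Lindley recursion).

    Given increments dx_n of a driving process X, the reflected
    content satisfies q_n = max(0, q_{n-1} + dx_n), which keeps the
    path nonnegative at all times.
    """
    q = q0
    path = [q]
    for dx in increments:
        q = max(0.0, q + dx)
        path.append(q)
    return path

# Example: an inflow of 1, an outflow of 2 (truncated at empty),
# then an inflow of 0.5.
# reflect([1.0, -2.0, 0.5]) gives [0.0, 1.0, 0.0, 0.5]
```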
Spike-Timing-Dependent Plasticity and Relevant Mutual Information Maximization
, 2003
Abstract

Cited by 17 (0 self)
Synaptic plasticity was recently shown to depend on the relative timing of the pre- and postsynaptic spikes. This article analytically derives a spike-dependent learning rule based on the principle of information maximization for a single neuron with spiking inputs. This rule is then transformed into a biologically feasible rule, which is compared to the experimentally observed plasticity. This comparison reveals that the biological rule increases information to a near-optimal level and provides insights into the structure of biological plasticity. It shows that the time dependency of synaptic potentiation should be determined by the synaptic transfer function and membrane leak. Potentiation consists of weight-dependent and weight-independent components whose weights are of the same order of magnitude. It further suggests that synaptic depression should be triggered by rare and relevant inputs but at the same time serves to unlearn the baseline statistics of the network's inputs. The optimal depression curve is uniformly extended in time, but biological constraints that cause the cell to forget past events may lead to a different shape, which is not specified by our current model. The structure of the optimal rule thus suggests a computational account for several temporal characteristics of the biological spike-timing-dependent rules.
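For orientation, the experimentally observed plasticity referred to above is often summarized by an exponential STDP window. The sketch below is that textbook curve, not the information-maximizing rule derived in the article; the parameter names and default values are purely illustrative:

```python
import math

def stdp_window(dt, a_plus=1.0, a_minus=0.5, tau_plus=20.0, tau_minus=20.0):
    """Textbook exponential STDP window (illustrative parameters).

    dt = t_post - t_pre in milliseconds. Potentiation when the
    presynaptic spike precedes the postsynaptic one (dt > 0),
    depression otherwise; both branches decay exponentially with
    the timing difference.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)
```

The sign asymmetry (potentiation for causal pairings, depression for anti-causal ones) is the temporal characteristic the article's optimal rule is compared against.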
Consumption-based asset pricing with higher cumulants, manuscript
, 2008
Abstract

Cited by 17 (6 self)
I extend the Epstein-Zin lognormal consumption-based asset-pricing model to allow for general i.i.d. consumption growth processes. Information about the higher moments (equivalently, cumulants) of consumption growth is encoded in the cumulant-generating function (CGF). I express four observable quantities (the equity premium, the riskless rate, the consumption-wealth ratio, and mean consumption growth) and the Hansen-Jagannathan bound in terms of the CGF, and present applications. Models in which consumption is subject to occasional disasters can be handled easily and flexibly within the framework. The importance of higher cumulants is a double-edged sword: those model parameters which are most important for asset prices, such as disaster parameters, are also the hardest to calibrate. It is therefore desirable to make statements which do not depend on a particular calibrated consumption process. First, I use properties of the CGF to derive restrictions on the time-preference rate and elasticity of intertemporal substitution that must hold in any Epstein-Zin i.i.d. model which is consistent with the observable quantities. Second, I show that "good deal" bounds on the maximal Sharpe ...
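The cumulant-generating function of log consumption growth g is c(θ) = log E[e^{θg}]; for normal g it reduces to θμ + θ²σ²/2, and higher cumulants appear as the higher-order terms of its Taylor expansion. A minimal empirical estimator, with our own function name, might look like:

```python
import math

def cgf(theta, samples):
    """Empirical cumulant-generating function c(theta) = log E[exp(theta*g)],
    estimated by a sample average over i.i.d. draws of log growth g.

    For heavy-tailed (e.g. disaster-prone) samples this estimate is
    dominated by rare large draws, which is the calibration difficulty
    the abstract alludes to.
    """
    n = len(samples)
    return math.log(sum(math.exp(theta * g) for g in samples) / n)

# For a degenerate sample where g is constant, c(theta) = theta * g exactly.
```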
A probabilistic analysis of some tree algorithms
 Annals of Applied Probability
, 2005
Abstract

Cited by 16 (5 self)
In this paper a general class of tree algorithms is analyzed. It is shown that, by using an appropriate probabilistic representation of the quantities of interest, the asymptotic behavior of these algorithms can be obtained quite easily without resorting to the usual complex analysis techniques. This approach gives a unified probabilistic treatment of these questions. It simplifies and extends some of the results known in this domain.
Quantitative models for Operational Risk: Extremes, dependence and aggregation
 Journal of Banking and Finance
, 2006
Abstract

Cited by 15 (7 self)
• Basel II (Banking) and Solvency 2 (Insurance)
• AMA approach to Operational Risk
Our contribution: