Results 1–6 of 6
Uniform concentration inequality for ergodic diffusion processes. Stochastic Processes and their Applications 117 (2007)
, 2011
Abstract

Cited by 5 (2 self)
In this paper a concentration inequality is proved for the deviation in the ergodic theorem in the case of discrete time observations of diffusion processes. The proof is based on the geometric ergodicity property for diffusion processes. As an application we consider the nonparametric pointwise estimation problem for the drift coefficient under discrete time observations.
Rate of convergence of penalized likelihood context tree estimators
, 2007
Abstract

Cited by 5 (1 self)
Abstract: We find upper bounds for the probability of error of penalized likelihood context tree estimators, including the well-known Bayesian Information Criterion (BIC). Our bounds are all explicit and apply to trees of bounded and unbounded depth. We show that the maximal decay for the probability of error can be achieved with a penalizing term of the form n^α, where n is the sample size and 0 < α < 1. As a consequence we obtain a strong consistency result for this penalizing term.
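To make the penalizing term concrete, here is a rough illustrative sketch (not the paper's code; the function names, the binary alphabet, and the toy data are assumptions) of a penalized-likelihood score for a candidate context set, with a penalty of the form n^α per context:

```python
import math
from collections import Counter, defaultdict

def log_likelihood(seq, contexts):
    """Maximum log-likelihood of seq given a context set: each symbol is
    predicted from the longest suffix of its past that is a context."""
    counts = defaultdict(Counter)           # context -> next-symbol counts
    max_len = max(len(w) for w in contexts)
    for i in range(max_len, len(seq)):
        past = seq[i - max_len:i]
        # longest suffix of the past that belongs to the context set;
        # assumes `contexts` is a complete suffix set, so a match exists
        ctx = next(past[-k:] for k in range(max_len, 0, -1)
                   if past[-k:] in contexts)
        counts[ctx][seq[i]] += 1
    ll = 0.0
    for c in counts.values():
        n_ctx = sum(c.values())
        for n in c.values():
            ll += n * math.log(n / n_ctx)   # ML transition probabilities
    return ll

def penalized_score(seq, contexts, alpha=0.5):
    """Penalized-likelihood score with a penalty of the form n^alpha
    per context, 0 < alpha < 1, as in the abstract above."""
    return log_likelihood(seq, contexts) - (len(seq) ** alpha) * len(contexts)
```

Maximizing this score over candidate context trees gives a BIC-like estimator; the abstract's result is that any exponent α in (0, 1) achieves the maximal decay of the probability of error.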
Stochastic chains with memory of variable length. In Festschrift for Jorma Rissanen (Grünwald et al., eds.), TICSP Series 38:117–133
, 2008
Abstract

Cited by 4 (0 self)
Dedicated to Jorma Rissanen on his 75th birthday. Stochastic chains with memory of variable length constitute an interesting family of stochastic chains of infinite order on a finite alphabet. The idea is that for each past, only a finite suffix of the past, called the context, is enough to predict the next symbol. These models were first introduced in the information theory literature by Rissanen (1983) as a universal tool to perform data compression. Recently, they have been used to model scientific data in areas as different as biology, linguistics and music. This paper presents a personal introductory guide to this class of models, focusing on the algorithm Context and its rate of convergence.
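As a toy illustration of the context idea (purely illustrative, not from the paper: the model, its probabilities, and the function names are made up), here is a variable-length model over {0, 1} in which a past ending in 0 needs only one symbol of memory, while a past ending in 1 needs two:

```python
import random

# Hypothetical toy model: each key is a context, i.e. the last len(key)
# symbols of the past; the value is P(next symbol = 1 | context).
# {"0", "01", "11"} is a complete suffix set: every past matches one key.
MODEL = {"0": 0.2, "01": 0.7, "11": 0.5}
MAX_DEPTH = max(len(c) for c in MODEL)

def context_of(past):
    """Shortest suffix of `past` that is a context of MODEL."""
    for k in range(1, MAX_DEPTH + 1):
        if past[-k:] in MODEL:
            return past[-k:]
    raise ValueError("past matches no context")

def generate(n, seed=0):
    """Sample n symbols from the chain, reading only the relevant suffix."""
    rng = random.Random(seed)
    seq = "00"  # arbitrary initial past
    for _ in range(n):
        seq += "1" if rng.random() < MODEL[context_of(seq)] else "0"
    return seq[2:]
```

For example, `context_of("0011")` returns `"11"`: only two symbols of the (possibly very long) past are needed to predict the next one, which is exactly the memory-of-variable-length idea.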
ADJUSTMENT COEFFICIENT FOR RISK PROCESSES IN SOME DEPENDENT CONTEXTS
, 901
Abstract

Cited by 1 (0 self)
Abstract. Following [18], we study the adjustment coefficient of ruin theory in a context of temporal dependency. We provide a consistent estimator of this coefficient and perform some simulations. The adjustment coefficient w for risk processes may describe the behavior of the ruin probability. Several results for sums of i.i.d. claims exist: in [12], H.U. Gerber gave an exact formula for finite time ruin probabilities involving the adjustment coefficient w, [19] provides a consistent estimator of w, and V. Mammisch [15] gave a necessary and sufficient condition for the existence of w. In dependent contexts, let us cite H.U. Gerber [13] for autoregressive processes, [2] for an extension to ARMA processes, and [3, 4] for the study of adjustment coefficients in Markovian environments. The main objective of the paper is to provide a nonparametric estimation of the adjustment coefficient introduced in [18] in dependent contexts. We give a general dependent context (weak temporal dependency in the sense of [7]) for which our estimator is consistent. The paper is organized as follows:
• Section 1 contains the definitions and elementary properties of weakly dependent processes as well as the adjustment coefficient. In short, w_i, the independent coefficient, will be the adjustment coefficient if the process is i.i.d., while w_d will be the adjustment coefficient of a dependent sequence.
• In Section 2, we prove that w_d may be seen as a limit (for r → ∞) of independent coefficients w_i^r. We also provide some general examples for which the adjustment coefficient w_d may be defined.
• Section 3 is devoted to the estimation of the coefficients w_i and w_d and contains the main results: we construct consistent estimators (see Theorems 3.3, 3.5 and 3.10). Note that in [2], an estimation of w_d is given for ARMA processes which is based on the estimation of the ARMA parameters; our procedure is completely nonparametric.
• In Section 4 we provide some simulations.
1. Setting. We consider a sequence of random variables (Y_n)_{n∈N} and R_u the event {Y_n > u for some n ≥ 1}. Y_n is interpreted as the value of the claim surplus
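In the i.i.d. case, the adjustment coefficient w_i is the positive root of E[e^{w X}] = 1, and a natural consistent estimator replaces the expectation with an empirical mean (in the spirit of the estimator cited as [19]). A minimal sketch, assuming increments X_k = claim − premium with negative mean and some positive values (the function name, the bisection scheme, and the toy Exp(1)/premium-1.5 model are our assumptions, not the paper's):

```python
import math
import random

def estimate_adjustment_coefficient(increments, tol=1e-8):
    """Positive root w of the empirical equation (1/n) * sum exp(w * X_k) = 1.
    Assumes mean(increments) < 0 and max(increments) > 0, so the empirical
    function below is negative near 0 and eventually becomes positive."""
    def f(w):
        return sum(math.exp(w * x) for x in increments) / len(increments) - 1.0
    lo, hi = 1e-6, 1.0
    while f(hi) < 0:            # widen the bracket until the root is enclosed
        hi *= 2
    while hi - lo > tol:        # plain bisection on the bracketed root
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Toy run: Exp(1) claims, premium 1.5 per period, so X_k = C_k - 1.5;
# the true coefficient then solves (1 - w) = exp(-1.5 w), roughly w ≈ 0.58.
random.seed(0)
xs = [random.expovariate(1.0) - 1.5 for _ in range(5000)]
w_hat = estimate_adjustment_coefficient(xs)
```

The paper's contribution is precisely that this i.i.d. picture has to be modified under weak temporal dependence, where w_d is reached as a limit of such independent coefficients.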
RATE OF CONVERGENCE OF PENALIZED-LIKELIHOOD CONTEXT TREE ESTIMATORS
, 2007
Abstract
Abstract. We find upper bounds for the probability of error of penalized-likelihood-type context tree estimators, where the trees are not assumed to be finite. These estimators include the well-known Bayesian Information Criterion (BIC). We show that the maximal decay for the probability of error can be achieved with a penalizing term of the form n^α, with 0 < α < 1.
Author manuscript, published in "Festschrift in honour of the 75th birthday of Jorma Rissanen (2008) 329–463" STOCHASTIC CHAINS WITH MEMORY OF VARIABLE LENGTH
, 2013
Abstract
Abstract. Stochastic chains with memory of variable length constitute an interesting family of stochastic chains of infinite order on a finite alphabet. The idea is that for each past, only a finite suffix of the past, called the context, is enough to predict the next symbol. These models were first introduced in the information theory literature by Rissanen (1983) as a universal tool to perform data compression. Recently, they have been used to model scientific data in areas as different as biology, linguistics and music. This paper presents a personal introductory guide to this class of models, focusing on the algorithm Context and its rate of convergence.