Results 1–10 of 191
A Nonparametric Model of Term Structure Dynamics and the Market Price of Interest Rate Risk
, 1997
Cited by 126 (5 self)
This article presents a technique for nonparametrically estimating continuous-time diffusion processes that are observed at discrete intervals. We illustrate the methodology by using daily three- and six-month Treasury Bill data, from January 1965 to July 1995, to estimate the drift and diffusion of the short rate, and the market price of interest rate risk. While the estimated diffusion is similar to that estimated by Chan, Karolyi, Longstaff and Sanders (1992), there is evidence of substantial nonlinearity in the drift: it is close to zero for low and medium interest rates, but mean reversion increases sharply at higher interest rates.
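The core idea above, recovering drift and diffusion from conditional moments of discrete increments, can be sketched with a first-order Nadaraya-Watson estimator. This is an illustrative simplification, not the paper's exact estimator; the function name, evaluation grid, bandwidth h, and the simulated Ornstein-Uhlenbeck data are all assumptions for the demo.

```python
import numpy as np

def nw_drift_diffusion(r, dt, grid, h):
    """First-order kernel estimates of drift mu(r) and diffusion sigma^2(r)
    from a discretely sampled path r_0..r_n: smooth the first and second
    moments of the increments dr conditional on the level, then divide by dt."""
    dr = np.diff(r)
    x = r[:-1]
    mu, sig2 = [], []
    for g in grid:
        w = np.exp(-0.5 * ((x - g) / h) ** 2)  # Gaussian kernel weights
        w /= w.sum()
        mu.append(np.sum(w * dr) / dt)         # E[dr | r=g] / dt  -> drift
        sig2.append(np.sum(w * dr ** 2) / dt)  # E[dr^2 | r=g] / dt -> sigma^2
    return np.array(mu), np.array(sig2)

# Simulated mean-reverting short rate as stand-in data (not Treasury Bills)
rng = np.random.default_rng(0)
dt, n = 1 / 250, 20000
r = np.empty(n)
r[0] = 0.06
for t in range(1, n):
    r[t] = r[t - 1] + 2.0 * (0.06 - r[t - 1]) * dt \
           + 0.02 * np.sqrt(dt) * rng.standard_normal()
grid = np.linspace(0.04, 0.08, 9)
mu_hat, sig2_hat = nw_drift_diffusion(r, dt, grid, h=0.005)
```

On this toy path the estimated drift is positive below the long-run mean and negative above it, the qualitative mean-reversion pattern the paper investigates nonparametrically.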
Variable Length Markov Chains
 Annals of Statistics
, 1999
Cited by 85 (5 self)
We study estimation in the class of stationary variable length Markov chains (VLMC) on a finite space. The processes in this class are still Markovian of higher order, but with memory of variable length, yielding a much bigger and structurally richer class of models than ordinary higher-order Markov chains. From a more algorithmic view, the VLMC model class has attracted interest in information theory and machine learning, but its statistical properties have not been explored very much. Provided that good estimation is available, the additional structural richness of the model class enhances predictive power by finding a better tradeoff between model bias and variance, and allows a better structural description, which can be of specific interest. The latter is exemplified with some DNA data. A version of the tree-structured context algorithm, proposed by Rissanen (1983) in an information-theoretic setup, is shown to have new good asymptotic properties for estimation in the class of VLMCs, even when the underlying model increases in dimensionality: consistent estimation of minimal state spaces and mixing properties of fitted models are given. We also propose a new bootstrap scheme based on fitted VLMCs. We show its validity for quite general stationary categorical time series and for a broad range of statistical procedures.

AMS 1991 subject classifications: primary 62M05; secondary 60J10, 62G09, 62M10, 94A15.
Key words and phrases: bootstrap, categorical time series, central limit theorem, context algorithm, data compression, finite-memory sources, FSMX model, Kullback-Leibler distance, model selection, tree model.
Short title: Variable Length Markov Chains. Research supported in part by the Swiss National Science Foundation.
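A toy rendition of the context-algorithm idea makes the "memory of variable length" concrete: grow all contexts up to some depth, then keep a longer context only if its next-symbol distribution differs enough from its parent's. The function names, the add-half smoothing, and the KL-based pruning cutoff are hypothetical simplifications of Rissanen's procedure, not the paper's tuned version.

```python
import numpy as np
from collections import defaultdict

def fit_contexts(seq, alphabet, max_depth=3, cutoff=2.0):
    """Toy context algorithm: count next-symbol frequencies for every
    context up to max_depth, then keep a child context only if the scaled
    KL divergence from its parent's conditional distribution exceeds cutoff."""
    counts = defaultdict(lambda: defaultdict(int))
    for t in range(len(seq)):
        for d in range(max_depth + 1):
            if t - d < 0:
                break
            counts[tuple(seq[t - d:t])][seq[t]] += 1  # last d symbols before t

    def dist(ctx):
        c = counts[ctx]
        n = sum(c.values())
        # add-1/2 smoothing so KL is always finite
        return {a: (c[a] + 0.5) / (n + 0.5 * len(alphabet)) for a in alphabet}, n

    kept = {()}                                   # root context always kept
    for ctx in sorted(counts, key=len):           # parents before children
        if len(ctx) == 0 or ctx[1:] not in kept:  # ctx[1:] drops oldest symbol
            continue
        p, n = dist(ctx)
        q, _ = dist(ctx[1:])
        kl = sum(p[a] * np.log(p[a] / q[a]) for a in alphabet)
        if n * kl > cutoff:
            kept.add(ctx)
    return kept

# Binary chain with strong first-order dependence: length-1 contexts matter
rng = np.random.default_rng(7)
seq = [0]
for _ in range(2000):
    p1 = 0.9 if seq[-1] == 0 else 0.1
    seq.append(1 if rng.random() < p1 else 0)
tree = fit_contexts(seq, alphabet=(0, 1))
```

For this first-order chain the fitted tree retains the two length-1 contexts and prunes most deeper ones, which is exactly the variable-length memory structure a VLMC encodes.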
The bootstrap
 In Handbook of Econometrics
, 2001
Cited by 75 (1 self)
The bootstrap is a method for estimating the distribution of an estimator or test statistic by resampling one's data. It amounts to treating the data as if they were the population for the purpose of evaluating the distribution of interest. Under mild regularity conditions, the bootstrap yields an approximation to the distribution of an estimator or test statistic that is at least as accurate as the ...
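The resampling principle described above, in its simplest i.i.d. form, fits in a few lines: treat the sample as the population, redraw from it with replacement, and recompute the statistic. The function name and the exponential stand-in data are our assumptions for the illustration.

```python
import numpy as np

def bootstrap_se(data, stat, b=2000, seed=0):
    """Estimate the sampling standard deviation of stat(data) by
    resampling the data with replacement b times."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = [stat(data[rng.integers(0, n, n)]) for _ in range(b)]
    return np.std(reps, ddof=1)

x = np.random.default_rng(1).exponential(size=200)
se_boot = bootstrap_se(x, np.mean)
se_formula = x.std(ddof=1) / np.sqrt(len(x))  # analytic SE of the mean, for comparison
```

For the sample mean the bootstrap and the textbook formula agree closely; the bootstrap's value is that it applies equally to statistics with no convenient analytic standard error.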
Bootstraps for Time Series
, 1999
Cited by 56 (4 self)
We compare and review block, sieve, and local bootstraps for time series and thereby illuminate theoretical facts as well as performance on finite-sample data. Our (re)view is selective, with the intention to get a new and fair picture of some particular aspects of bootstrapping time series. The generality of the block bootstrap is contrasted with sieve bootstraps. We discuss implementational advantages and disadvantages, and argue that two types of sieves outperform the block method, each of them in its own important niche, namely linear and categorical processes, respectively. Local bootstraps, designed for nonparametric smoothing problems, are easy to use and implement but exhibit in some cases low performance.

Key words and phrases: autoregression, block bootstrap, categorical time series, context algorithm, double bootstrap, linear process, local bootstrap, Markov chain, sieve bootstrap, stationary process.
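Of the schemes compared above, the sieve bootstrap for linear processes is the easiest to sketch: approximate the series by a finite-order autoregression, then resample its centred residuals to rebuild new series. This is a minimal version with a fixed order p; in practice the order would be chosen by a data-driven criterion such as AIC, and the function name and toy AR(1) data are our assumptions.

```python
import numpy as np

def sieve_bootstrap(x, p=2, seed=0):
    """One sieve-bootstrap replicate: fit an AR(p) by least squares,
    resample the centred residuals i.i.d., and regenerate the series."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # Lag matrix: column j holds x lagged by j+1 relative to y = x[p:]
    X = np.column_stack([x[p - j - 1:n - j - 1] for j in range(p)])
    Z = np.column_stack([np.ones(n - p), X])
    coef, *_ = np.linalg.lstsq(Z, x[p:], rcond=None)
    resid = x[p:] - Z @ coef
    resid -= resid.mean()                       # centre before resampling
    xb = list(x[:p])                            # keep the first p values as a seed
    for t in range(p, n):
        lags = [xb[t - j - 1] for j in range(p)]
        xb.append(coef[0] + np.dot(coef[1:], lags) + rng.choice(resid))
    return np.asarray(xb)

# Toy AR(1) series and one bootstrap replicate
rng = np.random.default_rng(2)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()
xb = sieve_bootstrap(x, p=2)
```

The replicate reproduces the fitted autoregressive dependence exactly, which is why the sieve excels for linear processes while the block bootstrap remains the more general (but cruder) tool.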
Bootstrap Methods in Econometrics: Theory and Numerical Performance
 In Advances in Economics and Econometrics: Theory and Applications, Seventh World Congress, Vol. III
, 1997
Higher-Order Improvements of a Computationally Attractive k-Step Bootstrap for Extremum Estimators
 Econometrica
, 2002
Cowles Foundation Discussion Paper No. 1230
A Three-Step Method for Choosing the Number of Bootstrap Repetitions
 Econometrica
, 2000
Cited by 36 (1 self)
This paper considers the problem of choosing the number of bootstrap repetitions B for bootstrap standard errors, confidence intervals, confidence regions, hypothesis tests, p-values, and bias correction. For each of these problems, the paper provides a three-step method for choosing B to achieve a desired level of accuracy. Accuracy is measured by the percentage deviation of the bootstrap standard error estimate, confidence interval length, test's critical value, test's p-value, or bias-corrected estimate based on B bootstrap simulations from the corresponding ideal bootstrap quantities, for which B = ∞. The results apply quite generally to parametric, semiparametric, and nonparametric models with independent and dependent data. The results apply to the standard nonparametric iid bootstrap, moving block bootstraps for time series data, parametric and semiparametric bootstraps, and bootstraps for regression models based on bootstrapping residuals. Monte Carlo simulations show that the proposed methods work very well.
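The paper's three-step method uses estimator-specific formulas, but the underlying logic can be illustrated with a crude stand-in: a bootstrap standard error based on B draws has relative noise of roughly 1/sqrt(2B), so one can grow B until an approximate 95% percentage deviation falls below a target. The function name, batch size, and normal toy data are hypothetical; this is not the paper's procedure.

```python
import numpy as np

def choose_b_for_se(data, stat, pct_dev=5.0, b_step=250, max_b=20000, seed=0):
    """Add bootstrap repetitions in batches until the approximate 95%
    percentage deviation of the SE estimate from its ideal (B = infinity)
    value, 100 * 1.96 / sqrt(2B), drops below pct_dev percent."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = []
    while len(reps) < max_b:
        reps.extend(stat(data[rng.integers(0, n, n)]) for _ in range(b_step))
        b = len(reps)
        if 100 * 1.96 / np.sqrt(2 * b) < pct_dev:
            break
    return len(reps), np.std(reps, ddof=1)

x = np.random.default_rng(8).normal(size=100)
b_used, se = choose_b_for_se(x, np.mean)
```

With a 5% target this stopping rule lands at B = 1000, the same order of magnitude the paper's tables report for standard errors; the paper's method additionally adapts to the statistic at hand.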
Automatic Block-Length Selection for the Dependent Bootstrap
 Econometric Reviews
, 2004
Cited by 34 (4 self)
We review the different block bootstrap methods for time series, and present them in a unified framework. We then revisit a recent result of Lahiri [Lahiri, S. N. (1999b). Theoretical comparisons of block bootstrap methods, Ann. Statist. ...]
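The moving-block resampling that these methods share can be sketched as follows. The AR(1) toy series and the generic n^(1/3) rule-of-thumb block length are stand-ins; the paper's contribution is an automatic selector that minimises an estimated MSE criterion instead.

```python
import numpy as np

def moving_block_bootstrap(x, block_len, seed=0):
    """One moving-block bootstrap replicate: draw overlapping blocks of
    length block_len uniformly at random and concatenate them, so that
    short-range dependence is preserved within each block."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    return np.concatenate([x[s:s + block_len] for s in starts])[:n]

# Toy AR(1) series; block length from a generic n**(1/3) rule of thumb
rng = np.random.default_rng(3)
x = np.zeros(300)
for t in range(1, 300):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
xb = moving_block_bootstrap(x, block_len=max(1, round(300 ** (1 / 3))))
```

The replicate's quality is sensitive to block_len, which is precisely why data-driven block-length selection matters: too short destroys dependence, too long inflates variance.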
Regular and Modified Kernel-Based Estimators of Integrated Variance: The Case with Independent Noise
, 2004
Cited by 28 (5 self)
We consider kernel-based estimators of integrated variance in the presence of independent market microstructure effects. We derive the bias and variance properties for all regular kernel-based estimators and derive a lower bound for their asymptotic variance. Further, we show that the subsample-based estimator is closely related to a Bartlett-type kernel estimator. The small difference between the two estimators, due to end effects, turns out to be key for the consistency of the subsampling estimator. This observation leads us to a modified class of kernel-based estimators, which are also consistent. We study the efficiency of our new kernel-based procedure. We show that the optimal modified kernel-based estimator converges to the integrated variance at rate m^(1/4), where m is the number of intraday returns.
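The Bartlett-type kernel construction mentioned above, realized variance plus down-weighted return autocovariances, can be sketched on simulated noisy prices. The weight convention, the lag count H, and the noise level are our assumptions for the illustration, not the paper's optimal choices.

```python
import numpy as np

def bartlett_kernel_rv(returns, H):
    """Bartlett-kernel estimator of integrated variance from noisy
    intraday returns: realized variance plus kernel-weighted
    autocovariance corrections that offset the noise-induced bias."""
    r = np.asarray(returns)
    acc = np.sum(r * r)                           # realized variance (lag 0)
    for h in range(1, H + 1):
        w = 1 - h / (H + 1)                       # Bartlett weights
        acc += 2 * w * np.sum(r[:-h] * r[h:])     # lag-h return autocovariance
    return acc

# Efficient price with integrated variance 0.04 plus i.i.d. microstructure noise
rng = np.random.default_rng(5)
m = 23400
p = np.cumsum(rng.standard_normal(m)) * np.sqrt(0.04 / m)
r = np.diff(p + 0.0005 * rng.standard_normal(m))
rv_naive = np.sum(r ** 2)            # biased upward by roughly 2 * m * noise_var
rv_kernel = bartlett_kernel_rv(r, H=30)
```

The naive realized variance overshoots the true 0.04 markedly, while the kernel estimate sits close to it: the negative lag-1 autocovariance induced by the noise cancels most of the bias.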
Bayesian Model Assessment and Comparison Using Cross-Validation Predictive Densities
 Neural Computation
, 2002
Cited by 26 (10 self)
In this work, we discuss practical methods for the assessment, comparison, and selection of complex hierarchical Bayesian models. A natural way to assess the goodness of a model is to estimate its future predictive capability by estimating expected utilities. Instead of just making a point estimate, it is important to obtain the distribution of the expected utility estimate, as it describes the uncertainty in the estimate. The distributions of the expected utility estimates can also be used to compare models, for example by computing the probability of one model having a better expected utility than some other model. We propose an approach using cross-validation predictive densities to obtain expected utility estimates and the Bayesian bootstrap to obtain samples from their distributions. We also discuss the probabilistic assumptions made and properties of two practical cross-validation methods, importance sampling and k-fold cross-validation. As illustrative examples, we use MLP neural networks and Gaussian processes (GP) with Markov chain Monte Carlo sampling in one toy problem and two challenging real-world problems.
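The k-fold estimate of expected utility (here, mean log predictive density) can be sketched for a simple Gaussian model with plug-in parameter estimates. This is deliberately not fully Bayesian and reports only a naive standard error; the paper instead uses MCMC for the predictive densities and the Bayesian bootstrap for the estimate's distribution. All names and the toy data are our assumptions.

```python
import numpy as np

def kfold_log_pred_density(y, k=10, seed=0):
    """k-fold cross-validation estimate of the mean log predictive
    density under a Gaussian model with plug-in mean and std estimated
    on each training fold; returns the estimate and a naive SE."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    lpd = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        mu, sd = y[train].mean(), y[train].std(ddof=1)
        z = (y[fold] - mu) / sd
        # Gaussian log density of each held-out point under the fold's fit
        lpd.extend(-0.5 * z ** 2 - np.log(sd) - 0.5 * np.log(2 * np.pi))
    return np.mean(lpd), np.std(lpd, ddof=1) / np.sqrt(len(lpd))

y = np.random.default_rng(6).normal(1.0, 2.0, size=400)
util, util_se = kfold_log_pred_density(y)
```

Comparing two models then reduces to comparing their utility estimates while accounting for the uncertainty in each, which is where resampling the per-point utilities (as the paper does with the Bayesian bootstrap) comes in.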