Results 1–10 of 28
Filtering Via Simulation: Auxiliary Particle Filters
, 1997
Abstract

Cited by 519 (15 self)
This paper analyses the recently suggested particle approach to filtering time series. We suggest that the algorithm is not robust to outliers for two reasons: the design of the simulators and the use of the discrete support to represent the sequentially updating prior distribution. Both problems are tackled in this paper. We believe we have largely solved the first problem and have reduced the order of magnitude of the second. In addition we introduce the idea of stratification into the particle filter which allows us to perform online Bayesian calculations about the parameters which index the models and maximum likelihood estimation. The new methods are illustrated by using a stochastic volatility model and a time series model of angles. Some key words: Filtering, Markov chain Monte Carlo, Particle filter, Simulation, SIR, State space.
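For readers unfamiliar with the SIR baseline this paper builds on and improves, here is a minimal bootstrap (SIR) particle filter for a toy linear-Gaussian state-space model. The model, parameter values, and function names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_filter(y, n_particles=1000, phi=0.9, sigma_state=1.0, sigma_obs=1.0):
    """Bootstrap (SIR) particle filter for the toy model
    x_t = phi * x_{t-1} + state noise,  y_t = x_t + observation noise.
    Returns the filtered means E[x_t | y_1..t]."""
    x = rng.normal(0.0, sigma_state, n_particles)  # initial particle cloud
    means = []
    for yt in y:
        # Propagate particles through the transition density (the "prior" proposal)
        x = phi * x + rng.normal(0.0, sigma_state, n_particles)
        # Weight by the Gaussian observation likelihood (log scale for stability)
        logw = -0.5 * ((yt - x) / sigma_obs) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * x)))
        # Multinomial resampling to combat weight degeneracy
        idx = rng.choice(n_particles, n_particles, p=w)
        x = x[idx]
    return np.array(means)
```

Resampling keeps the discrete support from collapsing onto a few particles; the auxiliary particle filter proposed in the paper instead adapts the proposal using the incoming observation, which is what restores robustness to outliers.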
Stochastic Volatility: Likelihood Inference And Comparison With Arch Models
, 1994
Abstract

Cited by 354 (37 self)
In this paper we exploit Gibbs sampling to provide a likelihood framework for the analysis of stochastic volatility models, demonstrating how to perform either maximum likelihood or Bayesian estimation. The paper includes an extensive Monte Carlo experiment which compares the efficiency of the maximum likelihood estimator with that of quasi-likelihood and Bayesian estimators proposed in the literature. We also compare the fit of the stochastic volatility model to that of ARCH models using the likelihood criterion to illustrate the flexibility of the framework presented. Some key words: ARCH, Bayes estimation, Gibbs sampler, Heteroscedasticity, Maximum likelihood, Quasi-maximum likelihood, Simulation, Stochastic EM algorithm, Stochastic volatility, Stock returns.
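The model being estimated here can be written down in a few lines. As a hypothetical sketch (parameter values are arbitrary, not from the paper), the canonical discrete-time stochastic volatility model generates data as:

```python
import numpy as np

def simulate_sv(T, mu=-1.0, phi=0.95, sigma_eta=0.2, seed=0):
    """Simulate the canonical stochastic volatility model:
    h_t = mu + phi * (h_{t-1} - mu) + eta_t,   eta_t ~ N(0, sigma_eta^2)
    y_t = exp(h_t / 2) * eps_t,                eps_t ~ N(0, 1)
    where h_t is the latent log-volatility. Returns (returns, log_vols)."""
    rng = np.random.default_rng(seed)
    h = np.empty(T)
    # Draw h_0 from the stationary distribution of the AR(1) log-volatility
    h[0] = mu + rng.normal(0.0, sigma_eta / np.sqrt(1.0 - phi**2))
    for t in range(1, T):
        h[t] = mu + phi * (h[t - 1] - mu) + rng.normal(0.0, sigma_eta)
    y = np.exp(h / 2) * rng.normal(0.0, 1.0, T)
    return y, h
```

Because h_t is latent, the likelihood involves a T-dimensional integral; that intractability is exactly what the Gibbs sampling approach in the paper is designed to handle.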
Error Bands for Impulse Responses
 Econometrica
, 1999
Abstract

Cited by 87 (3 self)
We show how correctly to extend known methods for generating error bands in reduced form VAR’s to overidentified models. We argue that the conventional pointwise bands common in the literature should be supplemented with measures of shape uncertainty, and we show how to generate such measures. We focus on bands that characterize the shape of the likelihood. Such bands are not classical confidence regions. We explain that classical confidence regions mix information about parameter location with information about model fit, and hence can be misleading as summaries of the implications of the data for the location of parameters. Because classical confidence regions also present conceptual and computational problems in multivariate time series models, we suggest that likelihood-based bands, rather than approximate confidence bands based on asymptotic theory, be standard in reporting results for this type of model.
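A minimal version of the conventional pointwise bands that the authors argue are incomplete can be computed directly from posterior draws of an impulse response. This sketch assumes the draws are already available; the array shape and quantile levels are illustrative:

```python
import numpy as np

def pointwise_bands(draws, lo=0.16, hi=0.84):
    """Pointwise error bands from posterior draws of an impulse response.
    draws: array of shape (n_draws, horizon), one simulated response per row.
    Returns (lower, upper) band values at each horizon, computed
    independently per horizon -- which is why such bands ignore
    the shape uncertainty the paper emphasizes."""
    return np.quantile(draws, lo, axis=0), np.quantile(draws, hi, axis=0)
```

Because each horizon is summarized separately, two responses with very different shapes can produce identical pointwise bands; the paper's proposal is to supplement these with summaries of cross-horizon (shape) variation.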
Estimation of Stochastic Volatility Models with Diagnostics
 Journal of Econometrics
, 1995
Abstract

Cited by 80 (9 self)
Efficient Method of Moments (EMM) is used to fit the standard stochastic volatility model and various extensions to several daily financial time series. EMM matches to the score of a model determined by data analysis called the score generator. Discrepancies reveal characteristics of data that stochastic volatility models cannot approximate. The two score generators employed here are "Semiparametric ARCH" and "Nonlinear Nonparametric". With the first, the standard model is rejected, although some extensions are accepted. With the second, all versions are rejected. The extensions required for an adequate fit are so elaborate that nonparametric specifications are probably more convenient. Corresponding author: George Tauchen, Duke University, Department of Economics, Social Science Building, Box 90097, Durham NC 27708-0097 USA, phone 1-919-660-1812, FAX 1-919-684-8974, email get@tauchen.econ.duke.edu.
Prediction via Orthogonalized Model Mixing
 JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION
, 1994
Abstract

Cited by 50 (9 self)
In this paper we introduce an approach and algorithms for model mixing in large prediction problems with correlated predictors. We focus on the choice of predictors in linear models, and mix over possible subsets of candidate predictors. Our approach is based on expressing the space of models in terms of an orthogonalization of the design matrix. Advantages are both statistical and computational. Statistically, orthogonalization often leads to a reduction in the number of competing models by eliminating correlations. Computationally, large model spaces cannot be enumerated; recent approaches are based on sampling models with high posterior probability via Markov chains. Based on orthogonalization of the space of candidate predictors, we can approximate the posterior probabilities of models by products of predictor-specific terms. This leads to an importance sampling function for sampling directly from the joint distribution over the model space, without resorting to Markov chains. Comp...
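The orthogonalization step the abstract refers to can be illustrated with a QR decomposition of the design matrix, a common way to decorrelate predictor columns (the paper may use a different construction; this is a sketch only):

```python
import numpy as np

def orthogonalize(X):
    """Orthogonalize the columns of a design matrix X via a (reduced) QR
    decomposition. Q has orthonormal columns spanning the same space as X,
    so regression coefficients on Q decouple across columns, which is the
    property that lets model probabilities factor into per-predictor terms."""
    Q, R = np.linalg.qr(X)
    return Q, R
```

With orthonormal columns, the inclusion or exclusion of one transformed predictor no longer changes the contribution of the others, so posterior model probabilities can be approximated as a product over columns, as the abstract describes.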
Estimating Ratios of Normalizing Constants for Densities with Different Dimensions
 STATISTICA SINICA
, 1997
Abstract

Cited by 17 (2 self)
In Bayesian inference, a Bayes factor is defined as the ratio of posterior odds versus prior odds where posterior odds is simply a ratio of the normalizing constants of two posterior densities. In many practical problems, the two posteriors have different dimensions. For such cases, the current Monte Carlo methods such as the bridge sampling method (Meng and Wong 1996), the path sampling method (Gelman and Meng 1994), and the ratio importance sampling method (Chen and Shao 1994) cannot directly be applied. In this article, we extend importance sampling, bridge sampling, and ratio importance sampling to problems of different dimensions. Then we find global optimal importance sampling, bridge sampling, and ratio importance sampling in the sense of minimizing asymptotic relative mean-square errors of estimators. Implementation algorithms, which can asymptotically achieve the optimal simulation errors, are developed and two illustrative examples are also provided.
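As a toy illustration of estimating a ratio of normalizing constants, here is the plain importance-sampling estimator that bridge and ratio importance sampling refine. The densities are hypothetical Gaussians chosen so the true ratio is known; nothing here is taken from the paper:

```python
import numpy as np

def ratio_is(log_q1, log_q2, draws):
    """Importance-sampling estimate of Z1/Z2 = E_{p2}[ q1(X) / q2(X) ],
    where q1, q2 are *unnormalized* log-densities and `draws` come from
    the normalized density p2 = q2 / Z2."""
    logr = log_q1(draws) - log_q2(draws)
    return np.exp(logr).mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, 200_000)          # draws from p2 = N(0, 4)
est = ratio_is(lambda z: -z**2 / 2.0,      # q1: unnormalized N(0, 1)
               lambda z: -z**2 / 8.0,      # q2: unnormalized N(0, 4)
               x)
# True ratio: sqrt(2*pi) / sqrt(8*pi) = 0.5
```

This estimator works well when p2 dominates p1, but its variance blows up otherwise; bridge sampling replaces the one-sided ratio with draws from both densities and an intermediate "bridge" function, which is the direction the paper develops.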
Estimation methods for stochastic volatility models: a survey
 Journal of Economic Surveys
, 2004
Abstract

Cited by 17 (0 self)
The empirical application of Stochastic Volatility (SV) models has been limited due to the difficulties involved in the evaluation of the likelihood function. However, recently there has been fundamental progress in this area due to the proposal of several new estimation methods that try to overcome this problem while remaining empirically feasible. As a consequence, several extensions of the SV models have been proposed and their empirical implementation is increasing. In this paper, we review the main estimators of the parameters and the volatility of univariate SV models proposed in the literature. We describe the main advantages and limitations of each of the methods from both theoretical and empirical points of view. We complete the survey with an application of the most important procedures to the S&P 500 stock price index.
Model-based clustering of multiple time series
 CEPR Discussion Paper
, 2004
Abstract

Cited by 12 (0 self)
We propose to exploit the attractiveness of pooling relatively short time series that display similar dynamics, but without restricting the pooling to a single group. We suggest estimating the appropriate grouping of time series simultaneously along with the group-specific model parameters. We cast estimation into the Bayesian framework and use Markov chain Monte Carlo simulation methods. We discuss model identification and base model selection on marginal likelihoods. A simulation study documents the efficiency gains in estimation and forecasting that are realized when appropriately grouping the time series of a panel. Two economic applications illustrate the usefulness of the method in analyzing extensions to Markov switching within clusters and to heterogeneity within clusters, respectively. JEL classification: C11, C33, E32
Diagnostics for Time Series Analysis.
, 1997
Abstract

Cited by 10 (1 self)
This paper shows how to combine MCMC and importance sampling to estimate efficiently the sequence of standard normal random variables used to form the goodness of fit statistics to test for the adequacy of a time series model. In particular, the methodology allows testing the adequacy of a very general state space model with unknown parameters and latent variables. The MCMC is run for only a small percentage of the data rather than at each time point as in Kim and Shephard (1994) and functionals at other time points are estimated as weighted averages. The effectiveness of the methodology is studied by an extensive simulation for an autoregressive model which allows for complex interventions. The methodology is also applied to two real examples. The first example determines the goodness of fit of an autoregressive model of zinc concentration. The second example determines the goodness of fit of a stochastic volatility model for U.S. Treasury bill data. Using the methods in the paper we also show how to compute the marginal likelihood of a time series model subject to interventions. Such marginal likelihoods are used for Bayesian model comparison as in Kass and Raftery (1996) and Chib (1995). Geweke (1994) proposed a combination of MCMC and importance sampling to calculate the marginal likelihood of a time series when there are no interventions in the model and our approach extends that of Geweke (1994) to allow for interventions. The connection between our work and the simulated filtering literature is discussed briefly at the end of section 2. The paper is organized as follows. Section 2 introduces the methodology and section 3 describes the test statistics. Section 4 uses simulation to study the effectiveness of the methodology when applied to several autoregressive ...
Bayesian Option Pricing Using Asymmetric GARCH
 CORE DP 9759, Louvain-la-Neuve
, 1997
Abstract

Cited by 9 (1 self)
This paper shows how one can compute option prices from a Bayesian inference viewpoint, using an econometric model for the dynamics of the return and of the volatility of the underlying asset. The proposed evaluation of an option is the predictive expectation of its payoff function. The predictive distribution of this function provides a natural metric with respect to which the predictive option price, or other option evaluations, can be gauged. The proposed method is compared to the Black and Scholes evaluation, into which a predictive mean volatility is plugged, but which does not provide a natural metric. The methods are illustrated using an asymmetric GARCH model with a data set on a stock index in Brussels. The persistence of the volatility process is linked to the prediction horizon and to the option maturity.
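The "predictive expectation of the payoff" idea can be sketched in one step, assuming simulated terminal prices from a predictive distribution are already in hand (the GARCH simulation, parameter uncertainty, and discounting are all omitted; names are illustrative):

```python
import numpy as np

def predictive_option_price(terminal_prices, strike):
    """Predictive call price as the average payoff over simulated terminal
    prices drawn from the predictive distribution of the underlying.
    Discounting is omitted for simplicity in this sketch."""
    payoffs = np.maximum(terminal_prices - strike, 0.0)
    return float(payoffs.mean())
```

In the paper's setup, the same simulated payoffs also yield the full predictive distribution of the payoff, which is the "natural metric" against which the point price can be gauged; a plug-in Black-Scholes price provides no such distribution.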