Results 1–10 of 217
Filtering Via Simulation: Auxiliary Particle Filters
, 1997
"... This paper analyses the recently suggested particle approach to filtering time series. We suggest that the algorithm is not robust to outliers for two reasons: the design of the simulators and the use of the discrete support to represent the sequentially updating prior distribution. Both problems ar ..."
Abstract

Cited by 519 (15 self)
This paper analyses the recently suggested particle approach to filtering time series. We suggest that the algorithm is not robust to outliers for two reasons: the design of the simulators and the use of the discrete support to represent the sequentially updating prior distribution. Both problems are tackled in this paper. We believe we have largely solved the first problem and have reduced the order of magnitude of the second. In addition we introduce the idea of stratification into the particle filter which allows us to perform online Bayesian calculations about the parameters which index the models and maximum likelihood estimation. The new methods are illustrated by using a stochastic volatility model and a time series model of angles. Some key words: Filtering, Markov chain Monte Carlo, Particle filter, Simulation, SIR, State space.
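The auxiliary particle filter builds on the basic sampling-importance-resampling (SIR) scheme that the abstract critiques. The following is a minimal sketch of a bootstrap (SIR) particle filter with the stratified resampling step the paper advocates, applied to an assumed linear-Gaussian toy state-space model (not the paper's auxiliary variant, and all parameter values here are illustrative):

```python
import numpy as np

def particle_filter(y, n_particles=500, phi=0.9, sigma_x=1.0, sigma_y=1.0, seed=0):
    """Bootstrap (SIR) particle filter for the toy model
       x_t = phi * x_{t-1} + N(0, sigma_x^2),  y_t = x_t + N(0, sigma_y^2).
    Returns the filtered means E[x_t | y_1..t]."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma_x, n_particles)  # initial particle cloud
    means = []
    for yt in y:
        # propagate particles through the state transition
        x = phi * x + rng.normal(0.0, sigma_x, n_particles)
        # weight by the Gaussian measurement likelihood (log scale for stability)
        logw = -0.5 * ((yt - x) / sigma_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * x)))
        # stratified resampling: one uniform draw per stratum reduces
        # resampling variance relative to plain multinomial resampling
        u = (rng.random(n_particles) + np.arange(n_particles)) / n_particles
        idx = np.minimum(np.searchsorted(np.cumsum(w), u), n_particles - 1)
        x = x[idx]
    return np.array(means)
```

The discrete support problem the abstract mentions shows up here when an outlier `yt` leaves almost all particles with negligible weight; the paper's auxiliary construction mitigates this by resampling with lookahead to the next observation.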
Stochastic Volatility: Likelihood Inference And Comparison With ARCH Models
, 1994
"... this paper we exploit Gibbs sampling to provide a likelihood framework for the analysis of stochastic volatility models, demonstrating how to perform either maximum likelihood or Bayesian estimation. The paper includes an extensive Monte Carlo experiment which compares the efficiency of the maximum ..."
Abstract

Cited by 354 (37 self)
this paper we exploit Gibbs sampling to provide a likelihood framework for the analysis of stochastic volatility models, demonstrating how to perform either maximum likelihood or Bayesian estimation. The paper includes an extensive Monte Carlo experiment which compares the efficiency of the maximum likelihood estimator with that of quasi-likelihood and Bayesian estimators proposed in the literature. We also compare the fit of the stochastic volatility model to that of ARCH models using the likelihood criterion to illustrate the flexibility of the framework presented. Some key words: ARCH, Bayes estimation, Gibbs sampler, Heteroscedasticity, Maximum likelihood, Quasi-maximum likelihood, Simulation, Stochastic EM algorithm, Stochastic volatility, Stock returns.
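The stochastic volatility model analysed here is conventionally written with a latent log-volatility AR(1). A minimal simulation sketch of that canonical model (parameter values illustrative; the paper's Gibbs sampler for inference is not reproduced):

```python
import numpy as np

def simulate_sv(n, mu=-1.0, phi=0.95, sigma_eta=0.2, seed=0):
    """Simulate the canonical stochastic volatility model:
       h_t = mu + phi * (h_{t-1} - mu) + eta_t,  eta_t ~ N(0, sigma_eta^2)
       y_t = exp(h_t / 2) * eps_t,               eps_t ~ N(0, 1)
    Returns the returns y and the latent log-volatilities h."""
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    # draw h_0 from the stationary distribution of the AR(1)
    h[0] = mu + rng.normal(0.0, sigma_eta / np.sqrt(1 - phi ** 2))
    for t in range(1, n):
        h[t] = mu + phi * (h[t - 1] - mu) + rng.normal(0.0, sigma_eta)
    y = np.exp(h / 2) * rng.standard_normal(n)
    return y, h
```

Because volatility is itself random, the simulated returns are leptokurtic even though the innovations are Gaussian, which is the feature the likelihood comparison with ARCH models turns on.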
Image denoising using a scale mixture of Gaussians in the wavelet domain
 IEEE Trans Image Processing
, 2003
"... Abstract—We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussi ..."
Abstract

Cited by 350 (18 self)
We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vector and a hidden positive scalar multiplier. The latter modulates the local variance of the coefficients in the neighborhood, and is thus able to account for the empirically observed correlation between the coefficient amplitudes. Under this model, the Bayesian least squares estimate of each coefficient reduces to a weighted average of the local linear estimates over all possible values of the hidden multiplier variable. We demonstrate through simulations with images contaminated by additive white Gaussian noise that the performance of this method substantially surpasses that of previously published methods, both visually and in terms of mean squared error.
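The key computation in the abstract — a Bayesian least squares estimate that averages linear (Wiener) estimates over the hidden multiplier — can be illustrated in the scalar case. A sketch under assumed values (a discretized multiplier prior stands in for the paper's continuous prior, and the full neighborhood/covariance machinery is omitted):

```python
import numpy as np

def gsm_shrink(y, sigma_n, z_vals, z_prior):
    """Scalar Bayesian least-squares estimate under a Gaussian scale mixture
    prior: x = sqrt(z) * u with u ~ N(0, 1), observed as y = x + N(0, sigma_n^2).
    The estimate is a posterior-weighted average of Wiener estimates,
    one for each value of the hidden multiplier z."""
    z_vals = np.asarray(z_vals, float)
    prior = np.asarray(z_prior, float)
    var = z_vals + sigma_n ** 2                    # marginal variance of y given z
    # posterior over z: prior(z) * N(y; 0, z + sigma_n^2)
    post = prior * np.exp(-0.5 * y ** 2 / var) / np.sqrt(var)
    post /= post.sum()
    wiener = (z_vals / var) * y                    # linear estimate for each fixed z
    return float(np.sum(post * wiener))
```

Large observed magnitudes shift the posterior toward large `z`, so strong coefficients are shrunk less than weak ones — the adaptive behaviour that distinguishes this estimator from a single global Wiener filter.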
Answering the Skeptics: Yes, Standard Volatility Models Do Provide Accurate Forecasts
"... Volatility permeates modern financial theories and decision making processes. As such, accurate measures and good forecasts of future volatility are critical for the implementation and evaluation of asset and derivative pricing theories as well as trading and hedging strategies. In response to this, ..."
Abstract

Cited by 271 (33 self)
Volatility permeates modern financial theories and decision making processes. As such, accurate measures and good forecasts of future volatility are critical for the implementation and evaluation of asset and derivative pricing theories as well as trading and hedging strategies. In response to this, a voluminous literature has emerged for modeling the temporal dependencies in financial market volatility at the daily and lower frequencies using ARCH and stochastic volatility type models. Most of these studies find highly significant in-sample parameter estimates and pronounced intertemporal volatility persistence. Meanwhile, when judged by standard forecast evaluation criteria, based on the squared or absolute returns over daily or longer forecast horizons, standard volatility models provide seemingly poor forecasts. The present paper demonstrates that, contrary to this contention, in empirically realistic situations the models actually produce strikingly accurate interdaily forecasts f...
Modeling and Forecasting Realized Volatility
, 2002
"... this paper is built. First, although raw returns are clearly leptokurtic, returns standardized by realized volatilities are approximately Gaussian. Second, although the distributions of realized volatilities are clearly rightskewed, the distributions of the logarithms of realized volatilities are a ..."
Abstract

Cited by 265 (34 self)
this paper is built. First, although raw returns are clearly leptokurtic, returns standardized by realized volatilities are approximately Gaussian. Second, although the distributions of realized volatilities are clearly right-skewed, the distributions of the logarithms of realized volatilities are approximately Gaussian. Third, the long-run dynamics of realized logarithmic volatilities are well approximated by a fractionally-integrated long-memory process. Motivated by the three ABDL empirical regularities, we proceed to estimate and evaluate a multivariate model for the logarithmic realized volatilities: a fractionally-integrated Gaussian vector autoregression (VAR). Importantly, our approach explicitly permits measurement errors in the realized volatilities. Comparing the resulting volatility forecasts to those obtained from currently popular daily volatility models and more complicated high-frequency models, we find that our simple Gaussian VAR forecasts generally produce superior forecasts. Furthermore, we show that, given the theoretically motivated and empirically plausible assumption of normally distributed returns conditional on the realized volatilities, the resulting lognormal-normal mixture forecast distribution provides conditionally well-calibrated density forecasts of returns, from which we obtain accurate estimates of conditional return quantiles. In the remainder of this paper, we proceed as follows. We begin in section 2 by formally developing the relevant quadratic variation theory within a standard frictionless arbitrage-free multivariate pricing environment. In section 3 we discuss the practical construction of realized volatilities from high-frequency foreign exchange returns. Next, in section 4 we summarize the salient distributional features of r...
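The object underlying all three regularities is the daily realized volatility built from high-frequency intraday returns. A minimal sketch of its construction (the scaling conventions and sampling frequency are application choices, not taken from the paper):

```python
import numpy as np

def realized_vol(intraday_returns):
    """Daily realized volatility: the square root of the sum of squared
    high-frequency returns within the day, an estimate of the day's
    quadratic variation under the frictionless-market assumptions."""
    r = np.asarray(intraday_returns, float)
    return float(np.sqrt(np.sum(r ** 2)))
```

The paper's modeling then works with `log(realized_vol)`, exploiting the second regularity that the logarithm is approximately Gaussian.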
Evaluating Interval Forecasts
 International Economic Review
, 1997
"... This paper is intended to address the deficiency by clearly defining what is meant by a "good" interval forecast, and describing how to test if a given interval forecast deserves the label "good". One of the motivations of Engle's (1982) classic paper was to form dynamic interval forecasts around po ..."
Abstract

Cited by 166 (10 self)
This paper is intended to address the deficiency by clearly defining what is meant by a "good" interval forecast, and describing how to test if a given interval forecast deserves the label "good". One of the motivations of Engle's (1982) classic paper was to form dynamic interval forecasts around point predictions. The insight was that the intervals should be narrow in tranquil times and wide in volatile times, so that the occurrences of observations outside the interval forecast would be spread out over the sample and not come in clusters. An interval forecast that fails to account for higher-order dynamics may be correct on average (have correct unconditional coverage), but in any given period it will have incorrect conditional coverage characterized by clustered outliers. These concepts will be defined precisely below, and tests for correct conditional coverage are suggested. Chatfield (1993) emphasizes that model misspecification is a much more important source of poor interval forecasting than is simple estimation error. Thus, our testing criterion and the tests of this criterion are model free. In this regard, the approach taken here is similar to the one taken by Diebold and Mariano (1995). This paper can also be seen as establishing a formal framework for the ideas suggested in Granger, White and Kamstra (1989). Recently, financial market participants have shown increasing interest in interval forecasts as measures of uncertainty. Thus, we apply our methods to the interval forecasts provided by J.P. Morgan (1995). Furthermore, the so-called "Value-at-Risk" measures suggested for risk measurement correspond to tail forecasts, i.e., one-sided interval forecasts of portfolio returns. Lopez (1996) evaluates these types of forecasts applying the procedures develo...
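The unconditional-coverage notion in the abstract reduces to a binomial test on the 0/1 sequence of interval violations. A sketch of the likelihood-ratio statistic for that test (the paper's full procedure also tests the conditional, clustering-sensitive property, which is not reproduced here):

```python
import math

def lr_unconditional_coverage(hits, p):
    """Likelihood-ratio statistic for correct unconditional coverage.
    `hits` is the 0/1 sequence of violations (observations falling outside
    the interval forecast); under the null the violation probability is p.
    The statistic is asymptotically chi-squared with 1 degree of freedom."""
    n = len(hits)
    x = sum(hits)
    # clamp the MLE away from 0/1 so the log-likelihood stays finite
    pi = min(max(x / n, 1e-12), 1 - 1e-12)
    ll_null = x * math.log(p) + (n - x) * math.log(1 - p)
    ll_alt = x * math.log(pi) + (n - x) * math.log(1 - pi)
    return 2 * (ll_alt - ll_null)
```

A statistic above the 5% chi-squared critical value of 3.84 rejects correct unconditional coverage; note that a forecast can pass this test and still fail conditionally if its violations arrive in clusters.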
Emerging Equity Market Volatility
, 1997
"... Understanding volatility in emerging capital markets is important for determining the cost of capital and for evaluating direct investment and asset allocation decisions. We provide an approach that allows the relative importance of world and local information to change through time in both the expe ..."
Abstract

Cited by 157 (28 self)
Understanding volatility in emerging capital markets is important for determining the cost of capital and for evaluating direct investment and asset allocation decisions. We provide an approach that allows the relative importance of world and local information to change through time in both the expected returns and conditional variance processes. Our time-series and cross-sectional models analyze the reasons that volatility is different across emerging markets, particularly with respect to the timing of capital market reforms. We find that capital market liberalizations often increase the correlation between local market returns and the world market but do not drive up local market volatility.
Estimation of Tail-Related Risk Measures for Heteroscedastic Financial Time Series: an Extreme Value Approach
 Journal of Empirical Finance
, 1998
"... We propose a method for estimating VaR and related risk measures describing the tail of the conditional distribution of a heteroscedastic financial return series. Our approach combines pseudomaximumlikelihood fitting of GARCH models to estimate the current volatility and extreme value theory (EVT) ..."
Abstract

Cited by 102 (4 self)
We propose a method for estimating VaR and related risk measures describing the tail of the conditional distribution of a heteroscedastic financial return series. Our approach combines pseudo-maximum-likelihood fitting of GARCH models to estimate the current volatility and extreme value theory (EVT) for estimating the tail of the innovation distribution of the GARCH model. We use our method to estimate conditional quantiles (VaR) and conditional expected shortfalls (the expected size of a return exceeding VaR), this being an alternative measure of tail risk with better theoretical properties than the quantile. Using backtesting of historical daily return series we show that our procedure gives better one-day estimates than methods which ignore the heavy tails of the innovations or the stochastic nature of the volatility. With the help of our fitted models we adopt a Monte Carlo approach to estimating the conditional quantiles of returns over multiple-day horizons and find that t...
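The two-step structure of the approach — filter volatility first, then apply a tail quantile to the standardized residuals — can be sketched in stripped-down form. This sketch substitutes an EWMA volatility filter for the paper's GARCH pseudo-ML fit and an empirical quantile for its fitted GPD tail, so it shows only the decomposition, not the EVT refinement:

```python
import numpy as np

def conditional_var(returns, alpha=0.01, lam=0.94):
    """Two-step conditional VaR sketch: (1) EWMA volatility filter as a
    stand-in for GARCH pseudo-maximum-likelihood, (2) a lower-tail quantile
    of the standardized residuals as a stand-in for the fitted EVT tail.
    Returns the next-day VaR at level alpha as a positive number."""
    r = np.asarray(returns, float)
    sig2 = np.empty(len(r))
    sig2[0] = r.var()
    for t in range(1, len(r)):
        sig2[t] = lam * sig2[t - 1] + (1 - lam) * r[t - 1] ** 2
    z = r / np.sqrt(sig2)                 # standardized residuals
    q = np.quantile(z, alpha)             # lower-tail quantile of innovations
    sig_next = np.sqrt(lam * sig2[-1] + (1 - lam) * r[-1] ** 2)
    return float(-q * sig_next)
```

The paper's point is precisely that replacing the empirical quantile here with a GPD-based tail estimate gives better extreme quantiles, and that skipping the volatility filter entirely (unconditional EVT) performs worse in backtests.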
MULTIVARIATE GARCH MODELS: A SURVEY
"... This paper surveys the most important developments in multivariate ARCHtype modelling. It reviews the model specifications and inference methods, and identifies likely directions of future research. ..."
Abstract

Cited by 102 (7 self)
This paper surveys the most important developments in multivariate ARCH-type modelling. It reviews the model specifications and inference methods, and identifies likely directions of future research.
Large Sample Sieve Estimation of Semi-Nonparametric Models
 Handbook of Econometrics
, 2007
"... Often researchers find parametric models restrictive and sensitive to deviations from the parametric specifications; seminonparametric models are more flexible and robust, but lead to other complications such as introducing infinite dimensional parameter spaces that may not be compact. The method o ..."
Abstract

Cited by 92 (17 self)
Often researchers find parametric models restrictive and sensitive to deviations from the parametric specifications; semi-nonparametric models are more flexible and robust, but lead to other complications such as introducing infinite dimensional parameter spaces that may not be compact. The method of sieves provides one way to tackle such complexities by optimizing an empirical criterion function over a sequence of approximating parameter spaces, called sieves, which are significantly less complex than the original parameter space. With different choices of criteria and sieves, the method of sieves is very flexible in estimating complicated econometric models. For example, it can simultaneously estimate the parametric and nonparametric components in semi-nonparametric models with or without constraints. It can easily incorporate prior information, often derived from economic theory, such as monotonicity, convexity, additivity, multiplicity, exclusion and nonnegativity. This chapter describes estimation of semi-nonparametric econometric models via the method of sieves. We present some general results on the large sample properties of the sieve estimates, including consistency of the sieve extremum estimates, convergence rates of the sieve M-estimates, pointwise normality of series estimates of regression functions, root-n asymptotic normality and efficiency of sieve estimates of smooth functionals of infinite dimensional parameters. Examples are used to illustrate the general results.
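The simplest instance of the sieve idea is series least squares: approximate an unknown regression function by a finite basis whose dimension grows slowly with the sample size. A sketch under assumed choices (a plain polynomial basis and an illustrative K ~ n^(1/3) growth rule; the chapter covers many other sieves and rates):

```python
import numpy as np

def sieve_regression(x, y, n_terms=None):
    """Series (sieve) least-squares estimate of E[y|x] for x in [0, 1].
    The sieve is the span of the first K polynomial basis terms; letting
    K grow slowly with n drives both approximation bias and estimation
    variance to zero. Returns a callable estimate of the regression."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(x)
    # illustrative slow-growth rule for the sieve dimension
    K = n_terms or max(2, int(np.ceil(n ** (1 / 3))))
    B = np.vander(x, K, increasing=True)      # columns: 1, x, x^2, ...
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    def fhat(t):
        t = np.atleast_1d(np.asarray(t, float))
        return np.vander(t, K, increasing=True) @ coef
    return fhat
```

For a fixed K this is just parametric least squares on the approximating space; the large-sample theory surveyed in the chapter concerns what happens along the whole sequence of growing spaces.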