Results 1–10 of 376
Filtering via simulation: Auxiliary particle filters
 Journal of the American Statistical Association
, 1999
Stochastic volatility: Likelihood inference and comparison with ARCH models
 Review of Economic Studies
, 1998
A Long Memory Property of Stock Market Returns and a New Model
 Journal of Empirical Finance
, 1993
Abstract

Cited by 534 (19 self)
A ‘long memory’ property of stock market returns is investigated in this paper. It is found that not only is there substantially more correlation between absolute returns than between returns themselves, but the power transformation of the absolute return, |r_t|^d, also has quite high autocorrelation for long lags. It is possible to characterize |r_t|^d as ‘long memory’, and this property is strongest when d is around 1. This result appears to argue against ARCH-type specifications based upon squared returns. But our Monte Carlo study shows that both ARCH-type models based on squared returns and those based on absolute returns can produce this property. A new general class of models is proposed which allows the power δ of the heteroskedasticity equation to be estimated from the data.
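The autocorrelation of |r_t|^d described in this abstract is easy to illustrate on simulated data. The sketch below is not the paper's own study: it uses a GARCH(1,1) simulation with illustrative parameters (omega, alpha, beta are assumptions, not estimates) simply to produce volatility-clustered returns, then measures the lag-10 autocorrelation of |r|^d for several powers d.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_garch(n, omega=0.05, alpha=0.1, beta=0.85):
    """Simulate GARCH(1,1) returns (illustrative volatility-clustered data)."""
    r = np.empty(n)
    h = omega / (1 - alpha - beta)  # start at the unconditional variance
    for t in range(n):
        r[t] = np.sqrt(h) * rng.standard_normal()
        h = omega + alpha * r[t] ** 2 + beta * h
    return r

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

r = simulate_garch(100_000)
for d in (0.5, 1.0, 1.5, 2.0):
    print(f"d={d}: lag-10 autocorr of |r|^d = {autocorr(np.abs(r) ** d, 10):.3f}")
```

Raw returns themselves show essentially no autocorrelation in this setup, while every power of the absolute return does, which is the pattern the abstract reports.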
Modeling and Forecasting Realized Volatility
, 2002
Abstract

Cited by 505 (47 self)
this paper is built. First, although raw returns are clearly leptokurtic, returns standardized by realized volatilities are approximately Gaussian. Second, although the distributions of realized volatilities are clearly right-skewed, the distributions of the logarithms of realized volatilities are approximately Gaussian. Third, the long-run dynamics of realized logarithmic volatilities are well approximated by a fractionally-integrated long-memory process. Motivated by the three ABDL empirical regularities, we proceed to estimate and evaluate a multivariate model for the logarithmic realized volatilities: a fractionally-integrated Gaussian vector autoregression (VAR). Importantly, our approach explicitly permits measurement errors in the realized volatilities. Comparing the resulting volatility forecasts to those obtained from currently popular daily volatility models and more complicated high-frequency models, we find that our simple Gaussian VAR forecasts generally produce superior forecasts. Furthermore, we show that, given the theoretically motivated and empirically plausible assumption of normally distributed returns conditional on the realized volatilities, the resulting lognormal-normal mixture forecast distribution provides conditionally well-calibrated density forecasts of returns, from which we obtain accurate estimates of conditional return quantiles. In the remainder of this paper, we proceed as follows. We begin in section 2 by formally developing the relevant quadratic variation theory within a standard frictionless arbitrage-free multivariate pricing environment. In section 3 we discuss the practical construction of realized volatilities from high-frequency foreign exchange returns. Next, in section 4 we summarize the salient distributional features of r...
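The second regularity in this abstract (realized volatility is right-skewed, its logarithm close to Gaussian) can be reproduced on synthetic data. The sketch below is a toy setup, not the paper's FX data: daily spot volatility is drawn from an assumed lognormal, realized variance is the sum of squared intraday returns, and a simple moment-based skewness compares the two distributions.

```python
import numpy as np

rng = np.random.default_rng(1)

def skewness(x):
    """Standardized third moment of a sample."""
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 3)

# Illustrative setup (not the ABDL data): 1000 "days" of 288 five-minute
# returns, each day with its own lognormally distributed spot volatility.
n_days, m = 1000, 288
sigma = np.exp(0.5 * rng.standard_normal(n_days) - 1.0)       # daily spot vol
intraday = sigma[:, None] / np.sqrt(m) * rng.standard_normal((n_days, m))
rv = np.sum(intraday ** 2, axis=1)                            # realized variance

print(f"skewness of RV      : {skewness(rv):.2f}")            # strongly right-skewed
print(f"skewness of log(RV) : {skewness(np.log(rv)):.2f}")    # near zero
```

The log transform removes essentially all of the skewness, which is what motivates modeling logarithmic realized volatilities with a Gaussian VAR.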
Answering the Skeptics: Yes, Standard Volatility Models Do Provide Accurate Forecasts
Abstract

Cited by 492 (42 self)
Volatility permeates modern financial theories and decision-making processes. As such, accurate measures and good forecasts of future volatility are critical for the implementation and evaluation of asset and derivative pricing theories as well as trading and hedging strategies. In response to this, a voluminous literature has emerged for modeling the temporal dependencies in financial market volatility at the daily and lower frequencies using ARCH and stochastic volatility type models. Most of these studies find highly significant in-sample parameter estimates and pronounced intertemporal volatility persistence. Meanwhile, when judged by standard forecast evaluation criteria, based on the squared or absolute returns over daily or longer forecast horizons, standard volatility models provide seemingly poor forecasts. The present paper demonstrates that, contrary to this contention, in empirically realistic situations the models actually produce strikingly accurate interdaily forecasts f...
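The abstract's point, that squared returns are a very noisy proxy for latent variance, so forecast R² looks poor even for a correct model, can be demonstrated directly. The sketch below is illustrative, not the paper's experiment: it simulates a GARCH(1,1) with assumed parameters so that the true conditional variance h_t is known exactly, then checks the R² of the squared-return proxy against that perfect forecast.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate GARCH(1,1) so the true conditional variance h_t is known exactly.
n, omega, alpha, beta = 200_000, 0.05, 0.1, 0.85
r, h = np.empty(n), np.empty(n)
v = omega / (1 - alpha - beta)
for t in range(n):
    h[t] = v
    r[t] = np.sqrt(v) * rng.standard_normal()
    v = omega + alpha * r[t] ** 2 + beta * v

# Mincer-Zarnowitz style check: correlate the squared-return "proxy"
# with the (here exactly correct) conditional variance forecast.
corr = np.corrcoef(r ** 2, h)[0, 1]
print(f"R^2 of r_t^2 on the true variance: {corr ** 2:.3f}")  # well below 1
```

Even though the forecast is the true conditional variance, the R² is small, because r_t² = h_t z_t² and the z_t² noise dominates; this is exactly why such criteria make standard volatility models look poor.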
Image denoising using a scale mixture of Gaussians in the wavelet domain
 IEEE Trans Image Processing
, 2003
Abstract

Cited by 487 (17 self)
Abstract—We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vector and a hidden positive scalar multiplier. The latter modulates the local variance of the coefficients in the neighborhood, and is thus able to account for the empirically observed correlation between the coefficient amplitudes. Under this model, the Bayesian least squares estimate of each coefficient reduces to a weighted average of the local linear estimates over all possible values of the hidden multiplier variable. We demonstrate through simulations with images contaminated by additive white Gaussian noise that the performance of this method substantially surpasses that of previously published methods, both visually and in terms of mean squared error.
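The estimator described in this abstract can be sketched in one dimension. The code below is a caricature of the idea, not the paper's full wavelet-domain method: a coefficient x = sqrt(z)·u with u ~ N(0,1) and a hidden positive multiplier z is observed as y = x + noise, and the Bayes least-squares estimate is the posterior-weighted average of local Wiener (linear) estimates over z. The noise level, the z grid, and the Jeffreys-like prior on z are all assumptions for illustration.

```python
import numpy as np

sigma_n = 0.5                        # assumed (hypothetical) noise std
z_grid = np.geomspace(1e-2, 1e2, 200)
p_z = 1.0 / z_grid                   # Jeffreys-like prior on z (assumption)
p_z /= p_z.sum()

def bls_estimate(y):
    """E[x | y] = sum_z p(z | y) * (z / (z + sigma_n^2)) * y."""
    var_y = z_grid + sigma_n ** 2                        # Var(y | z)
    lik = np.exp(-0.5 * y ** 2 / var_y) / np.sqrt(var_y) # Gaussian likelihood of y given z
    post = lik * p_z
    post /= post.sum()                                   # discretized p(z | y)
    return np.sum(post * (z_grid / var_y)) * y           # average of Wiener estimates

# Small coefficients get shrunk hard, large ones are mostly kept.
for y in (0.2, 3.0):
    print(f"y = {y:4.1f}  ->  estimate {bls_estimate(y):.3f}")
```

The resulting shrinkage is adaptive: the hidden multiplier lets the estimator behave like a heavy-tailed prior, suppressing small (noise-dominated) coefficients while preserving large ones.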
Evaluating Interval Forecasts
 International Economic Review
, 1997
Abstract

Cited by 305 (11 self)
This paper is intended to address the deficiency by clearly defining what is meant by a "good" interval forecast, and describing how to test if a given interval forecast deserves the label "good". One of the motivations of Engle's (1982) classic paper was to form dynamic interval forecasts around point predictions. The insight was that the intervals should be narrow in tranquil times and wide in volatile times, so that the occurrences of observations outside the interval forecast would be spread out over the sample and not come in clusters. An interval forecast that fails to account for higher-order dynamics may be correct on average (have correct unconditional coverage), but in any given period it will have incorrect conditional coverage characterized by clustered outliers. These concepts will be defined precisely below, and tests for correct conditional coverage are suggested. Chatfield (1993) emphasizes that model misspecification is a much more important source of poor interval forecasting than is simple estimation error. Thus, our testing criterion and the tests of this criterion are model-free. In this regard, the approach taken here is similar to the one taken by Diebold and Mariano (1995). This paper can also be seen as establishing a formal framework for the ideas suggested in Granger, White and Kamstra (1989). Recently, financial market participants have shown increasing interest in interval forecasts as measures of uncertainty. Thus, we apply our methods to the interval forecasts provided by J.P. Morgan (1995). Furthermore, the so-called "Value-at-Risk" measures suggested for risk measurement correspond to tail forecasts, i.e., one-sided interval forecasts of portfolio returns. Lopez (1996) evaluates these types of forecasts applying the procedures develo...
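The unconditional-coverage part of the testing framework this abstract describes reduces to a simple binomial likelihood-ratio test. The sketch below implements that piece only (not the full conditional-coverage test, which also examines the clustering of misses); the example data, 60 misses in 1000 periods for a nominal 95% interval, is invented for illustration.

```python
import numpy as np

def lr_unconditional_coverage(hits, p):
    """Likelihood-ratio test of correct unconditional coverage.

    hits: 0/1 array, 1 when the outcome fell outside the interval forecast;
    p: nominal miss probability (e.g. 0.05 for a 95% interval).
    Returns the LR statistic, asymptotically chi-squared(1) under H0.
    """
    hits = np.asarray(hits)
    n1 = hits.sum()                 # observed misses
    n0 = hits.size - n1             # observed covers
    pi_hat = n1 / hits.size         # empirical miss rate
    def loglik(q):
        return n1 * np.log(q) + n0 * np.log(1 - q)
    return 2 * (loglik(pi_hat) - loglik(p))

# Hypothetical hit sequence: 60 misses out of 1000 for a 95% interval.
hits = np.concatenate([np.ones(60), np.zeros(940)])
lr = lr_unconditional_coverage(hits, 0.05)
print(f"LR = {lr:.2f}  (reject correct coverage at 5% if LR > 3.84)")
```

A 6% empirical miss rate is not far enough from the nominal 5% to reject here; the paper's key point is that passing this unconditional test is not sufficient, because the misses may still arrive in clusters.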
MULTIVARIATE GARCH MODELS: A SURVEY
Abstract

Cited by 235 (9 self)
This paper surveys the most important developments in multivariate ARCH-type modelling. It reviews the model specifications and inference methods, and identifies likely directions of future research.
Econometric analysis of realised volatility and its use in estimating stochastic volatility models
 Journal of the Royal Statistical Society, Series B
, 2002