Results 1-10 of 44
Bayesian inference for nonlinear multivariate diffusion models observed with error
, 2006
Abstract

Cited by 66 (10 self)
Diffusion processes governed by stochastic differential equations (SDEs) are a well-established tool for modelling continuous-time data from a wide range of areas. Consequently, techniques have been developed to estimate diffusion parameters from partial and discrete observations. Likelihood-based inference can be problematic as closed-form transition densities are rarely available. One widely used solution involves the introduction of latent data points between every pair of observations to allow an Euler-Maruyama approximation of the true transition densities to become accurate. In recent literature, Markov chain Monte Carlo (MCMC) methods have been used to sample the posterior distribution of latent data and model parameters; however, naive schemes suffer from a mixing problem that worsens with the degree of augmentation. In this paper, we explore an MCMC scheme whose performance is not adversely affected by the number of latent values. We illustrate the methodology by estimating parameters governing an autoregulatory gene network, using partial and discrete data that are subject to measurement error.
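The latent-data augmentation this abstract describes rests on the Euler-Maruyama discretization: m - 1 imputed points between two observation times make the piecewise-Gaussian approximation accurate. A minimal sketch, using an illustrative Ornstein-Uhlenbeck drift and constant diffusion rather than the gene-network model of the paper:

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t0, t1, m, rng):
    """Simulate an SDE path on [t0, t1] with m Euler-Maruyama steps,
    i.e. m - 1 latent points imputed between the two observation times."""
    dt = (t1 - t0) / m
    path = np.empty(m + 1)
    path[0] = x0
    for i in range(m):
        dW = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
        path[i + 1] = path[i] + drift(path[i]) * dt + diffusion(path[i]) * dW
    return path

# Illustrative Ornstein-Uhlenbeck dynamics (not the paper's model)
rng = np.random.default_rng(0)
path = euler_maruyama(lambda x: -0.5 * x, lambda x: 0.3,
                      x0=1.0, t0=0.0, t1=1.0, m=10, rng=rng)
```

As m grows the discretization error vanishes, which is precisely why naive MCMC over the imputed points mixes poorly and motivates the schemes discussed above.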
Bayesian sequential inference for nonlinear multivariate diffusions
 Statistics and Computing
, 2006
Abstract

Cited by 51 (6 self)
In this paper, we adapt recently developed simulation-based sequential algorithms to the Bayesian analysis of discretely observed diffusion processes. The estimation framework involves the introduction of m − 1 latent data points between every pair of observations. Sequential MCMC methods are then used to sample the posterior distribution of the latent data and the model parameters online. The method is applied to the estimation of parameters in a simple stochastic volatility (SV) model of the U.S. short-term interest rate. We also provide a simulation study to validate our method, using synthetic data generated by the SV model with parameters calibrated to match weekly observations of the U.S. short-term interest rate.
Iterated Filtering
, 2011
Abstract

Cited by 14 (3 self)
Inference for partially observed Markov process models has been a longstanding methodological challenge with many scientific and engineering applications. Iterated filtering algorithms maximize the likelihood function for partially observed Markov process models by solving a recursive sequence of filtering problems. We present new theoretical results pertaining to the convergence of iterated filtering algorithms implemented via sequential Monte Carlo filters. This theory complements the growing body of empirical evidence that iterated filtering algorithms provide an effective inference strategy for scientific models of nonlinear dynamic systems. The first step in our theory involves studying a new recursive approach for maximizing the likelihood function of a latent variable model, when this likelihood is evaluated via importance sampling. This leads to the consideration of an iterated importance sampling algorithm which serves as a simple special case of iterated filtering, and may have applicability in its own right.
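The sequential Monte Carlo filter at the heart of iterated filtering can be sketched as a bootstrap particle filter returning a log-likelihood estimate, which the outer iteration then maximizes. The linear-Gaussian toy model and noise scales below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def particle_loglik(y, theta, n_particles, rng):
    """Bootstrap particle filter log-likelihood estimate for an
    illustrative AR(1)-plus-noise model:
      x_t = theta * x_{t-1} + N(0, 0.5^2),   y_t = x_t + N(0, 0.5^2)."""
    x = rng.normal(0.0, 1.0, n_particles)                 # initial particles
    loglik = 0.0
    for yt in y:
        x = theta * x + rng.normal(0.0, 0.5, n_particles)        # propagate
        w = np.exp(-0.5 * ((yt - x) / 0.5) ** 2)                 # obs. weights (unnormalized)
        loglik += np.log(w.mean() + 1e-300)                      # accumulate log-likelihood
        x = x[rng.choice(n_particles, n_particles, p=w / w.sum())]  # resample
    return loglik

rng = np.random.default_rng(1)
ll = particle_loglik(y=[0.2, -0.1, 0.4], theta=0.8, n_particles=200, rng=rng)
```

Iterated filtering wraps such a filter in a loop that perturbs the parameters at each time step and shrinks the perturbations across iterations; the sketch above shows only the inner likelihood evaluation.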
Sequential learning, predictive regressions, and optimal portfolio returns, working paper
, 2009
Abstract

Cited by 8 (1 self)
This paper develops particle filtering and learning for sequential inference in empirical asset pricing models. We provide a computationally feasible toolset for sequential parameter learning, hypothesis testing, and model monitoring, incorporating multiple observable variables, unobserved stochastic volatility, and unobserved “drifting” regression coefficients. Sequential inference allows us to observe how the views of economic decision makers evolve in real time. Empirically, we analyze time-series predictability of equity returns, using both the traditional dividend yield and the net payout yield, which incorporates issuances and repurchases. We find that the data reject the traditional model for both predictors, in favor of models with drifting coefficients or stochastic volatility. We study the optimal portfolio allocation problem under parameter, state variable, and model uncertainty, and show that the Bayesian portfolios are more stable and have better out-of-sample performance than rolling regressions.
Bayesian Inference for Irreducible Diffusion Processes Using the Pseudo-Marginal Approach
, 2010
Abstract

Cited by 7 (0 self)
In this article we examine two relatively new MCMC methods which allow for Bayesian inference in diffusion models. First, the Monte Carlo within Metropolis (MCWM) algorithm (O’Neill, Balding, Becker, Serola and Mollison, 2000) uses an importance sampling approximation for the likelihood and yields a limiting stationary distribution that can be made arbitrarily “close” to the posterior distribution (MCWM is not a standard Metropolis-Hastings algorithm, however). The second method, described in Beaumont (2003) and generalized in Andrieu and Roberts (2009), introduces auxiliary variables and utilizes a standard Metropolis-Hastings algorithm on the enlarged space; this method preserves the original posterior distribution. When applied to diffusion models, this approach can be viewed as a generalization of the popular data augmentation schemes that sample jointly from the missing paths and the parameters of the diffusion volatility. We show that increasing the number of auxiliary variables dramatically increases the acceptance rates in the MCMC algorithm (compared to basic data augmentation schemes), allowing for rapid convergence and mixing. The efficacy of these methods is demonstrated in a simulation study of the Cox-Ingersoll-Ross (CIR) model and an analysis of a real-world dataset.
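The second (pseudo-marginal) method above plugs an unbiased likelihood estimate into an otherwise standard Metropolis-Hastings ratio, and the chain still targets the exact posterior. A minimal sketch, with a toy noisy log-likelihood standing in for the importance-sampling or particle estimator of an actual diffusion model:

```python
import numpy as np

def pseudo_marginal_mh(loglik_hat, log_prior, theta0, n_iter, step, rng):
    """Pseudo-marginal Metropolis-Hastings: the acceptance ratio uses a noisy
    likelihood estimate; crucially, the current estimate is carried along with
    the state rather than recomputed, which preserves the exact posterior."""
    theta, ll = theta0, loglik_hat(theta0, rng)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.normal()           # random-walk proposal
        ll_prop = loglik_hat(prop, rng)              # fresh estimate at proposal
        log_alpha = ll_prop + log_prior(prop) - ll - log_prior(theta)
        if np.log(rng.uniform()) < log_alpha:
            theta, ll = prop, ll_prop                # keep estimate with the state
        chain[i] = theta
    return chain

# Toy example: standard-normal log-likelihood observed through noise
rng = np.random.default_rng(2)
chain = pseudo_marginal_mh(
    loglik_hat=lambda t, r: -0.5 * t**2 + 0.1 * r.normal(),
    log_prior=lambda t: 0.0, theta0=0.0, n_iter=500, step=1.0, rng=rng)
```

MCWM differs only in re-estimating the likelihood at the *current* point on every iteration, which is why it targets an approximation rather than the exact posterior.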
Exploring Time-Varying Jump Intensities: Evidence from S&P 500 Returns and Options
, 2008
Abstract

Cited by 6 (0 self)
Standard empirical investigations of jump dynamics in returns and volatility are fairly complicated due to the presence of multiple latent continuous-time factors. We present a new discrete-time framework that combines GARCH processes with rich specifications of jumps in returns and volatility. Our models can be estimated with ease using standard maximum likelihood techniques. We provide a tractable risk-neutralization framework for this class of models which allows for separate identification of risk premia for the jump and normal innovations. We anchor our models in the continuous-time literature by providing continuous-time limits of the models. The models are evaluated by return fitting on a long sample of S&P 500 index returns as well as by option valuation on a large option data set. We find strong empirical support for time-varying jump intensities. A model with a jump intensity that is affine in the conditional variance performs particularly well both in return fitting and option valuation. Our implementation allows for multiple jumps per day, and we find evidence of this most notably on Black Monday in October 1987. Our results also confirm the importance of jump risk premia for option valuation: jumps cannot significantly improve the performance of option pricing models unless sizeable jump risk premia are present. JEL Classification: G12
Investor and central bank uncertainty and fear measures embedded in index options
, 2010
Abstract

Cited by 5 (0 self)
We provide a structural Bayesian equilibrium learning model that captures the interaction between investors' and the central bank's uncertainty about fundamentals. In our model, central bank policy is able to affect fundamental state transitions, and investors can learn about future fundamental states from observing policy variables. We show that investors' fear measures, implied volatility (ATMIV) and put-call implied volatility ratios (P/C), lead industrial capacity utilization, which the central bank reacts to, so the fear measures can be used to predict interest rates. The model endogenously generates several of the time-series properties of option prices, including the counter- (pro-) cyclicality of ATMIV (P/C), the V-shape (inverse V-shape) relation between ATMIV (P/C) and monetary policy variables, the positive relation between the level of and absolute changes in ATMIV, the negative beta of volatility as a priced systematic risk factor, and an economically significant amount of time variation in the volatility premium.
Particle filtering
 In
, 2008
Abstract

Cited by 4 (0 self)
Models with latent state variables abound in economics and finance. These state variables capture changes in the economic environment that are not directly observed by econometricians and/or economic decision makers. Classic examples include stochastic volatility models, regression models with time-varying coefficients, jump-diffusion models, and …
Particle filtering and parameter learning
, 2007
Abstract

Cited by 4 (0 self)
This paper provides a new approach for sequentially learning parameters and states in a wide class of state space models using particle filters. Our approach generates direct i.i.d. samples from a particle approximation to the joint posterior distribution of both parameters and latent states, avoiding the use of, and the degeneracies inherent in, sequential importance sampling. We illustrate the efficiency of our approach by sequentially learning parameters and filtering states in two models: a log-stochastic volatility model and a robust version of the Kalman filter model with t-errors in both the observation and state equations. In both cases, we show using simulated data that our approach efficiently learns the parameters and states sequentially, generating higher effective sample sizes than existing algorithms. We use the approach for two real data examples, sequentially learning in a stochastic volatility model of Nasdaq stock returns and about predictable components in a model of core inflation.
Modeling Stochastic Volatility with Leverage and Jumps: A ‘Smooth’ Particle Filtering Approach
, 2008
Abstract

Cited by 3 (0 self)
In this paper we provide a unified methodology for conducting likelihood-based inference on the unknown parameters of a general class of discrete-time stochastic volatility models, characterized by both a leverage effect and jumps in returns. Given the nonlinear/non-Gaussian state-space form, approximating the likelihood for the parameters is conducted with output generated by the particle filter. Methods are employed to ensure that the approximating likelihood is continuous as a function of the unknown parameters, thus enabling the use of Newton-Raphson-type maximization algorithms. Our approach is robust and efficient relative to alternative Markov chain Monte Carlo schemes employed in such contexts. The technique is applied to daily returns data for various stock price indices. We find strong evidence in favour of a leverage effect in all cases. Jumps are an important component in two out of the four series we consider.
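A common device for making a particle-filter likelihood continuous in the parameters is common random numbers: reusing the same underlying draws for every parameter value. A hedged sketch of that general idea (the fixed-seed trick below is illustrative; the paper's own smoothing of the resampling step is more refined, and the AR(1)-plus-noise model is an assumption, not the SV-with-leverage-and-jumps model above):

```python
import numpy as np

def crn_loglik(theta, y, seed, n_particles=500):
    """Particle-filter log-likelihood with common random numbers: the seed is
    fixed, so evaluations at nearby theta reuse the same draws and the estimate
    varies far less erratically in theta, helping gradient-based maximization.
    Note: plain multinomial resampling can still introduce small jumps; the
    'smooth' methods in the paper address exactly that residual discontinuity."""
    rng = np.random.default_rng(seed)          # identical randomness for every theta
    x = rng.normal(0.0, 1.0, n_particles)
    loglik = 0.0
    for yt in y:
        x = theta * x + rng.normal(0.0, 0.5, n_particles)     # propagate
        w = np.exp(-0.5 * ((yt - x) / 0.5) ** 2)              # observation weights
        loglik += np.log(w.mean() + 1e-300)
        x = x[rng.choice(n_particles, n_particles, p=w / w.sum())]
    return loglik

y = [0.1, 0.3, -0.2]
ll_a = crn_loglik(0.50, y, seed=3)
ll_b = crn_loglik(0.51, y, seed=3)   # nearby theta, same underlying draws
```

With independent seeds the two evaluations would differ by Monte Carlo noise of the same order as the change in theta, defeating Newton-Raphson-type optimizers; with common random numbers most of that noise cancels.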