Results 1–10 of 48
Exact and computationally efficient likelihood-based estimation for discretely observed diffusion processes
Journal of the Royal Statistical Society, Series B: Statistical Methodology, 2006
Abstract

Cited by 115 (21 self)
The objective of this paper is to present a novel methodology for likelihood-based inference for discretely observed diffusions. We propose Monte Carlo methods, which build on recent advances in the exact simulation of diffusions, for performing maximum likelihood and Bayesian estimation.
Bayesian parameter inference for stochastic biochemical network models using particle MCMC
Interface Focus, 2011
Abstract

Cited by 56 (7 self)
Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters which must be estimated from time course data. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but algorithms based on particle MCMC turn out to be a very effective, if computationally intensive, approach to the problem.
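The particle MCMC scheme this abstract refers to can be sketched as particle marginal Metropolis-Hastings: a bootstrap particle filter supplies an unbiased likelihood estimate, which is plugged into a random-walk MH chain. A minimal sketch on a hypothetical AR(1)-plus-noise model (the model, function names, and tuning constants are illustrative, not taken from the paper):

```python
import numpy as np

def particle_loglik(y, theta, n_particles=200, rng=None):
    """Bootstrap particle filter estimate of log p(y | theta) for a toy
    AR(1)-plus-noise model: x_t = theta * x_{t-1} + v_t, y_t = x_t + w_t,
    with v_t, w_t ~ N(0, 1).  Model and names are illustrative."""
    rng = np.random.default_rng(rng)
    x = rng.standard_normal(n_particles)                   # initial particles
    loglik = 0.0
    for yt in y:
        x = theta * x + rng.standard_normal(n_particles)   # propagate
        logw = -0.5 * (yt - x) ** 2                        # N(x, 1) obs. density, up to a constant
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean()) - 0.5 * np.log(2 * np.pi)
        x = rng.choice(x, size=n_particles, p=w / w.sum()) # multinomial resampling
    return loglik

def pmmh(y, theta0=0.0, n_iters=500, step=0.1, rng=None):
    """Particle marginal Metropolis-Hastings: the unbiased particle estimate
    stands in for the intractable likelihood inside a random-walk MH chain
    (flat prior on theta for simplicity)."""
    rng = np.random.default_rng(rng)
    theta, ll = theta0, particle_loglik(y, theta0, rng=rng)
    chain = []
    for _ in range(n_iters):
        prop = theta + step * rng.standard_normal()
        ll_prop = particle_loglik(y, prop, rng=rng)
        if np.log(rng.uniform()) < ll_prop - ll:           # accept/reject
            theta, ll = prop, ll_prop
        chain.append(theta)
    return np.array(chain)
```

Because the likelihood estimate is unbiased, the chain targets the exact posterior despite the Monte Carlo noise, which is the key property exploited by particle MCMC.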
Hierarchical models in the brain
PLoS Computational Biology, 2008
Abstract

Cited by 46 (9 self)
This paper describes a general model that subsumes many parametric models for continuous data. The model comprises hidden layers of state-space or dynamic causal models, arranged so that the output of one provides input to another. The ensuing hierarchy furnishes a model for many types of data, of arbitrary complexity. Special cases range from the general linear model for static data to generalised convolution models, with system noise, for nonlinear time-series analysis. Crucially, all of these models can be inverted using exactly the same scheme, namely dynamic expectation maximisation. This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among apparently diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.
Statistical Aspects of the fractional stochastic calculus
The Annals of Statistics, 2007
Abstract

Cited by 23 (6 self)
We apply the techniques of stochastic integration with respect to fractional Brownian motion and the theory of regularity and supremum estimation for stochastic processes to study the maximum likelihood estimator (MLE) for the drift parameter of stochastic processes satisfying stochastic equations driven by fractional Brownian motion with any level of Hölder regularity (any Hurst parameter). We prove existence and strong consistency of the MLE for linear and nonlinear equations. We also prove that a version of the MLE using only discrete observations is still a strongly consistent estimator.
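The driving noise in this setting can be simulated directly from its covariance function, which is useful for checking estimators numerically. A minimal O(n³) sketch via Cholesky factorisation (a generic sampler, not the paper's estimator; the function name and interface are made up):

```python
import numpy as np

def fbm_path(hurst, n, T=1.0, rng=None):
    """Sample fractional Brownian motion at n equispaced times in (0, T]
    via a Cholesky factor of its covariance
    Cov(B_s, B_t) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H}).
    O(n^3), so for illustration only."""
    rng = np.random.default_rng(rng)
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * hurst) + u ** (2 * hurst) - np.abs(u - s) ** (2 * hurst))
    cov[np.diag_indices_from(cov)] += 1e-12     # tiny jitter for numerical stability
    L = np.linalg.cholesky(cov)                 # cov is positive definite for H in (0, 1)
    return t, L @ rng.standard_normal(n)
```

For H = 1/2 the covariance reduces to min(s, t), so the sampler reproduces standard Brownian motion, which gives a quick sanity check.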
On simulated likelihood of discretely observed diffusion processes and comparison to closed form approximation
Journal of Computational and Graphical Statistics, 2007
Abstract

Cited by 17 (1 self)
This article focuses on two methods to approximate the log-likelihood function for univariate diffusions: 1) the simulation approach using a modified Brownian bridge as the importance sampler; and 2) the recent closed-form approach. For the case of constant volatility, we give a theoretical justification of the modified Brownian bridge sampler by showing that it is exactly a Brownian bridge. We also discuss computational issues in the simulation approach, such as accelerating the numerical variance stabilizing transformation, computing derivatives of the simulated log-likelihood, and choosing initial values of parameter estimates. The two approaches are compared in the context of financial applications with annualized parameter values, where the diffusion model has an unknown transition density and no analytical variance stabilizing transformation. The closed-form expansion, particularly the second-order closed-form, is found to be computationally efficient and very accurate when the observation frequency is monthly or higher. It is more accurate in the center than in the tail of the transition density. The simulation approach combined with the variance stabilizing transformation is found to be more reliable than the closed-form approach when the observation frequency is lower. Both methods perform better when the volatility level is lower, but the simulation method is more robust to the volatility structure of the diffusion model. When applied to two well-known datasets of daily observations, the two methods yield similar parameter estimates in both datasets but slightly different log-likelihoods in the case of higher volatility.
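The bridge proposal in method 1) can be sketched for the constant-volatility case, where it reduces to sampling an exact Brownian bridge between consecutive observations: each imputed point is drawn from its conditional law given the previous point and the fixed endpoint. A minimal sketch for unit diffusion coefficient (names and discretisation are illustrative, not the article's code):

```python
import numpy as np

def brownian_bridge_path(x0, x1, dt, n_sub, rng=None):
    """Sample a path from x0 at time 0 to x1 at time dt on n_sub interior
    points, drawing each point from its exact conditional law given the
    previous point and the fixed endpoint.  For unit diffusion coefficient
    this is an exact Brownian bridge."""
    rng = np.random.default_rng(rng)
    h = dt / (n_sub + 1)
    path = np.empty(n_sub + 2)
    path[0], path[-1] = x0, x1
    t = 0.0
    for k in range(1, n_sub + 1):
        remaining = dt - t                               # time left to the endpoint
        mean = path[k - 1] + (x1 - path[k - 1]) * h / remaining
        var = h * (remaining - h) / remaining            # bridge conditional variance
        path[k] = mean + np.sqrt(var) * rng.standard_normal()
        t += h
    return path
```

With state-dependent volatility the same recursion, applied after a variance stabilizing transformation, yields the "modified" bridge used as the importance sampler.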
Monte Carlo maximum likelihood estimation for discretely observed diffusion processes
The Annals of Statistics, 2009
Abstract

Cited by 16 (1 self)
This paper introduces a Monte Carlo method for maximum likelihood inference in the context of discretely observed diffusion processes. The method gives unbiased and a.s. continuous estimators of the likelihood function for a family of diffusion models, and its performance in numerical examples is computationally efficient. It uses a recently developed technique for the exact simulation of diffusions, and involves no discretization error. We show that, under regularity conditions, the Monte Carlo MLE converges a.s. to the true MLE. For data size n → ∞, we show that the number of Monte Carlo iterations should be tuned as O(n^{1/2}), and we demonstrate the consistency properties of the Monte Carlo MLE as an estimator of the true parameter value.
Inference for stochastic volatility models using time change transformations, 2007
Bayesian Inference for Irreducible Diffusion Processes Using the Pseudo-Marginal Approach, 2010
Abstract

Cited by 7 (0 self)
In this article we examine two relatively new MCMC methods which allow for Bayesian inference in diffusion models. First, the Monte Carlo within Metropolis (MCWM) algorithm (O’Neill, Balding, Becker, Eerola and Mollison, 2000) uses an importance sampling approximation for the likelihood and yields a limiting stationary distribution that can be made arbitrarily “close” to the posterior distribution (MCWM is not a standard Metropolis-Hastings algorithm, however). The second method, described in Beaumont (2003) and generalized in Andrieu and Roberts (2009), introduces auxiliary variables and utilizes a standard Metropolis-Hastings algorithm on the enlarged space; this method preserves the original posterior distribution. When applied to diffusion models, this approach can be viewed as a generalization of the popular data augmentation schemes that sample jointly from the missing paths and the parameters of the diffusion volatility. We show that increasing the number of auxiliary variables dramatically increases the acceptance rates in the MCMC algorithm (compared to basic data augmentation schemes), allowing for rapid convergence and mixing. The efficacy of these methods is demonstrated in a simulation study of the Cox-Ingersoll-Ross (CIR) model and an analysis of a real-world dataset.
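The contrast between the two algorithms can be sketched generically: both plug an estimated log-likelihood into Metropolis-Hastings, and differ only in whether the estimate at the current point is recycled (pseudo-marginal, exact target when the estimate is unbiased) or refreshed every iteration (MCWM, approximate target). A sketch with a flat prior; the function name and toy target in the test are illustrative:

```python
import numpy as np

def noisy_mh(loglik_hat, theta0, n_iters, step, refresh=False, rng=None):
    """Random-walk Metropolis-Hastings driven by an estimated log-likelihood.
    refresh=False recycles the current estimate (pseudo-marginal);
    refresh=True re-estimates it each iteration (MCWM)."""
    rng = np.random.default_rng(rng)
    theta, ll = theta0, loglik_hat(theta0, rng)
    out = []
    for _ in range(n_iters):
        if refresh:                          # MCWM refreshes at the current point too
            ll = loglik_hat(theta, rng)
        prop = theta + step * rng.standard_normal()
        ll_prop = loglik_hat(prop, rng)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        out.append(theta)
    return np.array(out)
```

Recycling the estimate makes the chain a standard MH algorithm on an enlarged space that includes the auxiliary randomness of the estimator, which is why the pseudo-marginal variant leaves the exact posterior invariant.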
Likelihood-based Inference for a Class of Multivariate Diffusions with Unobserved Paths, 2007
Abstract

Cited by 5 (1 self)
This paper presents a Markov chain Monte Carlo algorithm for a class of multivariate diffusion models with unobserved paths. This class is of high practical interest as it includes most diffusion-driven stochastic volatility models. The algorithm is based on a data augmentation scheme where the paths are treated as missing data. However, unless these paths are transformed so that the dominating measure is independent of any parameters, the algorithm becomes reducible. The methodology developed in Roberts and Stramer (2001, Biometrika 88(3):603–621) circumvents the problem for scalar diffusions. We extend this framework to the class of models of this paper by introducing an appropriate reparametrisation of the likelihood that can be used to construct an irreducible data augmentation scheme. Practical implementation issues are considered and the methodology is applied to simulated data from the Heston model.