Results 1-10 of 66
A survey of sequential Monte Carlo methods for economics and finance
, 2009
Abstract

Cited by 10 (1 self)
This paper serves as an introduction and survey for economists to the field of sequential Monte Carlo methods, which are also known as particle filters. Sequential Monte Carlo methods are simulation-based algorithms used to compute the high-dimensional and/or complex integrals that arise regularly in applied work. These methods are becoming increasingly popular in economics and finance, from dynamic stochastic general equilibrium models in macroeconomics to option pricing. The objective of this paper is to explain the basics of the methodology, provide references to the literature, and cover some of the theoretical results that justify the methods in practice.
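As a minimal illustration of the kind of algorithm this survey covers, the following sketches a bootstrap particle filter for a linear-Gaussian AR(1) state-space model. The model, parameter values, and function names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(y, n_particles=500, phi=0.9, sigma_x=1.0, sigma_y=0.5):
    """Bootstrap particle filter for x_t = phi*x_{t-1} + v_t, y_t = x_t + e_t.

    Returns the filtered means E[x_t | y_{1:t}].  The model and all
    parameter values are illustrative assumptions.
    """
    T = len(y)
    x = rng.normal(0.0, 1.0, n_particles)          # initial particle cloud
    means = np.empty(T)
    for t in range(T):
        # Propagate through the state transition (bootstrap proposal).
        x = phi * x + rng.normal(0.0, sigma_x, n_particles)
        # Weight by the observation likelihood p(y_t | x_t).
        logw = -0.5 * ((y[t] - x) / sigma_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = np.sum(w * x)
        # Multinomial resampling to counter weight degeneracy.
        x = x[rng.choice(n_particles, n_particles, p=w)]
    return means

# Simulate data from the same model, then filter it.
T = 100
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal(0.0, 1.0)
y = x_true + rng.normal(0.0, 0.5, T)
est = bootstrap_pf(y)
```

The resampling step is what distinguishes the particle filter from plain importance sampling: it discards low-weight particles so the cloud keeps tracking the state.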
Bayesian nonparametric inference of switching linear dynamical systems
, 2010
Abstract

Cited by 9 (1 self)
Many complex dynamical phenomena can be effectively modeled by a system that switches among a set of conditionally linear dynamical modes. We consider two such models: the switching linear dynamical system (SLDS) and the switching vector autoregressive (VAR) process. Our Bayesian nonparametric approach utilizes a hierarchical Dirichlet process prior to learn an unknown number of persistent, smooth dynamical modes. We additionally employ automatic relevance determination to infer a sparse set of dynamic dependencies, allowing us to learn SLDS with varying state dimension or switching VAR processes with varying autoregressive order. We develop a sampling algorithm that combines a truncated approximation to the Dirichlet process with efficient joint sampling of the mode and state sequences. The utility and flexibility of our model are demonstrated on synthetic data, sequences of dancing honey bees, the IBOVESPA stock index, and a maneuvering target tracking application. Index Terms—Autoregressive processes, Bayesian methods, hidden Markov models, state-space methods, time series analysis.
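To make the model class concrete, the following simulates a two-mode switching AR(1) process with a "sticky" (persistent) mode sequence, loosely in the spirit of the switching VAR processes described above. The two modes, coefficients, and self-transition probability are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_switching_ar(T=200, coeffs=(0.95, -0.5), p_stay=0.98, sigma=0.1):
    """Simulate a two-mode switching AR(1) process.

    The mode z_t follows a sticky two-state Markov chain (self-transition
    probability p_stay), and the observation follows
    x_t = coeffs[z_t] * x_{t-1} + noise.  All values are illustrative.
    """
    z = np.zeros(T, dtype=int)
    x = np.zeros(T)
    for t in range(1, T):
        # Stay in the current mode with high probability, else switch.
        z[t] = z[t - 1] if rng.random() < p_stay else 1 - z[t - 1]
        x[t] = coeffs[z[t]] * x[t - 1] + rng.normal(0.0, sigma)
    return z, x

z, x = simulate_switching_ar()
```

The inference problem the paper addresses is the reverse direction: recovering the mode sequence z, the per-mode dynamics, and the number of modes from x alone.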
On the Use of Backward Simulation in the Particle Gibbs Sampler
Abstract

Cited by 8 (5 self)
The particle Gibbs (PG) sampler was introduced in [1] as a way to incorporate a particle filter (PF) in a Markov chain Monte Carlo (MCMC) sampler. The resulting method was shown to be an efficient tool for joint Bayesian parameter and state inference in nonlinear, non-Gaussian state-space models. However, the mixing of the PG kernel can be very poor when there is severe degeneracy in the PF. Hence, the success of the PG sampler relies heavily on the, often unrealistic, assumption that we can implement a PF without suffering from any considerable degeneracy. However, as pointed out by Whiteley [2] in the discussion following [1], the mixing can be improved by adding a backward simulation step to the PG sampler. Here, we investigate this further, derive an explicit PG sampler with backward simulation (denoted PGBSi) and show that this is indeed a valid MCMC method. Furthermore, we show in a numerical example that backward simulation can lead to a considerable increase in performance over the standard PG sampler. Index Terms—Particle Markov chain Monte Carlo, particle filter, particle Gibbs, backward simulation, Gibbs sampling.
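The backward-simulation idea can be sketched as follows: after a forward PF pass has stored the particles and filter weights, a single state trajectory is drawn backwards, reweighting each particle generation by the transition density to the state already chosen at the next time step. The AR(1) transition and all names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def backward_simulate(particles, logw_filt, phi=0.9, sigma_x=1.0):
    """Draw one trajectory x_{1:T} backwards from a stored particle system.

    particles: (T, N) array of particle positions from a forward PF.
    logw_filt: (T, N) array of the corresponding log filter weights.
    Assumes the AR(1) transition x_{t+1} ~ N(phi*x_t, sigma_x^2),
    an illustrative model.
    """
    T, N = particles.shape
    traj = np.empty(T)
    # Sample the final state from the terminal filter weights.
    w = np.exp(logw_filt[-1] - logw_filt[-1].max())
    j = rng.choice(N, p=w / w.sum())
    traj[-1] = particles[-1, j]
    # Walk backwards, reweighting by the transition density to traj[t+1].
    for t in range(T - 2, -1, -1):
        logw = logw_filt[t] - 0.5 * ((traj[t + 1] - phi * particles[t]) / sigma_x) ** 2
        w = np.exp(logw - logw.max())
        j = rng.choice(N, p=w / w.sum())
        traj[t] = particles[t, j]
    return traj

# Demo on a small synthetic particle system with uniform weights.
T, N = 20, 100
particles = rng.normal(0.0, 1.0, (T, N))
logw_filt = np.zeros((T, N))
traj = backward_simulate(particles, logw_filt)
```

Because the backward pass can pick any stored particle at each time, the returned trajectory is not restricted to the few ancestral lines that survive forward resampling, which is what mitigates the degeneracy problem.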
Ancestor Sampling for Particle Gibbs
Abstract

Cited by 8 (5 self)
We present a novel method in the family of particle MCMC methods that we refer to as particle Gibbs with ancestor sampling (PGAS). Similarly to the existing PG with backward simulation (PGBS) procedure, we use backward sampling to (considerably) improve the mixing of the PG kernel. Instead of using separate forward and backward sweeps as in PGBS, however, we achieve the same effect in a single forward sweep. We apply the PGAS framework to the challenging class of non-Markovian state-space models. We develop a truncation strategy for these models that is applicable in principle to any backward-simulation-based method, but which is particularly well suited to the PGAS framework. In particular, as we show in a simulation study, PGAS can yield an order-of-magnitude improvement in accuracy relative to PGBS due to its robustness to the truncation error. Several application examples are discussed, including Rao-Blackwellized particle smoothing and inference in degenerate state-space models.
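The core of ancestor sampling can be illustrated in a few lines: at each time step of the forward sweep, the reference trajectory's ancestor index is redrawn among the current particles, with weights proportional to the filter weights times the transition density to the reference state. The AR(1) transition and the function name below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def ancestor_sample(particles_prev, logw_prev, x_ref_t, phi=0.9, sigma_x=1.0):
    """One ancestor-sampling step: draw the index of the ancestor of the
    reference state x_ref_t among the previous particle generation.

    Weights are w_{t-1}^i * f(x_ref_t | x_{t-1}^i) under an illustrative
    AR(1) transition N(phi * x, sigma_x^2).
    """
    logw = logw_prev - 0.5 * ((x_ref_t - phi * particles_prev) / sigma_x) ** 2
    w = np.exp(logw - logw.max())
    return rng.choice(len(particles_prev), p=w / w.sum())

# The far particle (100.0) has negligible transition probability to the
# reference state 0.0, so the near particle is selected.
particles_prev = np.array([0.0, 100.0])
logw_prev = np.zeros(2)        # equal filter weights
idx = ancestor_sample(particles_prev, logw_prev, x_ref_t=0.0)
```

Reassigning the reference particle's ancestry in this way breaks up the reference trajectory during the forward sweep itself, which is how PGAS obtains the mixing benefit of backward simulation without a separate backward pass.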
A semiparametric Bayesian approach to Wiener system identification
Abstract

Cited by 2 (1 self)
We consider a semiparametric, i.e. a mixed parametric/nonparametric, model of a Wiener system. We use a state-space model for the linear dynamical system and a nonparametric Gaussian process (GP) model for the static nonlinearity. The GP model is a flexible model that can describe different types of nonlinearities while avoiding strong assumptions such as monotonicity. We derive an inferential method based on recent advances in Monte Carlo statistical methods, known as Particle Markov Chain Monte Carlo (PMCMC). The idea underlying PMCMC is to use a particle filter (PF) to generate a sample state trajectory in a Markov chain Monte Carlo sampler. We use a recently proposed PMCMC sampler, denoted particle Gibbs with backward simulation, which has been shown to be efficient even when we use very few particles in the PF. The resulting method is used in a simulation study to identify two different Wiener systems with non-invertible nonlinearities.
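For context, a Wiener system of the kind considered here, i.e. linear dynamics followed by a static nonlinearity, can be simulated in a few lines. The first-order dynamics and the tanh nonlinearity are illustrative stand-ins; the paper models the nonlinearity nonparametrically with a GP rather than fixing it:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_wiener(u, a=0.8, b=1.0, sigma_v=0.1, sigma_e=0.1):
    """Simulate a simple Wiener system: first-order linear dynamics
    followed by a static saturating nonlinearity (tanh, as an
    illustrative stand-in for an unknown nonlinearity).
    """
    T = len(u)
    x = 0.0
    y = np.empty(T)
    for t in range(T):
        x = a * x + b * u[t] + rng.normal(0.0, sigma_v)   # linear dynamics
        y[t] = np.tanh(x) + rng.normal(0.0, sigma_e)      # static nonlinearity + noise
    return y

u = rng.normal(0.0, 1.0, 50)   # input signal
y = simulate_wiener(u)
```

Identification is the inverse problem: recover both the linear dynamics (a, b) and the shape of the nonlinearity from the input-output pair (u, y).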
Modeling and Control of Induction Machine, Technip Edition
, 1995
Abstract

Cited by 2 (0 self)
Sparsity-promoting priors have become increasingly popular over recent years due to an increased number of regression and classification applications involving a large number of predictors. In time series applications where observations are collected over time, it is often unrealistic to assume that the underlying sparsity pattern is fixed. We propose here an original class of flexible Bayesian linear models for dynamic sparsity modelling. The proposed class of models expands upon the existing Bayesian literature on sparse regression using generalized multivariate hyperbolic distributions. The properties of the models are explored through both analytic results and simulation studies. We demonstrate the model on a financial application where it is shown that it accurately represents the patterns seen in the analysis of stock and derivative data, and is able to detect major events by filtering an artificial portfolio of assets. Keywords: generalized hyperbolic, Gaussian mixture models, sparsity, dynamic regression
An Introduction to Particle Methods with Financial Applications, in Numerical Methods in Finance of the Springer Proceedings in Mathematics
, 2012
Abstract

Cited by 2 (0 self)
The aim of this article is to give a general introduction to the theory of interacting particle methods, and an overview of its applications to computational finance.
Priors over Recurrent Continuous Time Processes
Abstract

Cited by 2 (0 self)
We introduce the Gamma-Exponential Process (GEP), a prior over a large family of continuous-time stochastic processes. A hierarchical version of this prior (HGEP; the Hierarchical GEP) yields a useful model for analyzing complex time series. Models based on HGEPs display many attractive properties: conjugacy, exchangeability and closed-form predictive distributions for the waiting times, and exact Gibbs updates for the time-scale parameters. After establishing these properties, we show how posterior inference can be carried out efficiently using particle MCMC methods [1]. This yields an MCMC algorithm that can resample entire sequences atomically while avoiding the complications of introducing the slice and stick auxiliary variables of the beam sampler [2]. We applied our model to the problem of estimating disease progression in multiple sclerosis [3], and to RNA evolutionary modeling [4]. In both domains, we found that our model outperformed the standard rate-matrix estimation approach.
Convergence Rate of Markov Chain Methods for Genomic Motif Discovery
, 2011
Abstract

Cited by 1 (0 self)
We analyze the convergence rate of a popular Gibbs sampling method used for statistical discovery of gene regulatory binding motifs in DNA sequences. This sampler satisfies a very strong form of ergodicity (uniform). However, we show that, due to multimodality of the posterior distribution, the rate of convergence often decreases exponentially as a function of the length of the DNA sequence. Specifically, we show that this occurs whenever there is more than one true repeating pattern in the data. In practice there are typically multiple, even numerous, such patterns in biological data, the goal being to detect the most well-conserved and frequently occurring of these. Our findings match empirical results, in which the motif-discovery Gibbs sampler has exhibited such poor convergence that it is used only for finding modes of the posterior distribution (candidate motifs) rather than for obtaining samples from that distribution. Ours appear to be the first meaningful bounds on the convergence rate of a Markov chain method for sampling from a multimodal posterior distribution, as a function of statistical quantities like the number of observations. Keywords: Gibbs sampler; DNA; slow mixing; spectral gap; binding motifs; multimodal.
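For context, one sweep of the kind of site sampler analyzed here updates each sequence's motif start position in turn, conditioned on the motif positions in all other sequences. A simplified single-sequence update might look like the following; the uniform background model, fixed motif width, and all names are illustrative assumptions, not the exact sampler studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
ALPHABET = "ACGT"

def gibbs_motif_step(seqs, positions, i, width, pseudo=1.0):
    """Resample the motif start position in sequence i, given the current
    motif positions in all other sequences.

    Builds a position-weight matrix (PWM) from the motif windows of the
    other sequences, then samples a new start in sequence i proportional
    to the PWM likelihood of each candidate window.  A simplified sketch
    with a uniform background model.
    """
    counts = np.full((width, 4), pseudo)       # pseudocounts for smoothing
    for j, seq in enumerate(seqs):
        if j == i:
            continue
        for k in range(width):
            counts[k, ALPHABET.index(seq[positions[j] + k])] += 1
    pwm = counts / counts.sum(axis=1, keepdims=True)

    n_starts = len(seqs[i]) - width + 1
    logp = np.zeros(n_starts)
    for s in range(n_starts):
        for k in range(width):
            logp[s] += np.log(pwm[k, ALPHABET.index(seqs[i][s + k])])
    w = np.exp(logp - logp.max())
    positions[i] = rng.choice(n_starts, p=w / w.sum())
    return positions

# Four toy sequences with a planted AAAA motif.
seqs = ["CCCCAAAACCCC", "GGAAAAGGGGGG", "TTTTTTAAAATT", "AAAACCCCGGGG"]
positions = [4, 2, 6, 0]
positions = gibbs_motif_step(seqs, positions, i=0, width=4)
```

The multimodality discussed in the abstract arises because several distinct repeating patterns can each act as a high-probability configuration of the positions vector, and moving between them requires many coordinated single-sequence updates.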