Results 1–10 of 13
A tutorial on particle filtering and smoothing: fifteen years later
 OXFORD HANDBOOK OF NONLINEAR FILTERING
, 2011
"... Optimal estimation problems for nonlinear nonGaussian statespace models do not typically admit analytic solutions. Since their introduction in 1993, particle filtering methods have become a very popular class of algorithms to solve these estimation problems numerically in an online manner, i.e. r ..."
Abstract

Cited by 72 (9 self)
Optimal estimation problems for nonlinear non-Gaussian state-space models do not typically admit analytic solutions. Since their introduction in 1993, particle filtering methods have become a very popular class of algorithms to solve these estimation problems numerically in an online manner, i.e. recursively as observations become available, and are now routinely used in fields as diverse as computer vision, econometrics, robotics and navigation. The objective of this tutorial is to provide a complete, up-to-date survey of this field as of 2008. Basic and advanced particle methods for filtering as well as smoothing are presented.
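For readers new to the area, the basic bootstrap (sequential importance resampling) filter that this tutorial builds on can be sketched in a few lines. This is a minimal sketch, not the tutorial's own code; the model callbacks `init_sample`, `transition_sample` and `obs_loglik` are illustrative names for functions the user supplies.

```python
import numpy as np

def bootstrap_particle_filter(y, n_particles, init_sample, transition_sample,
                              obs_loglik, rng=None):
    """Bootstrap (sequential importance resampling) filter.

    y                 : observations y_1, ..., y_T
    init_sample(n)    : draw n particles from the prior p(x_1)
    transition_sample : propagate particles through p(x_t | x_{t-1})
    obs_loglik(y_t,x) : log p(y_t | x_t), evaluated per particle
    Returns the filtering means E[x_t | y_{1:t}] for t = 1..T.
    """
    rng = rng or np.random.default_rng(0)
    x = init_sample(n_particles)
    means = []
    for y_t in y:
        logw = obs_loglik(y_t, x)
        w = np.exp(logw - logw.max())      # numerically stabilised weights
        w /= w.sum()
        means.append(np.sum(w * x))        # filtering estimate at time t
        idx = rng.choice(n_particles, size=n_particles, p=w)  # resample
        x = transition_sample(x[idx])      # propagate (bootstrap proposal)
    return np.array(means)
```

With a linear-Gaussian random walk the filter tracks the hidden state far better than the raw observations, which is the "online" behaviour the abstract describes.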
A sequential smoothing algorithm with linear computational cost
, 2008
"... In this paper we propose a new particle smoother that has a computational complexity of O(N), where N is the number of particles. This compares favourably with the O(N 2) computational cost of most smoothers and will result in faster rates of convergence for fixed computational cost. The new method ..."
Abstract

Cited by 16 (1 self)
In this paper we propose a new particle smoother that has a computational complexity of O(N), where N is the number of particles. This compares favourably with the O(N²) computational cost of most smoothers and will result in faster rates of convergence for fixed computational cost. The new method also overcomes some of the degeneracy problems we identify in many existing algorithms. Through simulation studies we show that substantial gains in efficiency are obtained for practical amounts of computational cost. It is shown both through these simulation studies, and on the analysis of an athletics data set, that our new method also substantially outperforms the simple Filter-Smoother (the only other smoother with computational cost that is linear in the number of particles).
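The baseline the abstract compares against, the genealogy-tracing Filter-Smoother, is easy to sketch; the paper's own O(N) smoother is more involved and not reproduced here. This sketch assumes the transition function acts elementwise (so particle indices are preserved), and all function names are illustrative.

```python
import numpy as np

def filter_smoother(y, n, init, trans, obs_loglik, rng=None):
    """Genealogy-tracing Filter-Smoother: run a particle filter, record
    the resampling ancestry, and read smoothed trajectories off the
    surviving family trees. Linear in n, but exhibits the path
    degeneracy the abstract refers to: early portions of the
    trajectories collapse onto few distinct ancestors."""
    rng = rng or np.random.default_rng(0)
    x = init(n)
    hist, anc = [], []
    for t, y_t in enumerate(y):
        logw = obs_loglik(y_t, x)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n, size=n, p=w)   # resample by current weights
        x = x[idx]                         # filtering particles at time t
        hist.append(x)
        anc.append(idx)                    # parent indices, one step back
        if t < len(y) - 1:
            x = trans(x)                   # propagate (elementwise)
    # Trace each surviving lineage backwards through the ancestor indices.
    T = len(y)
    paths = np.empty((T, n))
    idx = np.arange(n)
    for t in range(T - 1, -1, -1):
        paths[t] = hist[t][idx]
        idx = anc[t][idx]
    return paths                           # paths[:, j] is one smoothed draw
```

Counting the distinct values at the first and last time steps makes the degeneracy visible: far fewer unique ancestors survive at time 1 than at time T.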
Adaptive methods for sequential importance sampling with application to state space models
 Statistics and Computing
, 2008
"... Abstract. In this paper we discuss new adaptive proposal strategies for sequential Monte Carlo algorithms—also known as particle filters—relying on criteria evaluating the quality of the proposed particles. The choice of the proposal distribution is a major concern and can dramatically influence the ..."
Abstract

Cited by 9 (3 self)
Abstract. In this paper we discuss new adaptive proposal strategies for sequential Monte Carlo algorithms—also known as particle filters—relying on criteria evaluating the quality of the proposed particles. The choice of the proposal distribution is a major concern and can dramatically influence the quality of the estimates. Thus, we show how the long-used coefficient of variation of the weights (suggested by Kong et al. (1994)) can be used for estimating the chi-square distance between the target and instrumental distributions of the auxiliary particle filter. As a by-product of this analysis we obtain an auxiliary adjustment multiplier weight type for which this chi-square distance is minimal. Moreover, we establish an empirical estimate, of linear complexity, of the Kullback-Leibler divergence between the involved distributions. Guided by these results, we discuss the adaptive design of the particle filter proposal distribution and illustrate the methods on a numerical example.
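The coefficient-of-variation criterion the abstract mentions is cheap to compute from the log-weights; the sketch below (function names are illustrative) also shows the closely related effective sample size, ESS = N / (1 + CV²).

```python
import numpy as np

def cv_squared(logw):
    """Squared coefficient of variation of the normalized importance
    weights (Kong et al., 1994); equals 0 when all weights are equal
    and grows as the weights degenerate."""
    w = np.exp(logw - np.max(logw))
    w /= w.sum()
    n = w.size
    return n * np.sum((w - 1.0 / n) ** 2)

def effective_sample_size(logw):
    """ESS = N / (1 + CV^2) = 1 / sum(w_i^2) for normalized weights;
    ranges from 1 (total degeneracy) to N (uniform weights)."""
    w = np.exp(logw - np.max(logw))
    w /= w.sum()
    return 1.0 / np.sum(w * w)
```

Equal log-weights give CV² = 0 and ESS = N; a single dominant weight drives the ESS towards 1, which is the degeneracy signal adaptive proposals try to avoid.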
Efficient Bayesian analysis of multiple changepoint models with dependence across segments
, 2010
"... We consider Bayesian analysis of a class of multiple changepoint models. While there are a variety of efficient ways to analyse these models if the parameters associated with each segment are independent, there are few general approaches for models where the parameters are dependent. Under the assu ..."
Abstract

Cited by 4 (0 self)
We consider Bayesian analysis of a class of multiple changepoint models. While there are a variety of efficient ways to analyse these models if the parameters associated with each segment are independent, there are few general approaches for models where the parameters are dependent. Under the assumption that the dependence is Markov, we propose an efficient online algorithm for sampling from an approximation to the posterior distribution of the number and position of the changepoints. In a simulation study, we show that the error introduced by this approximation is negligible. We illustrate the power of our approach through fitting piecewise polynomial models to data, under a model which allows for either continuity or discontinuity of the underlying curve at each changepoint. This method is competitive with, or outperforms, other methods for inferring curves from noisy data; and, uniquely, it allows for inference of the locations of discontinuities in the underlying curve.
New probabilistic inference algorithms that harness the strengths of variational and Monte Carlo methods
, 2009
"... ..."
doi: 10.1098/rsfs.2011.0047 References
, 2011
"... Subject collections Email alerting service This article cites 35 articles, 6 of which can be accessed free ..."
Abstract
 Add to MetaCart
Subject collections Email alerting service This article cites 35 articles, 6 of which can be accessed free
Abstract
Approximate Bayesian inference on the basis of summary statistics is well-suited to complex problems for which the likelihood is either mathematically or computationally intractable. However, methods that use rejection suffer from the curse of dimensionality when the number of summary statistics is increased. Here we propose a machine-learning approach to the estimation of the posterior density by introducing two innovations. The new method fits a nonlinear conditional heteroscedastic regression of the parameter on the summary statistics, and then adaptively improves estimation using importance sampling. The new algorithm is compared to the state-of-the-art approximate Bayesian methods, and achieves considerable reduction of the computational burden in two examples of inference in statistical genetics and in a queueing model.
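The paper's estimator is a nonlinear heteroscedastic regression; the simpler local-linear adjustment it builds on (in the spirit of Beaumont et al.) conveys the idea and is sketched below. Function and argument names are illustrative, not the paper's.

```python
import numpy as np

def abc_linear_adjust(theta, s, s_obs, keep_frac=0.1):
    """Rejection ABC with a local-linear regression adjustment.

    theta : (n,) parameter draws from the prior
    s     : (n, d) summary statistics simulated under each draw
    s_obs : (d,) observed summaries
    Keeps the keep_frac fraction of draws whose summaries are closest
    to s_obs, regresses theta on (s - s_obs) among them, and shifts
    each kept draw to its predicted value at s = s_obs.
    """
    dist = np.linalg.norm(s - s_obs, axis=1)
    keep = dist <= np.quantile(dist, keep_frac)
    th, ds = theta[keep], (s - s_obs)[keep]
    X = np.hstack([np.ones((ds.shape[0], 1)), ds])
    beta, *_ = np.linalg.lstsq(X, th, rcond=None)
    return th - ds @ beta[1:]              # adjusted posterior sample
```

On a conjugate toy model (theta ~ N(0, 1), one summary s = theta + noise) the adjusted sample recovers the exact posterior N(0.5, 0.5) for s_obs = 1 up to Monte Carlo error, which is a convenient sanity check.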
Posterior Convergence and Model Estimation in Bayesian Changepoint Problems
, 808
"... Summary. We study the posterior distribution of the Bayesian multiple changepoint regression problem when the number and the locations of the changepoints are unknown. While it is relatively easy to apply the general theory to obtain the O(1 / √ n) rate up to some logarithmic factor, showing the ..."
Abstract
Summary. We study the posterior distribution of the Bayesian multiple changepoint regression problem when the number and the locations of the changepoints are unknown. While it is relatively easy to apply the general theory to obtain the O(1/√n) rate up to some logarithmic factor, showing the exact parametric rate of convergence of the posterior distribution requires additional work and assumptions. Additionally, we demonstrate the asymptotic normality of the segment levels under these assumptions. For inferences on the number of changepoints, we show that the Bayesian approach can produce a consistent posterior estimate. Finally, we argue that the pointwise posterior convergence property as demonstrated may have poor finite-sample performance, in that a consistent posterior for model selection necessarily implies that the maximal squared risk will be asymptotically larger than the optimal O(1/√n) rate. This is the Bayesian version of the same phenomenon that has been noted and studied by other authors.
A Monte Carlo EM algorithm for discretely observed Diffusions, Jump-diffusions and Lévy-driven Stochastic Differential Equations
 INTERNATIONAL JOURNAL OF MATHEMATICAL MODELS AND METHODS IN APPLIED SCIENCES
"... Abstract — Stochastic differential equations driven by standard Brownian motion(s) or Lévy processes are by far the most popular models in mathematical finance, but are also frequently used in engineering and science. A key feature of the class of models is that the parameters are easy to interpret ..."
Abstract
Abstract — Stochastic differential equations driven by standard Brownian motion(s) or Lévy processes are by far the most popular models in mathematical finance, but are also frequently used in engineering and science. A key feature of this class of models is that the parameters are easy to interpret for anyone working with ordinary differential equations, making connections between statistics and other scientific fields far smoother. We present an algorithm for computing the (historical probability measure) maximum likelihood estimate for parameters in diffusions, jump-diffusions and Lévy processes. This is done by introducing a simple, yet computationally efficient, Monte Carlo Expectation Maximization algorithm. The smoothing distribution is computed using resampling, making the framework very general. The algorithm is evaluated on diffusions (CIR, Heston), a jump-diffusion (Bates) and Lévy processes (NIG, NIG-CIR), on simulated data and market data from the S&P 500 and VIX, all with satisfactory results.
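As a flavour of the models involved (though not of the authors' MCEM algorithm itself), the CIR diffusion mentioned above can be simulated with a full-truncation Euler-Maruyama scheme. Parameter names follow the usual dX = κ(θ − X) dt + σ√X dW convention; this is an illustrative sketch only.

```python
import numpy as np

def simulate_cir(x0, kappa, theta, sigma, dt, n_steps, rng=None):
    """Full-truncation Euler-Maruyama scheme for the CIR process
    dX = kappa*(theta - X) dt + sigma*sqrt(X) dW. Truncating X at 0
    inside the drift and diffusion terms keeps sqrt() real even when
    the discretised path dips below zero."""
    rng = rng or np.random.default_rng(0)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        xp = max(x[i], 0.0)                # truncated state
        x[i + 1] = (x[i] + kappa * (theta - xp) * dt
                    + sigma * np.sqrt(xp * dt) * rng.normal())
    return x
```

Over a long horizon the simulated path mean-reverts to θ, which makes the scheme easy to sanity-check before plugging such simulators into an EM-style estimation loop.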