Sequential Monte Carlo Methods for Dynamic Systems
Journal of the American Statistical Association, 1998
Cited by 474 (9 self)

Abstract:
A general framework for using Monte Carlo methods in dynamic systems is provided and its wide applications indicated. Under this framework, several currently available techniques are studied and generalized to accommodate more complex features. All of these methods are partial combinations of three ingredients: importance sampling and resampling, rejection sampling, and Markov chain iterations. We deliver a guideline on how they should be used and under what circumstances each method is most suitable. Through the analysis of differences and connections, we consolidate these methods into a generic algorithm by combining desirable features. In addition, we propose a general use of Rao-Blackwellization to improve performance. Examples from econometrics and engineering are presented to demonstrate the importance of Rao-Blackwellization and to compare different Monte Carlo procedures. Keywords: Blind deconvolution; Bootstrap filter; Gibbs sampling; Hidden Markov model; Kalman filter; Markov...
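Two of the three ingredients named in this abstract, importance sampling and resampling, can be illustrated with a minimal bootstrap particle filter. The AR(1)-plus-noise model, parameter values, and function name below are illustrative assumptions of this sketch, not taken from the paper:

```python
import numpy as np

def bootstrap_filter(y, n_particles=500, sigma_x=1.0, sigma_y=1.0, rng=None):
    """Bootstrap particle filter sketch for the toy model
       x_t = 0.9 x_{t-1} + N(0, sigma_x^2),  y_t = x_t + N(0, sigma_y^2).
    Combines importance sampling (likelihood weights) with multinomial
    resampling; returns the filtering-mean estimates."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = rng.normal(0.0, 1.0, n_particles)        # initial particle cloud
    means = []
    for yt in y:
        # propagate particles through the state equation (prior proposal)
        x = 0.9 * x + rng.normal(0.0, sigma_x, n_particles)
        # importance weights from the Gaussian observation likelihood
        logw = -0.5 * ((yt - x) / sigma_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.dot(w, x)))        # weighted filtering mean
        # multinomial resampling to rejuvenate the cloud
        x = x[rng.choice(n_particles, n_particles, p=w)]
    return np.array(means)
```

Resampling after every step is the simplest scheme; the paper's generic algorithm also covers rejection sampling and Markov chain moves, which this sketch omits.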
Mixture Kalman filters
2000
Cited by 157 (5 self)

Abstract:
In treating dynamic systems, sequential Monte Carlo methods use discrete samples to represent a complicated probability distribution and use rejection sampling, importance sampling and weighted resampling to complete the online 'filtering' task. We propose a special sequential Monte Carlo method, the mixture Kalman filter, which uses a random mixture of Gaussian distributions to approximate a target distribution. It is designed for online estimation and prediction of conditional and partial conditional dynamic linear models, which are themselves a class of widely used nonlinear systems and also serve to approximate many others. Compared with a few available filtering methods, including Monte Carlo methods, the gain in efficiency that is provided by the mixture Kalman filter can be very substantial. Another contribution of the paper is the formulation of many nonlinear systems into conditional or partial conditional linear form, to which the mixture Kalman filter can be applied. Examples in target tracking and digital communications are given to demonstrate the procedures proposed.
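A hedged sketch of the idea: for a conditionally linear-Gaussian model, each particle carries the sufficient statistics (mean, variance) of an exact Kalman filter rather than a raw state sample. The two-regime switching model, parameters, and function name below are my own toy example, not one of the paper's applications:

```python
import numpy as np

def mixture_kalman_filter(y, n_particles=200, rng=None):
    """Mixture Kalman filter sketch for a toy switching model:
       s_t ~ Uniform{0,1},  x_t = a[s_t] x_{t-1} + N(0, q),  y_t = x_t + N(0, r).
    Particles sample the indicator s_t; the conditional Gaussian (m, P) is
    updated exactly by a Kalman step and weighted by the predictive likelihood."""
    rng = np.random.default_rng(1) if rng is None else rng
    a = np.array([0.5, 0.95])      # regime-dependent AR coefficients
    q, r = 0.1, 0.5                # state and observation noise variances
    m = np.zeros(n_particles)      # conditional means given sampled regimes
    P = np.ones(n_particles)       # conditional variances
    est = []
    for yt in y:
        s = rng.integers(0, 2, n_particles)   # sample regime indicators s_t
        mp = a[s] * m                          # Kalman prediction
        Pp = a[s] ** 2 * P + q
        S = Pp + r                             # innovation variance
        logw = -0.5 * (np.log(2 * np.pi * S) + (yt - mp) ** 2 / S)
        K = Pp / S                             # Kalman gain
        m = mp + K * (yt - mp)                 # exact conditional update
        P = (1.0 - K) * Pp
        w = np.exp(logw - logw.max())
        w /= w.sum()
        est.append(float(np.dot(w, m)))        # Rao-Blackwellized mean
        idx = rng.choice(n_particles, n_particles, p=w)
        m, P = m[idx], P[idx]                  # resample sufficient statistics
    return np.array(est)
```

Averaging the conditional means with the particle weights is the Rao-Blackwellized estimate that underlies the method's efficiency gain over a plain particle filter.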
Convergence of Sequential Monte Carlo Methods
Sequential Monte Carlo Methods in Practice, 2000
Cited by 149 (11 self)

Abstract:
Bayesian estimation problems where the posterior distribution evolves over time through the accumulation of data arise in many applications in statistics and related fields. Recently, a large number of algorithms and applications based on sequential Monte Carlo methods (also known as particle filtering methods) have appeared in the literature to solve this class of problems; see (Doucet, de Freitas & Gordon, 2001) for a survey. However, few of these methods have been proved to converge rigorously. The purpose of this paper is to address this issue. We present a general sequential Monte Carlo (SMC) method which includes most of the important features present in current SMC methods. This method generalizes and encompasses many recent algorithms. Under mild regularity conditions, we obtain rigorous convergence results for this general SMC method and therefore give theoretical backing for the validity of all the algorithms that can be obtained as particular cases of it.
Monte Carlo smoothing for nonlinear time series
Journal of the American Statistical Association, 2004
Cited by 95 (15 self)

Abstract:
We develop methods for performing smoothing computations in general state-space models. The methods rely on a particle representation of the filtering distributions, and their evolution through time using sequential importance sampling and resampling ideas. In particular, novel techniques are presented for generation of sample realizations of historical state sequences. This is carried out in a forward-filtering backward-smoothing procedure which can be viewed as the nonlinear, non-Gaussian counterpart of standard Kalman filter-based simulation smoothers in the linear Gaussian case. Convergence in the mean-squared error sense of the smoothed trajectories is proved, showing the validity of our proposed method. The methods are tested in a substantial application for the processing of speech signals represented by a time-varying autoregression and parameterised in terms of time-varying partial correlation coefficients, comparing the results of our algorithm with those from a simple smoother based upon the filtered trajectories.
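The forward-filtering backward-smoothing recursion can be sketched as follows: store the particle clouds and weights on the forward pass, then draw a smoothed trajectory backwards by reweighting with the transition density. The AR(1) observation model and all parameter choices are illustrative assumptions, not the paper's speech-processing application:

```python
import numpy as np

def ffbs(y, n_particles=300, phi=0.9, sigma_x=1.0, sigma_y=1.0, rng=None):
    """Forward-filtering backward-sampling sketch for the toy model
       x_t = phi x_{t-1} + N(0, sigma_x^2),  y_t = x_t + N(0, sigma_y^2).
    Returns one sampled realization of the historical state sequence."""
    rng = np.random.default_rng(2) if rng is None else rng
    T = len(y)
    X = np.empty((T, n_particles))   # stored particle clouds
    W = np.empty((T, n_particles))   # stored normalized filter weights
    x = rng.normal(0.0, 1.0, n_particles)
    for t in range(T):               # forward filtering pass
        x = phi * x + rng.normal(0.0, sigma_x, n_particles)
        logw = -0.5 * ((y[t] - x) / sigma_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        X[t], W[t] = x, w
        x = x[rng.choice(n_particles, n_particles, p=w)]
    traj = np.empty(T)               # backward sampling pass
    traj[T - 1] = X[T - 1, rng.choice(n_particles, p=W[T - 1])]
    for t in range(T - 2, -1, -1):
        # reweight time-t particles by the transition density to traj[t+1]
        logb = np.log(W[t]) - 0.5 * ((traj[t + 1] - phi * X[t]) / sigma_x) ** 2
        b = np.exp(logb - logb.max())
        b /= b.sum()
        traj[t] = X[t, rng.choice(n_particles, p=b)]
    return traj
```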
Sequential Monte Carlo Methods for Multiple Target Tracking and Data Fusion
IEEE Transactions on Signal Processing, 2002
Cited by 81 (5 self)

Abstract:
The classical particle filter deals with the estimation of one state process conditioned on a realization of one observation process. We extend it here to the estimation of multiple state processes given realizations of several kinds of observation processes. The new algorithm is used to successfully track multiple targets in a bearings-only context, whereas a JPDAF diverges. Making use of the ability of the particle filter to mix different types of observations, we then investigate how to join passive and active measurements for improved tracking. Index Terms—Bayesian estimation, bearings-only tracking, Gibbs sampler, multiple receivers, multiple targets tracking,
Tracking Multiple Objects with Particle Filtering
2000
Cited by 78 (4 self)

Abstract:
We address the problem of multitarget tracking encountered in many situations in signal or image processing. We consider stochastic dynamic systems detected by observation processes. The difficulty lies in the fact that the estimation of the states requires the assignment of the observations to the multiple targets. We propose an extension of the classical particle filter where the stochastic vector of assignment is estimated by a Gibbs sampler. This algorithm is used to estimate the trajectories of multiple targets from their noisy bearings, thus showing its ability to solve the data association problem. Moreover, this algorithm is easily extended to multi-receiver observations where the receivers can produce measurements of various kinds with different frequencies.
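One way to picture the assignment step: given predicted target positions, a Gibbs sweep redraws each observation's assignment from its conditional distribution. This scalar-position sketch is my own simplification of the data-association idea, not the paper's bearings-only algorithm:

```python
import numpy as np

def gibbs_assignment_sweep(obs, pred, noise_sd=1.0, rng=None):
    """One Gibbs sweep over the assignment vector: each observation is
    reassigned to a target with probability proportional to the Gaussian
    likelihood of that target's predicted position."""
    rng = np.random.default_rng(3) if rng is None else rng
    assign = np.empty(len(obs), dtype=int)
    for i, z in enumerate(obs):
        logp = -0.5 * ((z - pred) / noise_sd) ** 2   # log-likelihood per target
        p = np.exp(logp - logp.max())
        p /= p.sum()
        assign[i] = rng.choice(len(pred), p=p)       # draw from the conditional
    return assign
```

In the full algorithm such sweeps run inside each particle update, so the assignment uncertainty is integrated into the filter rather than resolved by a hard gate.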
Stability and Uniform Approximation of Nonlinear Filters Using the Hilbert Metric, and Application to Particle Filters
2002
Cited by 60 (5 self)

Abstract:
In this article, we use the approach based on the Hilbert metric to study the asymptotic behavior of the optimal filter, and to prove, as in [9], the uniform convergence of several particle filters, such as the interacting particle filter (IPF) and some other original particle filters. A common assumption used to prove stability results, see e.g. [9, Theorem 2.4], is that the Markov transition kernels are mixing, which implies that the hidden state sequence is ergodic. Our results are obtained under the assumption that the nonnegative kernels describing the evolution of the unnormalized optimal filter, and incorporating simultaneously the Markov transition kernels and the likelihood functions, are mixing. This is a weaker assumption, see Proposition 3.9, which allows us to consider some cases, similar to the case studied in [6], where the hidden state sequence is not ergodic, see Example 3.10. This point of view is further developed by Le Gland and Oudjane in [22] and by Oudjane and Rubenthaler in [28]. Our main contribution is to study also the stability of the optimal filter w.r.t. the model, when the local error is propagated by mixing kernels, and can be estimated in the Hilbert metric, in the total variation norm, or in a weaker distance suitable for random probability distributions. AMS 1991 subject classifications. Primary 93E11, 93E15, 62E25; secondary 60B10, 60J27, 62G07, 62G09, 62L10
Recursive Monte Carlo filters: Algorithms and theoretical analysis
2003
Cited by 42 (0 self)

Abstract:
powerful tool to perform computations in general state space models. We discuss and compare the accept–reject version with the more common sampling importance resampling version of the algorithm. In particular, we show how auxiliary variable methods and stratification can be used in the accept–reject version, and we compare different resampling techniques. In a second part, we show laws of large numbers and a central limit theorem for these Monte Carlo filters by simple induction arguments that need only weak conditions. We also show that, under stronger conditions, the required sample size is independent of the length of the observed series. 1. State space and hidden Markov models. A general state space or hidden Markov model consists of an unobserved state sequence (X_t) and an observation sequence (Y_t) with the following properties. State evolution: X_0, X_1, X_2, ... is a Markov chain with X_0 ∼ a_0(x) dµ(x) and X_t | X_{t−1} = x_{t−1} ∼ a_t(x_{t−1}, x) dµ(x). Generation of observations: conditionally on (X_t), the Y_t's are independent and Y_t depends on X_t only, with Y_t | X_t = x_t ∼ b_t(x_t, y) dν(y). These models occur in a variety of applications. Linear state space models are equivalent to ARMA models (see, e.g., [16]) and have become popular... Received January 2003; revised August 2004. AMS 2000 subject classifications. Primary 62M09; secondary 60G35, 60J22, 65C05. Key words and phrases. State space models, hidden Markov models, filtering and smoothing, particle filters, auxiliary variables, sampling importance resampling, central limit theorem.
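The generic model stated here (state evolution through kernels a_t, observation generation through b_t) can be made concrete with one Gaussian choice; the linear-Gaussian kernels and parameters below are my own example, not the paper's:

```python
import numpy as np

def simulate_hmm(T, rng=None):
    """Simulate the generic state-space model for the concrete choice
       a_t(x', x) = N(x; 0.8 x', 1),  b_t(x, y) = N(y; x, 0.5^2),
    with X_0 ~ N(0, 1). Returns the hidden states and observations."""
    rng = np.random.default_rng(4) if rng is None else rng
    x = rng.normal(0.0, 1.0)                # X_0 ~ a_0
    xs, ys = [], []
    for _ in range(T):
        x = 0.8 * x + rng.normal(0.0, 1.0)  # X_t | X_{t-1} ~ a_t
        y = x + rng.normal(0.0, 0.5)        # Y_t | X_t ~ b_t
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)
```

Only the observation sequence would be available to a filter; the hidden states are returned here so a sketch like this can be checked against the truth.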
Asymptotic properties of the maximum likelihood estimator in autoregressive models with Markov regime
Annals of Statistics, 2004
Cited by 36 (6 self)

Abstract:
An autoregressive process with Markov regime is an autoregressive process for which the regression function at each time point is given by a nonobservable Markov chain. In this paper we consider the asymptotic properties of the maximum likelihood estimator in a possibly nonstationary process of this kind for which the hidden state space is compact but not necessarily finite. Consistency and asymptotic normality are shown to follow from uniform exponential forgetting of the initial distribution for the hidden Markov chain conditional on the observations.
Smooth Particle Filters for Likelihood Evaluation and Maximisation
2002
Cited by 35 (3 self)

Abstract:
In this paper, a method is introduced for approximating the likelihood for the unknown parameters of a state space model. The approximation converges to the true likelihood as the simulation size goes to infinity. In addition, the approximating likelihood is continuous as a function of the unknown parameters under rather general conditions. The approach advocated is fast, robust and avoids many of the pitfalls associated with current techniques based upon importance sampling. We assess the performance of the method by considering a linear state space model, comparing the results with the Kalman filter, which delivers the true likelihood. We also apply the method to a non-Gaussian state space model, the stochastic volatility model, finding that the approach is efficient and effective. Applications to continuous-time finance models are also considered. A result is established which allows the likelihood to be estimated quickly and efficiently using the output from the general auxiliary particle filter.
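The basic construction behind such likelihood approximations can be sketched with a plain bootstrap filter, whose log-averaged incremental weights estimate the log-likelihood. Note that this simple version is discontinuous in the parameter across resampling events, which is exactly the pitfall the paper's smooth construction addresses; the model and names below are my own assumptions:

```python
import numpy as np

def pf_loglik(y, phi, n_particles=500, sigma_x=1.0, sigma_y=1.0, rng=None):
    """Particle-filter log-likelihood estimate for phi in the linear model
       x_t = phi x_{t-1} + N(0, sigma_x^2),  y_t = x_t + N(0, sigma_y^2).
    The product of averaged incremental weights is an unbiased estimator of
    the likelihood p(y_{1:T} | phi); this returns its logarithm."""
    rng = np.random.default_rng(5) if rng is None else rng
    x = rng.normal(0.0, 1.0, n_particles)
    ll = 0.0
    for yt in y:
        x = phi * x + rng.normal(0.0, sigma_x, n_particles)
        logw = (-0.5 * np.log(2.0 * np.pi * sigma_y ** 2)
                - 0.5 * ((yt - x) / sigma_y) ** 2)
        m = logw.max()
        w = np.exp(logw - m)                 # stable weight computation
        ll += m + np.log(w.mean())           # log of the averaged increment
        x = x[rng.choice(n_particles, n_particles, p=w / w.sum())]
    return ll
```

Fixing the random seed across evaluations at different phi (common random numbers) reduces but does not remove the discontinuity; a smooth resampling scheme is needed before standard optimizers can maximize the estimated likelihood reliably.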