A family of algorithms for approximate Bayesian inference
, 2001
Abstract

Cited by 358 (11 self)
One of the major obstacles to using Bayesian methods for pattern recognition has been its computational expense. This thesis presents an approximation technique that can perform Bayesian inference faster and more accurately than previously possible. This method, "Expectation Propagation," unifies and generalizes two previous techniques: assumed-density filtering, an extension of the Kalman filter, and loopy belief propagation, an extension of belief propagation in Bayesian networks. The unification shows how both of these algorithms can be viewed as approximating the true posterior distribution with a simpler distribution, which is close in the sense of KL-divergence. Expectation Propagation exploits the best of both algorithms: the generality of assumed-density filtering and the accuracy of loopy belief propagation. Loopy belief propagation, because it propagates exact belief states, is useful for limited types of belief networks, such as purely discrete networks. Expectation Propagati...
A Survey of Convergence Results on Particle Filtering Methods for Practitioners
, 2002
Abstract

Cited by 223 (9 self)
Optimal filtering problems are ubiquitous in signal processing and related fields. Except for a restricted class of models, the optimal filter does not admit a closed-form expression. Particle filtering methods are a set of flexible and powerful sequential Monte Carlo methods designed to solve the optimal filtering problem numerically. The posterior distribution of the state is approximated by a large set of Dirac delta masses (samples/particles) that evolve randomly in time according to the dynamics of the model and the observations. The particles are interacting; thus, classical limit theorems relying on statistically independent samples do not apply. In this paper, our aim is to present a survey of recent convergence results on this class of methods to make them accessible to practitioners.
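The propagate/reweight/resample cycle this abstract describes can be sketched as a minimal bootstrap particle filter. The scalar linear-Gaussian model, the function names, and all parameter values below are illustrative assumptions for the sketch, not taken from the paper:

```python
import math
import random

random.seed(0)

# Toy model (an assumption for this sketch):
#   x_t = 0.9 * x_{t-1} + v_t,  v_t ~ N(0, 1)   (state dynamics)
#   y_t = x_t + w_t,            w_t ~ N(0, 1)   (observation)

def bootstrap_pf(ys, n_particles=2000):
    """Return the particle estimate of E[x_t | y_1..y_t] at each step."""
    xs = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in ys:
        # 1. Propagate each particle through the state dynamics.
        xs = [0.9 * x + random.gauss(0.0, 1.0) for x in xs]
        # 2. Reweight by the observation likelihood N(y; x, 1).
        ws = [math.exp(-0.5 * (y - x) ** 2) for x in xs]
        total = sum(ws)
        ws = [w / total for w in ws]
        # Weighted particles approximate the filtering posterior.
        means.append(sum(w * x for w, x in zip(ws, xs)))
        # 3. Multinomial resampling. This step makes the particles
        # interact, which is why i.i.d. limit theorems do not apply.
        xs = random.choices(xs, weights=ws, k=n_particles)
    return means

# Simulate a short trajectory and filter it.
truth, ys = [], []
x = 0.0
for _ in range(50):
    x = 0.9 * x + random.gauss(0.0, 1.0)
    truth.append(x)
    ys.append(x + random.gauss(0.0, 1.0))

est = bootstrap_pf(ys)
```

With enough particles the posterior-mean estimates track the hidden state to within roughly the observation noise; the convergence results surveyed in the paper make that intuition precise.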
Convergence of Sequential Monte Carlo Methods
 SEQUENTIAL MONTE CARLO METHODS IN PRACTICE
, 2000
Abstract

Cited by 218 (15 self)
Bayesian estimation problems where the posterior distribution evolves over time through the accumulation of data arise in many applications in statistics and related fields. Recently, a large number of algorithms and applications based on sequential Monte Carlo methods (also known as particle filtering methods) have appeared in the literature to solve this class of problems; see (Doucet, de Freitas & Gordon, 2001) for a survey. However, few of these methods have been proved to converge rigorously. The purpose of this paper is to address this issue. We present a general sequential Monte Carlo (SMC) method which includes most of the important features present in current SMC methods. This method generalizes and encompasses many recent algorithms. Under mild regularity conditions, we obtain rigorous convergence results for this general SMC method and therefore give theoretical backing for the validity of all the algorithms that can be obtained as particular cases of it.
Particle Filters for State Estimation of Jump Markov Linear Systems
, 2001
Abstract

Cited by 171 (14 self)
Jump Markov linear systems (JMLS) are linear systems whose parameters evolve with time according to a finite state Markov chain. In this paper, our aim is to recursively compute optimal state estimates for this class of systems. We present efficient simulation-based algorithms called particle filters to solve the optimal filtering problem as well as the optimal fixed-lag smoothing problem. Our algorithms combine sequential importance sampling, a selection scheme, and Markov chain Monte Carlo methods. They use several variance reduction methods to make the most of the statistical structure of JMLS. Computer ...
Monte Carlo smoothing for nonlinear time series
 JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION
, 2004
Abstract

Cited by 146 (17 self)
We develop methods for performing smoothing computations in general state-space models. The methods rely on a particle representation of the filtering distributions, and their evolution through time using sequential importance sampling and resampling ideas. In particular, novel techniques are presented for generation of sample realizations of historical state sequences. This is carried out in a forward-filtering backward-smoothing procedure which can be viewed as the nonlinear, non-Gaussian counterpart of standard Kalman-filter-based simulation smoothers in the linear Gaussian case. Convergence in the mean-squared error sense of the smoothed trajectories is proved, showing the validity of our proposed method. The methods are tested in a substantial application for the processing of speech signals represented by a time-varying autoregression and parameterised in terms of time-varying partial correlation coefficients, comparing the results of our algorithm with those from a simple smoother based upon the filtered trajectories.
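The forward-filtering backward-smoothing recipe described here can be sketched on a toy scalar model: run a particle filter forward while storing every particle cloud and weight vector, then sample state trajectories backward, reweighting each time-t cloud by the transition density to the already-sampled x_{t+1}. All model parameters and names below are assumptions for illustration, not the paper's exact algorithm:

```python
import math
import random

random.seed(1)

# Assumed toy model: x_t = A x_{t-1} + N(0, QV),  y_t = x_t + N(0, RV)
A, QV, RV = 0.9, 1.0, 1.0
N = 1000

def log_gauss(x, mu, var):
    return -0.5 * ((x - mu) ** 2 / var + math.log(2 * math.pi * var))

def forward_filter(ys):
    """Bootstrap filter that stores particles and weights at every step."""
    parts, weights = [], []
    xs = [random.gauss(0.0, 1.0) for _ in range(N)]
    for y in ys:
        xs = [A * x + random.gauss(0.0, math.sqrt(QV)) for x in xs]
        ws = [math.exp(log_gauss(y, x, RV)) for x in xs]
        s = sum(ws)
        ws = [w / s for w in ws]
        parts.append(xs)
        weights.append(ws)
        xs = random.choices(xs, weights=ws, k=N)  # resample
    return parts, weights

def backward_sample(parts, weights):
    """Draw one trajectory from the joint smoothing distribution."""
    T = len(parts)
    # Final state: sample from the last filtering distribution.
    traj = [random.choices(parts[-1], weights=weights[-1], k=1)[0]]
    for t in range(T - 2, -1, -1):
        x_next = traj[-1]
        # Reweight time-t particles by the transition density to x_{t+1}.
        bw = [w * math.exp(log_gauss(x_next, A * x, QV))
              for w, x in zip(weights[t], parts[t])]
        traj.append(random.choices(parts[t], weights=bw, k=1)[0])
    traj.reverse()
    return traj

# Simulate data, filter forward, then average backward-sampled trajectories.
x, truth, ys = 0.0, [], []
for _ in range(30):
    x = A * x + random.gauss(0.0, math.sqrt(QV))
    truth.append(x)
    ys.append(x + random.gauss(0.0, math.sqrt(RV)))

parts, weights = forward_filter(ys)
trajs = [backward_sample(parts, weights) for _ in range(20)]
smoothed = [sum(tr[t] for tr in trajs) / len(trajs) for t in range(30)]
```

Averaging several sampled trajectories gives smoothed state estimates that condition on the whole observation record, in contrast to the filtered estimates, which condition only on the past.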
Particle Filters for State Space Models With the Presence of Static Parameters
, 2002
Abstract

Cited by 62 (0 self)
In this paper, particle filters for dynamic state space models handling unknown static parameters are discussed. The approach is based on marginalizing the static parameters out of the posterior distribution such that only the state vector needs to be considered. Such a marginalization can always be applied. However, real-time applications are only possible when the distribution of the unknown parameters given both observations and the hidden state vector depends on some low-dimensional sufficient statistics. Such sufficient statistics are present in many of the commonly used state space models. Marginalizing the static parameters avoids the problem of impoverishment that typically occurs when static parameters are included as part of the state vector. The filters are tested on several different models, with promising results.
Sequential MCMC for Bayesian model selection
 IEEE Higher Order Statistics Workshop
, 1999
Abstract

Cited by 47 (17 self)
In this paper, we address the problem of sequential Bayesian model selection. This problem does not usually admit any closed-form analytical solution. We propose here an original sequential simulation-based method to solve the associated Bayesian computational problems. This method combines sequential importance sampling, a resampling procedure and reversible jump MCMC moves. We describe a generic algorithm and then apply it to the problem of sequential Bayesian model order estimation of autoregressive (AR) time series observed in additive noise.
Improvement Strategies for Monte Carlo Particle Filters
 SEQUENTIAL MONTE CARLO METHODS IN PRACTICE
, 2000
Sequential Monte Carlo Inference of Internal Delays in Nonstationary Communication Networks
, 2001
Abstract

Cited by 34 (8 self)
Online, spatially localized information about internal network performance can greatly assist dynamic routing algorithms and traffic transmission protocols. However, it is impractical to measure network traffic at all points in the network. A promising alternative is to measure only at the edge of the network and infer internal behavior from these measurements. In this paper we concentrate on the estimation and localization of internal delays based on end-to-end delay measurements from a source to receivers. We propose a sequential Monte Carlo (SMC) procedure capable of tracking nonstationary network behavior and estimating time-varying, internal delay characteristics. Simulation experiments demonstrate the performance of the SMC approach.
1 Introduction
In large-scale networks, end-systems cannot rely on the network itself to cooperate in characterizing its own behavior. This has prompted several groups to investigate methods for inferring internal network behavior based on...