Results 1-10 of 34
An Improved Particle Filter for Nonlinear Problems
, 2004
"... The Kalman filter provides an effective solution to the linearGaussian filtering problem. However, where there is nonlinearity, either in the model specification or the observation process, other methods are required. We consider methods known generically as particle filters, which include the c ..."
Abstract

Cited by 156 (8 self)
The Kalman filter provides an effective solution to the linear-Gaussian filtering problem. However, where there is nonlinearity, either in the model specification or the observation process, other methods are required. We consider methods known generically as particle filters, which include the condensation algorithm and the Bayesian bootstrap or sampling importance resampling (SIR) filter. These filters ...
Gaussian particle filtering
 IEEE Transactions on Signal Processing
, 2003
"... Abstract—Sequential Bayesian estimation for nonlinear dynamic statespace models involves recursive estimation of filtering and predictive distributions of unobserved time varying signals based on noisy observations. This paper introduces a new filter called the Gaussian particle filter1. It is base ..."
Abstract

Cited by 50 (3 self)
Abstract—Sequential Bayesian estimation for nonlinear dynamic state-space models involves recursive estimation of filtering and predictive distributions of unobserved time-varying signals based on noisy observations. This paper introduces a new filter called the Gaussian particle filter. It is based on the particle filtering concept, and it approximates the posterior distributions by single Gaussians, similar to Gaussian filters like the extended Kalman filter and its variants. It is shown that under the Gaussianity assumption, the Gaussian particle filter is asymptotically optimal in the number of particles and, hence, has much-improved performance and versatility over other Gaussian filters, especially when nontrivial nonlinearities are present. Simulation results are presented to demonstrate the versatility and improved performance of the Gaussian particle filter over conventional Gaussian filters and the lower complexity than known particle filters. The use of the Gaussian particle filter as a building block of more complex filters is addressed in a companion paper. Index Terms—Dynamic state-space models, extended Kalman filter, Gaussian mixture, Gaussian mixture filter, Gaussian particle filter, Gaussian sum filter, Gaussian sum particle filter, Monte Carlo filters, nonlinear non-Gaussian stochastic systems, particle filters, sequential Bayesian estimation, sequential sampling methods, unscented Kalman filter.
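The core idea described above (propagate particles through the dynamics, weight them by the measurement likelihood, and collapse the weighted cloud back to a single Gaussian) can be sketched as follows. This is a minimal scalar-state illustration under assumed model functions f, h and noise variances Q, R, not the authors' implementation:

```python
import numpy as np

def gpf_step(mu, cov, f, h, Q, R, y, n=1000, rng=np.random.default_rng(0)):
    """One Gaussian-particle-filter step for a scalar state.

    f, h are caller-supplied state-transition and observation functions;
    Q, R are process and measurement noise variances (all hypothetical
    names for this sketch)."""
    # Time update: sample the previous posterior Gaussian and propagate
    x = rng.normal(mu, np.sqrt(cov), n)
    x_pred = f(x) + rng.normal(0.0, np.sqrt(Q), n)
    mu_p, cov_p = x_pred.mean(), x_pred.var()
    # Measurement update: sample the predictive Gaussian, weight by likelihood
    x = rng.normal(mu_p, np.sqrt(cov_p), n)
    w = np.exp(-0.5 * (y - h(x)) ** 2 / R)
    w /= w.sum()
    # Collapse the weighted particle cloud to a single Gaussian
    mu_post = np.sum(w * x)
    cov_post = np.sum(w * (x - mu_post) ** 2)
    return mu_post, cov_post
```

On a linear model this recursion approaches the Kalman update as the particle count grows, which is the asymptotic-optimality property the abstract refers to.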
Recurrent neural networks and robust time series prediction
 IEEE TRANSACTIONS ON NEURAL NETWORKS
, 1994
"... We propose a robust learning algorithm and apply it to recurrent neural networks. This algorithm is based on filtering outliers from the data and then estimating parameters from the filtered data. The filtering removes outliers from both the target function and the inputs of the neural network. The ..."
Abstract

Cited by 48 (2 self)
We propose a robust learning algorithm and apply it to recurrent neural networks. This algorithm is based on filtering outliers from the data and then estimating parameters from the filtered data. The filtering removes outliers from both the target function and the inputs of the neural network. The filtering is soft in that some outliers are neither completely rejected nor accepted. To show the need for robust recurrent networks, we compare the predictive ability of least squares estimated recurrent networks on synthetic data and on the Puget Power Electric Demand time series. These investigations result in a class of recurrent neural networks, NARMA(p, q), which show advantages over feedforward neural networks for time series with a moving average component. Conventional least squares methods of fitting NARMA(p, q) neural network models are shown to suffer a lack of robustness towards outliers. This sensitivity to outliers is demonstrated on both the synthetic and real data sets. Filtering the Puget Power Electric Demand time series is shown to automatically remove the outliers due to holidays. Neural networks trained on filtered data are then shown to give better predictions than neural networks trained on unfiltered time series.
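The "soft" filtering idea, down-weighting suspect points rather than rejecting them outright, can be illustrated with a Huber-style weighting scheme. The constant c and the MAD scale estimate here are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

def soft_filter(y, y_hat, c=2.0):
    """Soft outlier weights: points near the fit get weight 1, distant
    points are smoothly down-weighted rather than dropped (Huber-style
    sketch; c and the MAD-based scale are illustrative assumptions)."""
    r = y - y_hat
    # Robust sigma estimate via the median absolute deviation
    scale = 1.4826 * np.median(np.abs(r - np.median(r)))
    z = np.abs(r) / max(scale, 1e-12)
    # Soft rejection: the weight shrinks with distance but never hits zero
    return np.where(z <= c, 1.0, c / z)
```

Fitting with these weights (e.g. weighted least squares) then realizes the filter-then-estimate loop the abstract describes.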
Sigma-Point Kalman Filters for Probabilistic Inference in Dynamic State-Space Models
 In Proceedings of the Workshop on Advances in Machine Learning
, 2003
"... Probabilistic inference is the problem of estimating the hidden states of a system in an optimal and consistent fashion given a set of noisy or incomplete observations. The optimal solution to this problem is given by the recursive Bayesian estimation algorithm which recursively updates the post ..."
Abstract

Cited by 45 (5 self)
Probabilistic inference is the problem of estimating the hidden states of a system in an optimal and consistent fashion given a set of noisy or incomplete observations. The optimal solution to this problem is given by the recursive Bayesian estimation algorithm which recursively updates the posterior density of the system state as new observations arrive online.
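The recursion referred to here is the standard two-step Bayesian filter, alternating a prediction step with a measurement update (standard notation, not specific to this paper):

```latex
p(x_t \mid y_{1:t-1}) = \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid y_{1:t-1})\, dx_{t-1}
\qquad
p(x_t \mid y_{1:t}) \propto p(y_t \mid x_t)\, p(x_t \mid y_{1:t-1})
```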
Building Robust Simulation-based Filters for Evolving Data Sets
, 1999
"... this paper we will focus on an alternative class of filters in which theoretical distributions on the state space are approximated by simulated random measures. The first goal in filter design is to produce a compact description of the posterior distribution of the state given all the observations a ..."
Abstract

Cited by 26 (0 self)
In this paper we will focus on an alternative class of filters in which theoretical distributions on the state space are approximated by simulated random measures. The first goal in filter design is to produce a compact description of the posterior distribution of the state given all the observations available so far. A basic requirement is that this description should be readily updated as new data become available. A mechanism therefore has to be devised which enables the approximating random measure to evolve and adapt.

3 SIMULATION-BASED FILTERS

Simulation-based filters have a long history in the engineering literature, dating back to the work of Handschin and Mayne (1969); Handschin (1970); Akashi and Kumamoto (1977). Doucet (1998) provides a comprehensive review of the material. Since the Kalman filter is essentially a Bayesian update formula, the theory of Bayesian time series analysis is directly relevant (West and Harrison, 1997). We take as our starting point the filter developed by Gordon (1993); Gordon et al. (1993). The essence of the method is contained in a paper by Rubin (1988), who proposed the Sampling Importance Resampling (SIR) algorithm for obtaining samples from a complex posterior distribution without recourse to MCMC. In the simple nondynamic case described by Rubin (1988), the method consists of sampling n observations from the prior distribution, attaching weights to the sampled points according to their likelihood, and then sampling with replacement from this weighted discrete distribution. As n → ∞, the resulting set of values then approximates a sample from the required posterior (Smith and Gelfand, 1992). In the dynamic version, proposed by Gordon et al. (1993), the SIR algorithm is applied repeatedly as new data are acquired. One can think of...
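The Rubin-style SIR procedure spelled out above (sample from the prior, weight by likelihood, resample with replacement) translates directly into code; prior_sample and log_lik are hypothetical caller-supplied functions for this sketch:

```python
import numpy as np

def sir(prior_sample, log_lik, n=10000, rng=np.random.default_rng(1)):
    """Sampling importance resampling for the nondynamic case described
    above: prior_sample(n, rng) draws n points from the prior, log_lik(x)
    evaluates the log-likelihood at each point (assumed interfaces)."""
    x = prior_sample(n, rng)
    lw = log_lik(x)
    w = np.exp(lw - lw.max())  # subtract max for numerical stability
    w /= w.sum()
    # Resample with replacement from the weighted discrete distribution
    return rng.choice(x, size=n, replace=True, p=w)
```

With a N(0, 1) prior and a N(x, 1) likelihood for an observation y = 2, the resampled points concentrate around the exact posterior N(1, 0.5), illustrating the n → ∞ convergence cited above.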
Adaptive joint detection and decoding in flat-fading channels via mixture Kalman filtering
 IEEE Trans. Inf. Theory
, 2000
"... Abstract—A novel adaptive Bayesian receiver for signal detection and decoding in fading channels with known channel statistics is developed; it is based on the sequential Monte Carlo methodology that recently emerged in the field of statistics. The basic idea is to treat the transmitted signals as “ ..."
Abstract

Cited by 25 (5 self)
Abstract—A novel adaptive Bayesian receiver for signal detection and decoding in fading channels with known channel statistics is developed; it is based on the sequential Monte Carlo methodology that recently emerged in the field of statistics. The basic idea is to treat the transmitted signals as "missing data" and to sequentially impute multiple samples of them based on the observed signals. The imputed signal sequences, together with their importance weights, provide a way to approximate the Bayesian estimate of the transmitted signals and the channel states. Adaptive receiver algorithms for both uncoded and convolutionally coded systems are developed. The proposed techniques can easily handle the non-Gaussian ambient channel noise. It is shown through simulations that the proposed sequential Monte Carlo receivers achieve near-bound performance in fading channels for both uncoded and coded systems, without the use of any training/pilot symbols or decision feedback. Moreover, the proposed receiver structure exhibits massive parallelism and is ideally suited for high-speed parallel implementation using the very large scale integration (VLSI) systolic array technology. Index Terms—Adaptive decoding, adaptive detection, coded system, flat-fading channel, mixture Kalman filter, non-Gaussian noise, sequential Monte Carlo methods.
The Horseshoe Estimator for Sparse Signals
, 2008
"... This paper proposes a new approach to sparsity called the horseshoe estimator. The horseshoe is a close cousin of other widely used Bayes rules arising from, for example, doubleexponential and Cauchy priors, in that it is a member of the same family of multivariate scale mixtures of normals. But th ..."
Abstract

Cited by 21 (6 self)
This paper proposes a new approach to sparsity called the horseshoe estimator. The horseshoe is a close cousin of other widely used Bayes rules arising from, for example, double-exponential and Cauchy priors, in that it is a member of the same family of multivariate scale mixtures of normals. But the horseshoe enjoys a number of advantages over existing approaches, including its robustness, its adaptivity to different sparsity patterns, and its analytical tractability. We prove two theorems that formally characterize both the horseshoe's adeptness at handling large outlying signals, and its super-efficient rate of convergence to the correct estimate of the sampling density in sparse situations. Finally, using a combination of real and simulated data, we show that the horseshoe estimator corresponds quite closely to the answers one would get by pursuing a full Bayesian model-averaging approach using a discrete mixture prior to model signals and noise.
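Since the horseshoe is a scale mixture of normals with half-Cauchy local scales, prior draws can be generated as below; the global scale tau is held fixed here purely for illustration:

```python
import numpy as np

def sample_horseshoe(p, tau=1.0, rng=np.random.default_rng(2)):
    """Draw p coefficients from the horseshoe prior via its scale-mixture
    representation: beta_i ~ N(0, lambda_i^2 * tau^2), with lambda_i
    half-Cauchy(0, 1) local scales (tau fixed for this sketch)."""
    lam = np.abs(rng.standard_cauchy(p))  # half-Cauchy local scales
    return rng.normal(0.0, lam * tau)
```

The draws show the two features the abstract highlights: heavy Cauchy-like tails (robustness to large signals) and a pronounced spike of mass near zero (adaptivity to sparsity).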
A Survey of Maneuvering Target Tracking, Part V: Multiple-Model Methods
, 2003
"... ... without addressing the socalled measurementorigin uncertainty. Part I and Part II deal with target motion models. Part III covers measurement models and associated techniques. Part IV is concerned with tracking techniques that are based on decisions regarding target maneuvers. This part surv ..."
Abstract

Cited by 20 (0 self)
... without addressing the so-called measurement-origin uncertainty. Part I and Part II deal with target motion models. Part III covers measurement models and associated techniques. Part IV is concerned with tracking techniques that are based on decisions regarding target maneuvers. This part surveys the multiple-model methods, the use of multiple models (and filters) simultaneously, which is the prevailing approach to maneuvering target tracking in recent years. The survey is presented in a structured way, centered around three generations of algorithms: autonomous, cooperating, and variable structure. It emphasizes the underpinning of each algorithm and covers various issues in algorithm design, application, and performance.
Architectures for Efficient Implementation of Particle Filters
, 2004
"... Particle filters are sequential Monte Carlo methods that are used in numerous problems where timevarying signals must be presented in real time and where the objective is to estimate various unknowns of the signal and/or detect events described by the signals. The standard solutions of such proble ..."
Abstract

Cited by 18 (0 self)
Particle filters are sequential Monte Carlo methods that are used in numerous problems where time-varying signals must be presented in real time and where the objective is to estimate various unknowns of the signal and/or detect events described by the signals. The standard solutions of such problems in many applications are based on the Kalman filters or extended Kalman filters. In situations when the problems are nonlinear or the noise that distorts the signals is non-Gaussian, the Kalman filters provide a solution that may be far from optimal. Particle filters are an intriguing alternative to the Kalman filters due to their excellent performance in very difficult problems including communications, signal processing, navigation, and computer vision. Hence, particle filters have been the focus of wide research recently and immense literature can be found on their theory. Most of these works recognize the complexity and computational intensity of these filters, but there has been no effort directed toward the implementation of these filters in hardware. The objective of this dissertation is to develop, design, and build efficient hardware for particle filters, and thereby bring them closer to practical applications. The fact that particle filters outperform most of the traditional filtering methods in many complex practical scenarios, coupled with the challenges related to decreasing their computational complexity and improving real-time performance, makes this work worthwhile. The main ...
Shrink Globally, Act Locally: Sparse Bayesian Regularization and Prediction
, 2010
"... We use Lévy processes to generate joint prior distributions for a location parameter β = (β1,..., βp) as p grows large. This approach, which generalizes normal scalemixture priors to an infinitedimensional setting, has a number of connections with mathematical finance and Bayesian nonparametrics. ..."
Abstract

Cited by 16 (5 self)
We use Lévy processes to generate joint prior distributions for a location parameter β = (β1,..., βp) as p grows large. This approach, which generalizes normal scale-mixture priors to an infinite-dimensional setting, has a number of connections with mathematical finance and Bayesian nonparametrics. We argue that it provides an intuitive framework for generating new regularization penalties and shrinkage rules; for performing asymptotic analysis on existing models; and for simplifying proofs of some classic results on normal scale mixtures.