Results 1–10 of 44
Unscented Filtering and Nonlinear Estimation
 Proceedings of the IEEE
, 2004
Abstract
Cited by 253 (2 self)
The extended Kalman filter (EKF) is probably the most widely used estimation algorithm for nonlinear systems. However, more than 35 years of experience in the estimation community has shown that it is difficult to implement, difficult to tune, and only reliable for systems that are almost linear on the time scale of the updates. Many of these difficulties arise from its use of linearization. To overcome this limitation, the unscented transformation (UT) was developed as a method to propagate mean and covariance information through nonlinear transformations. It is more accurate, easier to implement, and uses the same order of calculations as linearization. This paper reviews the motivation, development, use, and implications of the UT. Keywords—Estimation, Kalman filtering, nonlinear systems, target tracking.
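The sigma-point propagation this abstract describes can be sketched in a few lines. This is a minimal version of the basic unscented transformation, not the paper's full filter; the function name and the `kappa` default are illustrative choices.

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Propagate (mean, cov) through a nonlinearity f via sigma points.

    A minimal sketch of the basic UT; full implementations add the
    scaling parameters (alpha, beta) and numerical safeguards.
    """
    n = len(mean)
    # Columns of S satisfy sum_i S[:, i] S[:, i]^T = (n + kappa) * cov.
    S = np.linalg.cholesky((n + kappa) * cov)
    # 2n + 1 sigma points: the mean plus symmetric offsets.
    points = [mean] + [mean + S[:, i] for i in range(n)] \
                    + [mean - S[:, i] for i in range(n)]
    weights = [kappa / (n + kappa)] + [1.0 / (2 * (n + kappa))] * (2 * n)
    ys = [f(p) for p in points]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_cov = sum(w * np.outer(y - y_mean, y - y_mean)
                for w, y in zip(weights, ys))
    return y_mean, y_cov
```

For a linear map the sigma points reproduce the linearized result exactly; the appeal is that the same code handles a nonlinear `f` with no Jacobians, which is the "same order of calculations as linearization" claim above.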
Thin Junction Tree Filters for Simultaneous Localization and Mapping
 In Intl. Joint Conf. on Artificial Intelligence (IJCAI)
, 2003
Abstract
Cited by 126 (1 self)
Simultaneous Localization and Mapping (SLAM) is a fundamental problem in mobile robotics: while a robot navigates in an unknown environment, it must incrementally build a map of its surroundings and localize itself within that map. Traditional approaches to the problem are based upon Kalman filters, but suffer from complexity issues: the size of the belief state and the time complexity of the filtering operation grow quadratically in the size of the map. This paper presents a filtering technique that maintains a tractable approximation of the filtered belief state as a thin junction tree. The junction tree grows under measurement and motion updates and is periodically "thinned" to remain tractable via efficient maximum likelihood projections. When applied to the SLAM problem, these thin junction tree filters have a linear-space belief state representation, and use a linear-time filtering operation. Further approximation can yield a constant-time filtering operation, at the expense of delaying the incorporation of observations into the majority of the map. Experiments on a suite of SLAM problems validate the approach.
PAMPAS: Real-Valued Graphical Models for Computer Vision
, 2003
Abstract
Cited by 91 (3 self)
Probabilistic models have been adopted for many computer vision applications; however, inference in high-dimensional spaces remains problematic. As the state-space of a model grows, the dependencies between the dimensions lead to an exponential growth in computation when performing inference. Many common computer vision problems naturally map onto the graphical model framework; the representation is a graph where each node contains a portion of the state-space and there is an edge between two nodes only if they are not independent conditional on the other nodes in the graph. When this graph is sparsely connected, belief propagation algorithms can turn an exponential inference computation into one which is linear in the size of the graph. However, belief propagation is only applicable when the variables in the nodes are discrete-valued or jointly represented by a single multivariate Gaussian distribution, and this rules out many computer vision applications.
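For discrete-valued nodes, the linear-time message passing the abstract refers to is easiest to see on the simplest sparse graph, a chain. This is a toy illustration of the sum-product recursion, not the paper's real-valued algorithm:

```python
import numpy as np

def chain_marginals(unaries, pairwise):
    """Sum-product message passing on a chain of discrete variables.

    unaries:  list of length-K node potentials
    pairwise: list of K-by-K edge potentials, one per consecutive pair
    Cost is linear in chain length, versus K**n for joint enumeration.
    """
    n = len(unaries)
    # Forward messages: fwd[i] summarizes everything left of node i.
    fwd = [np.ones_like(unaries[0])]
    for i in range(n - 1):
        m = (unaries[i] * fwd[i]) @ pairwise[i]
        fwd.append(m / m.sum())
    # Backward messages: bwd[i] summarizes everything right of node i.
    bwd = [None] * n
    bwd[-1] = np.ones_like(unaries[0])
    for i in range(n - 2, -1, -1):
        m = pairwise[i] @ (unaries[i + 1] * bwd[i + 1])
        bwd[i] = m / m.sum()
    # Node marginal: local potential times both incoming messages.
    out = []
    for u, fm, bm in zip(unaries, fwd, bwd):
        p = u * fm * bm
        out.append(p / p.sum())
    return out
```

On a tree the same two-sweep scheme still works; it is exactly this discrete (or jointly-Gaussian) restriction that PAMPAS relaxes for real-valued vision problems.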
A data-driven approach to quantifying natural human motion
 ACM Trans. Graph
, 2005
Abstract
Cited by 51 (4 self)
Figure 1: Examples from our test set of motions. The left two images are natural (motion capture data). The two images to the right are unnatural (badly edited and incompletely cleaned motion). Joints that are marked in red-yellow were detected as having unnatural motion. Frames for these images were selected by the method presented in [Assa et al. 2005]. In this paper, we investigate whether it is possible to develop a measure that quantifies the naturalness of human motion (as defined by a large database). Such a measure might prove useful in verifying that a motion editing operation had not destroyed the naturalness of a motion capture clip or that a synthetic motion transition was within the space of those seen in natural human motion. We explore the performance of mixture of Gaussians (MoG), hidden Markov models (HMM), and switching linear dynamic systems (SLDS) on this problem. We use each of these statistical models alone and as part of an ensemble of smaller statistical models. We also implement a Naive Bayes (NB) model for a baseline comparison. We test these techniques on motion capture data held out from a database, keyframed motions, edited motions, motions with noise added, and synthetic motion transitions. We present the results as receiver operating characteristic (ROC) curves and compare the results to the judgments made by subjects in a user study.
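The ROC evaluation mentioned above is mechanical to reproduce: sweep a threshold over the models' naturalness scores and trace out (false-positive rate, true-positive rate) pairs. A minimal sketch, with the convention that label 1 marks the positive class an assumption of this illustration:

```python
import numpy as np

def roc_points(scores, labels):
    """(FPR, TPR) pairs obtained by thresholding at each score.

    labels: 1 for the positive class, 0 otherwise. Assumes distinct
    scores; tied scores would need grouping, omitted for brevity.
    """
    order = np.argsort(-np.asarray(scores, dtype=float))
    y = np.asarray(labels)[order]
    tpr = np.cumsum(y) / y.sum()                 # recall at each cut
    fpr = np.cumsum(1 - y) / (len(y) - y.sum())  # false alarms at each cut
    return np.concatenate([[0.0], fpr]), np.concatenate([[0.0], tpr])

def auc(fpr, tpr):
    # Trapezoidal area under the ROC curve.
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))
```

A perfect ranking gives area 1.0 and an inverted ranking 0.0, which is why a single AUC number is a convenient summary when comparing MoG, HMM, and SLDS detectors.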
Expectation correction for smoothed inference in switching linear dynamical systems
 Journal of Machine Learning Research
Abstract
Cited by 23 (7 self)
We introduce a method for approximate smoothed inference in a class of switching linear dynamical systems, based on a novel form of Gaussian Sum smoother. This class includes the switching Kalman ‘Filter’ and the more general case of switch transitions dependent on the continuous latent state. The method improves on the standard Kim smoothing approach by dispensing with one of the key approximations, thus making fuller use of the available future information. Whilst the central assumption required is projection to a mixture of Gaussians, we show that an additional conditional independence assumption results in a simpler but accurate alternative. Our method consists of a single Forward and Backward Pass and is reminiscent of the standard smoothing ‘correction’ recursions in the simpler linear dynamical system. The method is numerically stable and compares favourably against alternative approximations, both in cases where a single mixture component provides a good posterior approximation, and where a multimodal approximation is required.
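The "correction" recursion in the simpler linear dynamical system that the abstract alludes to is the Rauch-Tung-Striebel smoother. A minimal sketch of the standard textbook form, not the paper's switching-model method:

```python
import numpy as np

def kalman_filter(ys, A, C, Q, R, m0, P0):
    """Forward pass: filtered and one-step-predicted moments."""
    mf, Pf, mp, Pp = [], [], [], []
    m, P = m0, P0
    for y in ys:
        # Predict one step ahead.
        m_pred = A @ m
        P_pred = A @ P @ A.T + Q
        mp.append(m_pred); Pp.append(P_pred)
        # Condition on the new observation.
        S = C @ P_pred @ C.T + R
        K = P_pred @ C.T @ np.linalg.inv(S)
        m = m_pred + K @ (y - C @ m_pred)
        P = P_pred - K @ C @ P_pred
        mf.append(m); Pf.append(P)
    return mf, Pf, mp, Pp

def rts_smoother(mf, Pf, mp, Pp, A):
    """Backward 'correction' recursion (Rauch-Tung-Striebel)."""
    T = len(mf)
    ms, Ps = [None] * T, [None] * T
    ms[-1], Ps[-1] = mf[-1], Pf[-1]
    for t in range(T - 2, -1, -1):
        G = Pf[t] @ A.T @ np.linalg.inv(Pp[t + 1])   # smoother gain
        ms[t] = mf[t] + G @ (ms[t + 1] - mp[t + 1])
        Ps[t] = Pf[t] + G @ (Ps[t + 1] - Pp[t + 1]) @ G.T
    return ms, Ps
```

The Expectation Correction smoother generalizes this backward pass to mixtures of Gaussians over switch settings; in the single-component linear case it reduces to exactly these recursions.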
Extracting dynamical structure embedded in neural activity
 in Advances in Neural Information Processing Systems 18
, 2006
Abstract
Cited by 20 (8 self)
Spiking activity from neurophysiological experiments often exhibits dynamics beyond that driven by external stimulation, presumably reflecting the extensive recurrence of neural circuitry. Characterizing these dynamics may reveal important features of neural computation, particularly during internally-driven cognitive operations. For example, the activity of premotor cortex (PMd) neurons during an instructed delay period separating movement-target specification and a movement-initiation cue is believed to be involved in motor planning. We show that the dynamics underlying this activity can be captured by a low-dimensional nonlinear dynamical systems model, with underlying recurrent structure and stochastic point-process output. We present and validate latent variable methods that simultaneously estimate the system parameters and the trial-by-trial dynamical trajectories. These methods are applied to characterize the dynamics in PMd data recorded from a chronically-implanted 96-electrode array while monkeys perform delayed-reach tasks.
Approximating probability density functions with mixtures of truncated exponentials
 Proceedings of the Tenth Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU-04)
, 2004
Abstract
Cited by 16 (9 self)
Mixtures of truncated exponentials (MTE) potentials are an alternative to discretization and Monte Carlo methods for solving hybrid Bayesian networks. Any probability density function (PDF) can be approximated by an MTE potential, which can always be marginalized in closed form. This allows propagation to be done exactly using the Shenoy-Shafer architecture for computing marginals, with no restrictions on the construction of a join tree. This paper presents MTE potentials that approximate standard PDFs and applications of these potentials for solving inference problems in hybrid Bayesian networks. These approximations will extend the types of inference problems that can be modeled with Bayesian networks, as demonstrated using three examples.
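The closed-form marginalization property the abstract relies on is easy to see for a one-dimensional MTE potential, a sum of terms a_i·exp(b_i·x) on an interval. The coefficients below are made up for illustration, not a fitted approximation from the paper:

```python
import numpy as np

def mte_eval(x, coeffs):
    """Evaluate an MTE potential: sum of a * exp(b * x) over (a, b) pairs."""
    return sum(a * np.exp(b * x) for a, b in coeffs)

def mte_integral(lo, hi, coeffs):
    """Integral of the MTE over [lo, hi], term by term in closed form.

    This is the property that lets hybrid Bayesian networks marginalize
    MTE potentials exactly, with no numerical integration.
    """
    total = 0.0
    for a, b in coeffs:
        if b == 0.0:
            total += a * (hi - lo)           # constant term
        else:
            total += (a / b) * (np.exp(b * hi) - np.exp(b * lo))
    return total
```

Products of MTE potentials are again MTEs (exponents add), so every operation the Shenoy-Shafer architecture needs stays inside the family.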
Modeling transportation routines using hybrid dynamic mixed networks
 in Proc. UAI
, 2005
Abstract
Cited by 12 (3 self)
This paper describes a general framework called Hybrid Dynamic Mixed Networks (HDMNs) which are Hybrid Dynamic Bayesian Networks that allow representation of discrete deterministic information in the form of constraints. We propose approximate inference algorithms that integrate and adjust well-known algorithmic principles such as Generalized Belief Propagation, Rao-Blackwellised Particle Filtering and Constraint Propagation to address the complexity of modeling and reasoning in HDMNs. We use this framework to model a person’s travel activity over time and to predict destination and routes given the current location. We present a preliminary empirical evaluation demonstrating the effectiveness of our modeling framework and algorithms using several variants of the activity model.
Sample Propagation
 Advances in Neural Information Processing System
, 2003
Abstract
Cited by 8 (0 self)
Rao-Blackwellization is an approximation technique for probabilistic inference that flexibly combines exact inference with sampling. It is useful in models where conditioning on some of the variables leaves a simpler inference problem that can be solved tractably. This paper presents Sample Propagation, an efficient implementation of Rao-Blackwellized approximate inference for a large class of models. Sample Propagation tightly integrates sampling with message passing in a junction tree, and is named for its simple, appealing structure: it walks the clusters of a junction tree, sampling some of the current cluster's variables and then passing a message to one of its neighbors. We discuss the application of Sample Propagation to conditional Gaussian inference problems such as switching linear dynamical systems.
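The core idea, sample some variables and handle the rest exactly, can be illustrated on a toy Gaussian mixture. This is illustrative code, not the paper's junction-tree algorithm: the discrete switch is sampled, but the conditional mean E[x | s] is used in closed form, which removes all the variance that sampling x itself would add.

```python
import numpy as np

def rb_mean(pi, mus, n_samples, seed=0):
    """Rao-Blackwellized estimate of E[x] in a Gaussian mixture.

    pi:  mixture weights over the discrete switch s
    mus: conditional means E[x | s]
    Only s is sampled; the continuous part is integrated out exactly.
    """
    rng = np.random.default_rng(seed)
    s = rng.choice(len(pi), size=n_samples, p=pi)
    return float(np.mean(np.asarray(mus)[s]))
```

A plain Monte Carlo estimator would also draw x ~ N(mus[s], var_s) and average those draws; by the Rao-Blackwell theorem the estimator above never has higher variance, which is why conditional Gaussian models such as switching linear dynamical systems are a natural fit.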
Expectation propagation for inference in nonlinear dynamical models with Poisson observations
 In Proc. IEEE Nonlinear Statistical Signal Processing Workshop
, 2006
Abstract
Cited by 5 (2 self)
Neural activity unfolding over time can be modeled using nonlinear dynamical systems [1]. As neurons communicate via discrete action potentials, their activity can be characterized by the numbers of events occurring within short predefined time-bins (spike counts). Because the observed data are high-dimensional vectors of nonnegative integers, nonlinear state estimation from spike counts presents a unique set of challenges. In this paper, we describe why the expectation propagation (EP) framework is particularly well-suited to this problem. We then demonstrate ways to improve the robustness and accuracy of Gaussian quadrature-based EP. Compared to the unscented Kalman smoother, we find that EP-based state estimators provide more accurate state estimates.
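The quadrature step such an EP update needs can be sketched for a single scalar state with one Poisson observation. This is a minimal illustration: the exponential link, the node count, and the function name are assumptions, not the paper's exact setup.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss  # probabilists' nodes

def poisson_gaussian_mean(m, v, y, n_nodes=30):
    """Posterior mean of x with prior N(m, v) and likelihood Poisson(exp(x)).

    Gauss-Hermite quadrature approximates the intractable Gaussian-times-
    Poisson integrals; this is the moment-matching computation a
    quadrature-based EP update would use at each factor.
    """
    nodes, weights = hermegauss(n_nodes)   # integrate against exp(-t^2/2)
    xs = m + np.sqrt(v) * nodes            # map nodes to the prior N(m, v)
    # Unnormalized Poisson log-likelihood; the y! term cancels in the ratio.
    lik = np.exp(y * xs - np.exp(xs))
    w = weights * lik
    return float((w * xs).sum() / w.sum())
```

Matching the mean (and, in the same way, the variance) of this tilted distribution back to a Gaussian is exactly the projection step EP iterates over the time series of spike counts.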