Factor Graphs and the Sum-Product Algorithm
IEEE TRANSACTIONS ON INFORMATION THEORY, 1998
Cited by 1163 (67 self)
Abstract
A factor graph is a bipartite graph that expresses how a "global" function of many variables factors into a product of "local" functions. Factor graphs subsume many other graphical models, including Bayesian networks, Markov random fields, and Tanner graphs. Following one simple computational rule, the sum-product algorithm operates in factor graphs to compute, either exactly or approximately, various marginal functions by distributed message-passing in the graph. A wide variety of algorithms developed in artificial intelligence, signal processing, and digital communications can be derived as specific instances of the sum-product algorithm, including the forward/backward algorithm, the Viterbi algorithm, the iterative "turbo" decoding algorithm, Pearl's belief propagation algorithm for Bayesian networks, the Kalman filter, and certain fast Fourier transform algorithms.
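The single message-passing rule the abstract describes can be demonstrated on a tiny chain-structured factor graph. The factor tables below are arbitrary illustrative numbers, and a brute-force enumeration of the global function is included only as a check; this is a sketch of the idea, not the paper's general algorithm.

```python
# Sum-product on a 3-variable chain factor graph:
#   g(x1, x2, x3) = f_a(x1) * f_b(x1, x2) * f_c(x2, x3)
# Messages flow left to right; the final message is the marginal of x3.

def sum_product_chain(f_a, f_b, f_c, domain=(0, 1)):
    # message from the unary factor f_a to variable x1
    m_x1 = {x1: f_a[x1] for x1 in domain}
    # x1 forwards it into f_b; f_b sums x1 out (the "local" computation)
    m_x2 = {x2: sum(f_b[x1][x2] * m_x1[x1] for x1 in domain) for x2 in domain}
    # x2 forwards into f_c; f_c sums x2 out, yielding the marginal of x3
    m_x3 = {x3: sum(f_c[x2][x3] * m_x2[x2] for x2 in domain) for x3 in domain}
    return m_x3

# illustrative factor tables (already normalized so marginals sum to 1)
f_a = {0: 0.6, 1: 0.4}
f_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
f_c = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.5, 1: 0.5}}

marg = sum_product_chain(f_a, f_b, f_c)

# brute-force check: enumerate the global function directly
brute = {x3: 0.0 for x3 in (0, 1)}
for x1 in (0, 1):
    for x2 in (0, 1):
        for x3 in (0, 1):
            brute[x3] += f_a[x1] * f_b[x1][x2] * f_c[x2][x3]
```

The message-passing version touches each factor once per edge, whereas the brute-force sum grows exponentially in the number of variables; that gap is the point of the algorithm.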
CONDENSATION: conditional density propagation for visual tracking
International Journal of Computer Vision, 1998
Cited by 1123 (12 self)
Abstract
The problem of tracking curves in dense visual clutter is challenging. Kalman filtering is inadequate because it is based on Gaussian densities which, being unimodal, cannot represent simultaneous alternative hypotheses. The Condensation algorithm uses "factored sampling", previously applied to the interpretation of static images, in which the probability distribution of possible interpretations is represented by a randomly generated set. Condensation uses learned dynamical models, together with visual observations, to propagate the random set over time. The result is highly robust tracking of agile motion. Notwithstanding the use of stochastic methods, the algorithm runs in near real-time.
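The propagate-weight-resample cycle of factored sampling can be illustrated for a scalar state. The dynamics coefficients, noise levels, and observation sequence below are invented for the example; a real Condensation tracker would use a curve state and an image-based observation density.

```python
import math
import random

random.seed(0)

def condensation_step(samples, a, q, obs, r):
    """One Condensation iteration on a scalar state.
    samples: current random set; a, q: assumed linear dynamics x' = a*x + N(0, q);
    obs, r: measurement and its assumed observation-noise variance."""
    # predict: push each sample through the learned dynamical model
    predicted = [a * x + random.gauss(0.0, math.sqrt(q)) for x in samples]
    # weight: factored sampling scores each sample by the observation density
    weights = [math.exp(-(obs - x) ** 2 / (2 * r)) for x in predicted]
    # resample: draw a new random set with probability proportional to weight
    return random.choices(predicted, weights=weights, k=len(samples))

# random set drawn from a broad prior
samples = [random.gauss(0.0, 1.0) for _ in range(500)]
for obs in [0.5, 1.0, 1.5]:          # invented observations drifting upward
    samples = condensation_step(samples, a=1.0, q=0.05, obs=obs, r=0.1)

# the sample mean summarizes the propagated density
estimate = sum(samples) / len(samples)
```

Because the density is carried by the whole sample set rather than a single Gaussian, multiple competing hypotheses survive until the observations disambiguate them, which is exactly the failure mode of the Kalman filter the abstract points out.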
Dynamic Bayesian Networks: Representation, Inference and Learning, 2002
Cited by 563 (3 self)
Abstract
Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their "expressive power". Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data.
In particular, the main novel technical contributions of this thesis are as follows: a way of representing hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T^3), where T is the length of the sequence; an exact smoothing algorithm that takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
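The factored-state idea can be made concrete with a toy two-feature DBN. The transition tables below are made-up numbers; flattening two independently evolving binary features into one four-state chain shows why the factored form is more economical (two 2x2 tables, 8 numbers, versus one 4x4 table, 16 numbers).

```python
# Two binary features evolving independently form a tiny DBN.
# Flattening them yields a single 4-state Markov chain whose transition
# matrix is the product of the per-feature transitions (a Kronecker product).

T1 = [[0.9, 0.1], [0.3, 0.7]]   # feature 1 transition table (assumed values)
T2 = [[0.8, 0.2], [0.4, 0.6]]   # feature 2 transition table (assumed values)

def flat_transition(T1, T2):
    # flat state s = (i, j) encoded as 2*i + j
    T = [[0.0] * 4 for _ in range(4)]
    for i in range(2):
        for j in range(2):
            for i2 in range(2):
                for j2 in range(2):
                    T[2 * i + j][2 * i2 + j2] = T1[i][i2] * T2[j][j2]
    return T

T = flat_transition(T1, T2)
row_sums = [sum(row) for row in T]   # each row is a distribution
```

With k features of size d, the flat chain needs d^(2k) transition parameters while the factored DBN needs only k*d^2; that exponential gap is what the thesis means by generalizing HMMs via factored state.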
The effect upon channel capacity in wireless communications of perfect and imperfect knowledge of the channel
IEEE Trans. Inf. Theory, 2000
Cited by 191 (4 self)
Abstract
We present a model for time-varying single-access and multiple-access communication channels without feedback. We consider the difference between mutual information when the receiver knows the channel perfectly and mutual information when the receiver only has an estimate of the channel. We relate the variance of the channel measurement error at the receiver to upper and lower bounds for this difference in mutual information. We illustrate the use of our bounds on a channel modeled by a Gauss–Markov process, measured by a pilot tone. We relate the rate of time variation of the channel to the loss in mutual information due to imperfect knowledge of the measured channel.
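One standard way to lower-bound mutual information under imperfect channel knowledge is to treat the estimation error as additional Gaussian noise. The sketch below uses that generic treat-error-as-noise bound with hypothetical SNR and error-variance numbers; it is in the spirit of the paper's bounds, not a reproduction of them.

```python
import math

def mi_perfect(h2, p, n0):
    # Gaussian channel with perfectly known gain: I = log2(1 + |h|^2 P / N0)
    return math.log2(1 + h2 * p / n0)

def mi_lower_bound(h_hat2, sigma_e2, p, n0):
    # generic lower bound: the estimation error of variance sigma_e^2
    # contributes extra effective noise of power sigma_e^2 * P
    return math.log2(1 + h_hat2 * p / (n0 + sigma_e2 * p))

p, n0 = 1.0, 0.1                                  # hypothetical power and noise
perfect = mi_perfect(1.0, p, n0)                  # receiver knows h exactly
imperfect = mi_lower_bound(0.95, 0.05, p, n0)     # pilot-based estimate of h
loss = perfect - imperfect                        # bits lost to channel uncertainty
```

Note how the error variance enters multiplied by the transmit power, so at high SNR the loss does not vanish; that interplay between time variation (which limits estimate quality) and mutual information is the abstract's central theme.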
A Multiple Hypothesis Approach to Figure Tracking, 1999
Cited by 185 (9 self)
Abstract
This paper describes a probabilistic multiple-hypothesis framework for tracking highly articulated objects. In this framework, the probability density of the tracker state is represented as a set of modes with piecewise Gaussians characterizing the neighborhood around these modes. The temporal evolution of the probability density is achieved through sampling from the prior distribution, followed by local optimization of the sample positions to obtain updated modes. This method of generating hypotheses from state-space search does not require the use of discrete features, unlike classical multiple-hypothesis tracking. The parametric form of the model is suited for high-dimensional state-spaces which cannot be efficiently modeled using nonparametric approaches. Results are shown for tracking Fred Astaire in a movie dance sequence.
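The sample-then-locally-optimize step can be sketched in one dimension. The bimodal likelihood and the shrinking-step hill-climber below are invented stand-ins for the paper's image likelihood and optimizer; the point is only that prior samples collapse onto a small set of modes.

```python
import math
import random

random.seed(1)

def likelihood(x):
    # hypothetical bimodal observation density with peaks near +1 and -1
    return math.exp(-(x - 1.0) ** 2 / 0.2) + 0.5 * math.exp(-(x + 1.0) ** 2 / 0.2)

def refine(x):
    # crude local optimization of a sample position: hill-climb with a
    # step size that halves whenever neither direction improves
    step = 0.5
    while step > 1e-4:
        if likelihood(x + step) > likelihood(x):
            x += step
        elif likelihood(x - step) > likelihood(x):
            x -= step
        else:
            step /= 2.0
    return x

# generate hypotheses: sample from the prior, locally optimize each sample;
# the distinct surviving positions are the updated modes
prior_samples = [random.gauss(0.0, 1.5) for _ in range(40)]
modes = sorted({round(refine(x), 2) for x in prior_samples})
```

Forty prior samples reduce to (at most) the two true modes, each of which would then anchor a local Gaussian in the piecewise representation the abstract describes.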
Parameter estimation for linear dynamical systems, 1996
Cited by 156 (7 self)
Abstract
Linear systems have been used extensively in engineering to model and control the behavior of dynamical systems. In this note, we present the Expectation Maximization (EM) algorithm for estimating the parameters of linear systems (Shumway and Stoffer, 1982). We also point out the relationship between linear dynamical systems, factor analysis, and hidden Markov models.
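For a scalar system, the EM recipe the note presents reduces to a Kalman filter plus RTS smoother in the E-step and a closed-form M-step. The sketch below re-estimates only the dynamics coefficient a, with the noise variances q and r assumed known; the data are simulated for the example.

```python
import math
import random

random.seed(0)

# Scalar linear dynamical system:
#   x_t = a * x_{t-1} + w_t,  w ~ N(0, q)
#   y_t = x_t + v_t,          v ~ N(0, r)

def kalman_smoother(ys, a, q, r, mu0, p0):
    n = len(ys)
    mf, pf = [0.0] * n, [0.0] * n      # filtered means / variances
    mp, pp = [0.0] * n, [0.0] * n      # one-step predicted means / variances
    m, p = mu0, p0
    for t, y in enumerate(ys):
        if t > 0:                       # predict
            m, p = a * m, a * a * p + q
        mp[t], pp[t] = m, p
        k = p / (p + r)                 # Kalman gain (observation matrix = 1)
        m, p = m + k * (y - m), (1 - k) * p
        mf[t], pf[t] = m, p
    ms, ps = mf[:], pf[:]               # RTS backward pass
    cov = [0.0] * n                     # lag-one covariance Cov(x_t, x_{t-1} | Y)
    for t in range(n - 2, -1, -1):
        g = pf[t] * a / pp[t + 1]
        ms[t] = mf[t] + g * (ms[t + 1] - mp[t + 1])
        ps[t] = pf[t] + g * g * (ps[t + 1] - pp[t + 1])
        cov[t + 1] = g * ps[t + 1]
    return ms, ps, cov

def em_update_a(ys, a, q, r):
    # E-step: smoothed sufficient statistics; M-step: closed-form update
    #   a_new = sum E[x_t x_{t-1}] / sum E[x_{t-1}^2]
    ms, ps, cov = kalman_smoother(ys, a, q, r, mu0=0.0, p0=1.0)
    num = sum(ms[t] * ms[t - 1] + cov[t] for t in range(1, len(ys)))
    den = sum(ms[t - 1] ** 2 + ps[t - 1] for t in range(1, len(ys)))
    return num / den

# simulate data from the true system a = 0.9
a_true, q, r = 0.9, 0.1, 0.1
x, ys = 0.0, []
for _ in range(300):
    x = a_true * x + random.gauss(0, math.sqrt(q))
    ys.append(x + random.gauss(0, math.sqrt(r)))

a_hat = 0.5
for _ in range(20):                     # iterate EM from a poor initial guess
    a_hat = em_update_a(ys, a_hat, q, r)
```

Setting a = 1 and q = 0 in this machinery recovers factor-analysis-style static models, which is the connection between linear dynamical systems, factor analysis, and HMMs the note points out.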
Monitoring and Early Warning for Internet Worms
In Proceedings of the 10th ACM Conference on Computer and Communications Security (CCS'03), 2003
Cited by 152 (18 self)
Abstract
After the Code Red incident in 2001 and the SQL Slammer in January 2003, it is clear that a simple self-propagating worm can quickly spread across the Internet, infecting most vulnerable computers before people can take effective countermeasures. The fast-spreading nature of worms calls for a worm monitoring and early warning system. In this paper, we propose effective algorithms for early detection of the presence of a worm and the corresponding monitoring system. Based on an epidemic model and observation data from the monitoring system, and using the idea of "detecting the trend, not the rate" of monitored illegitimate scan traffic, we propose to use a Kalman filter to detect a worm's propagation at its early stage in real-time. In addition, we can effectively predict the overall vulnerable population size and correct the bias in the observed number of infected hosts. Our simulation experiments for Code Red and SQL Slammer show that, with observation data from a small fraction of IP addresses, we can detect the presence of a worm when it infects only 1% to 2% of the vulnerable computers on the Internet.
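The "detect the trend, not the rate" idea can be sketched with a scalar Kalman filter tracking the exponential growth rate of noisy monitored counts. The epidemic parameters, monitored fraction, and noise levels below are invented, and this is far simpler than the paper's full system; it only illustrates why a stable positive growth-rate estimate is a usable alarm signal.

```python
import math
import random

random.seed(42)

# early-stage epidemic: exponential growth at an assumed rate alpha
alpha_true = 0.08
infected = [100.0]
for _ in range(60):
    infected.append(infected[-1] * (1 + alpha_true))
# the monitor sees only a fraction of addresses, with observation noise
observed = [i * 0.1 * (1 + random.gauss(0, 0.05)) for i in infected]

# scalar Kalman filter on the growth rate: the state alpha is modeled as a
# slow random walk, the measurement is the per-step ratio of counts minus one
est, p = 0.0, 1.0
q, r = 1e-5, 0.01                 # assumed process / measurement noise
history = []
for t in range(1, len(observed)):
    meas = observed[t] / observed[t - 1] - 1.0
    p += q                        # predict (random-walk state)
    k = p / (p + r)               # update with the noisy ratio
    est += k * (meas - est)
    p *= (1 - k)
    history.append(est)

# trend detection: raise the alarm once the estimate stays positive
detected = all(a > 0.0 for a in history[-20:])
```

The raw per-step ratios are extremely noisy, but the filtered growth-rate estimate settles near the true value; a consistently positive estimate signals exponential growth long before the absolute infection count looks alarming.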
Variational learning for switching statespace models
Neural Computation, 1998
Cited by 141 (6 self)
Abstract
We introduce a new statistical model for time series which iteratively segments data into regimes with approximately linear dynamics and learns the parameters of each of these linear regimes. This model combines and generalizes two of the most widely used stochastic time series models, hidden Markov models and linear dynamical systems, and is closely related to models that are widely used in the control and econometrics literatures. It can also be derived by extending the mixture of experts neural network (Jacobs et al., 1991) to its fully dynamical version, in which both expert and gating networks are recurrent. Inferring the posterior probabilities of the hidden states of this model is computationally intractable, and therefore the exact Expectation Maximization (EM) algorithm cannot be applied. However, we present a variational approximation that maximizes a lower bound on the log likelihood and makes use of both the forward-backward recursions for hidden Markov models and the Kalman filter recursions for linear dynamical systems. We tested the algorithm both on artificial data sets and on a natural data set of respiration force from a patient with sleep apnea. The results suggest that variational approximations are a viable method for inference and learning in switching state-space models.
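The generative model being learned can be sketched directly (sampling only, not the variational inference): a hidden Markov switch selects which of two linear regimes drives the continuous state. The regime parameters below are assumed values chosen to make the two dynamics visibly different.

```python
import random

random.seed(3)

# Switching state-space model, generative sketch:
#   s_t ~ Markov chain over regimes; x_t = a(s_t) * x_{t-1} + N(0, sd(s_t)^2)
switch_trans = {0: [0.95, 0.05], 1: [0.10, 0.90]}   # assumed regime transitions
regimes = {0: (0.99, 0.05), 1: (0.70, 0.40)}        # per-regime (a, noise std)

s, x = 0, 0.0
switches, states = [], []
for _ in range(200):
    s = random.choices([0, 1], weights=switch_trans[s])[0]  # discrete switch
    a, sd = regimes[s]
    x = a * x + random.gauss(0.0, sd)                       # linear regime
    switches.append(s)
    states.append(x)
```

Inference has to recover both the regime sequence and the continuous state from observations; because the exact posterior mixes exponentially many regime paths, the paper resorts to the variational bound combining HMM and Kalman recursions.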
A Survey of Convergence Results on Particle Filtering Methods for Practitioners, 2002
Cited by 132 (4 self)
Abstract
Optimal filtering problems are ubiquitous in signal processing and related fields. Except for a restricted class of models, the optimal filter does not admit a closed-form expression. Particle filtering methods are a set of flexible and powerful sequential Monte Carlo methods designed to solve the optimal filtering problem numerically. The posterior distribution of the state is approximated by a large set of Dirac-delta masses (samples/particles) that evolve randomly in time according to the dynamics of the model and the observations. The particles are interacting; thus, classical limit theorems relying on statistically independent samples do not apply. In this paper, our aim is to present a survey of recent convergence results on this class of methods to make them accessible to practitioners.
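A minimal bootstrap particle filter of the kind this survey analyzes, for an invented scalar model, with multinomial resampling triggered by the effective sample size. The resampling step is exactly where the particle interaction arises that breaks the independent-sample limit theorems.

```python
import math
import random

random.seed(7)

# Scalar model: x_t = 0.9 x_{t-1} + N(0, 0.5),  y_t = x_t + N(0, 0.5)
A, Q, R, N = 0.9, 0.5, 0.5, 1000

# simulate a short trajectory to filter (invented ground truth)
xs, ys = [0.0], []
for _ in range(30):
    xs.append(A * xs[-1] + random.gauss(0, math.sqrt(Q)))
    ys.append(xs[-1] + random.gauss(0, math.sqrt(R)))

particles = [random.gauss(0, 1) for _ in range(N)]
weights = [1.0 / N] * N
estimates = []
for y in ys:
    # mutation: propagate particles through the state dynamics
    particles = [A * p + random.gauss(0, math.sqrt(Q)) for p in particles]
    # reweight by the observation likelihood, then normalize
    weights = [w * math.exp(-(y - p) ** 2 / (2 * R))
               for w, p in zip(weights, particles)]
    total = sum(weights)
    weights = [w / total for w in weights]
    estimates.append(sum(w * p for w, p in zip(weights, particles)))
    # selection: resample only when the effective sample size collapses;
    # after this step the particles are no longer independent
    ess = 1.0 / sum(w * w for w in weights)
    if ess < N / 2:
        particles = random.choices(particles, weights=weights, k=N)
        weights = [1.0 / N] * N

rmse = math.sqrt(sum((e - x) ** 2 for e, x in zip(estimates, xs[1:])) / len(ys))
```

For this linear-Gaussian toy the Kalman filter is optimal and the particle estimate tracks it closely at N = 1000; the survey's convergence results quantify how such errors shrink as N grows.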
Optimal motion and structure estimation
IEEE Trans. Pattern Anal. Mach. Intell., 1993
Cited by 130 (5 self)
Abstract
This paper studies optimal estimation of motion and structure from point correspondences. (1) A study of the characteristics of the problem provides insight into the need for optimal estimation. (2) Methods have been developed for optimal estimation with known or unknown noise distribution. The simulations showed that the optimal estimates achieve remarkable improvement over the preliminary estimates given by the linear algorithm. (3) An approach to estimating errors in the optimized solution is presented. (4) The performance of the algorithm is compared with a theoretical lower bound, the Cramér-Rao bound. Simulations show that the actual errors have essentially reached the bound. (5) A batch least-squares technique (Levenberg-Marquardt) and a sequential least-squares technique (iterated extended Kalman filtering) are analyzed and compared. The analysis and experiments show that, in general, a batch technique will perform better than a sequential technique for nonlinear problems. A recursive batch processing technique is proposed for nonlinear problems that require recursive estimation.
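The batch technique referred to can be sketched as damped Gauss-Newton, which is the core of the Levenberg-Marquardt idea: solve the damped normal equations on all residuals at once, shrinking the damping on accepted steps. The one-parameter exponential model and noise-free data below are invented for illustration, far simpler than the motion-and-structure problem.

```python
import math

# Toy batch nonlinear least squares: fit theta in y_i = exp(theta * t_i).
ts = [0.1 * i for i in range(10)]
theta_true = 0.5
ys = [math.exp(theta_true * t) for t in ts]   # noise-free for clarity

def lm_fit(theta, lam=1e-3, iters=50):
    for _ in range(iters):
        # residuals and Jacobian of the model with respect to theta
        res = [math.exp(theta * t) - y for t, y in zip(ts, ys)]
        jac = [t * math.exp(theta * t) for t in ts]
        jtj = sum(j * j for j in jac)
        jtr = sum(j * r for j, r in zip(jac, res))
        step = -jtr / (jtj + lam)             # damped normal equations
        new_theta = theta + step
        new_res = [math.exp(new_theta * t) - y for t, y in zip(ts, ys)]
        if sum(r * r for r in new_res) < sum(r * r for r in res):
            theta, lam = new_theta, lam / 2   # accept step, reduce damping
        else:
            lam *= 10                         # reject step, increase damping
    return theta

theta_hat = lm_fit(theta=2.0)                 # deliberately poor initial guess
```

A sequential filter would fold in one residual at a time and linearize once per datum, which is why the paper finds the batch solve, which relinearizes over all data at every iteration, more accurate on strongly nonlinear problems.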