Results 1–10 of 51
Dynamic Bayesian Networks: Representation, Inference and Learning
, 2002
Cited by 563 (3 self)
Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data. In particular, the main novel technical contributions of this thesis are as follows: a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T^3), where T is the length of the sequence; an exact smoothing algorithm that takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
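As a concrete illustration of the O(T)-time filtering that DBNs generalize, here is a minimal NumPy sketch (not from the thesis; all names are illustrative) of the standard HMM forward recursion, whose cost is linear in the sequence length T:

```python
import numpy as np

def hmm_forward(pi, A, B, obs):
    """Forward filtering for a discrete HMM: O(T * K^2) time.

    pi  : (K,) initial state distribution
    A   : (K, K) transition matrix, A[i, j] = P(z_t = j | z_{t-1} = i)
    B   : (K, M) emission matrix,   B[j, y] = P(y_t = y | z_t = j)
    obs : length-T sequence of observation indices
    Returns filtered posteriors alpha[t] ~ P(z_t | y_1..t) and the log-likelihood.
    """
    K, T = len(pi), len(obs)
    alpha = np.zeros((T, K))
    loglik = 0.0
    for t in range(T):
        # predict-and-weight: prior times emission likelihood for y_t
        a = (pi if t == 0 else alpha[t - 1] @ A) * B[:, obs[t]]
        c = a.sum()            # normalizer = P(y_t | y_1..t-1)
        loglik += np.log(c)
        alpha[t] = a / c
    return alpha, loglik
```

Each step only touches the previous filtered distribution, which is why the recursion scales linearly in T.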
A note on the stochastic realization problem
 Hemisphere Publishing Corporation
, 1976
Cited by 98 (23 self)
Abstract. Given a mean square continuous stochastic vector process y with stationary increments and a rational spectral density Φ such that Φ(∞) is finite and nonsingular, consider the problem of finding all minimal (wide sense) Markov representations (stochastic realizations) of y. All such realizations are characterized and classified with respect to deterministic as well as probabilistic properties. It is shown that only certain realizations (internal stochastic realizations) can be determined from the given output process y. All others (external stochastic realizations) require that the probability space be extended with an exogenous random component. A complete characterization of the sets of internal and external stochastic realizations is provided. It is shown that the state process of any internal stochastic realization can be expressed in terms of two steady-state Kalman-Bucy filters, one evolving forward in time over the infinite past and one backward over the infinite future. An algorithm is presented which generates families of external realizations defined on the same probability space and totally ordered with respect to state covariances.
Subspace Algorithms for the Stochastic Identification Problem
, 1993
Cited by 75 (14 self)
In this paper, we derive a new subspace algorithm to consistently identify stochastic state-space models from given output data without forming the covariance matrix and using only semi-infinite block Hankel matrices. The algorithm is based on the concept of principal angles and directions. We describe how they can be calculated with QR and Quotient Singular Value Decomposition. We also provide an interpretation of the principal directions as states of a non-steady-state Kalman filter bank. Key words: principal angles and directions, QR and quotient singular value decomposition, Kalman filter, Riccati difference equation, stochastic balancing, stochastic realization. 1 Introduction. Let y_k ∈ R^l, k = 0, 1, ..., K be a data sequence that is generated by the following system: x_{k+1} = A x_k + w_k (1), y_k = C x_k + v_k (2), where x_k ∈ R^n is the state vector, w_k ∈ R^n is process noise, and v_k ∈ R^l is measurement noise. They are bo...
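As a rough illustration of the block Hankel matrices the algorithm operates on, the following NumPy sketch (illustrative, not the paper's code; `num_block_rows` is an assumed name) stacks output samples y_0, ..., y_K into block Hankel form:

```python
import numpy as np

def block_hankel(y, num_block_rows):
    """Stack output samples y[0..K-1] (each in R^l) into a block Hankel matrix.

    Block row r holds y[r], y[r+1], ..., so column j is the stacked window
    y[j], y[j+1], ..., y[j+i-1]; consecutive columns share data, which gives
    the matrix its (block) Hankel structure.
    """
    y = np.asarray(y, dtype=float)
    if y.ndim == 1:                      # scalar outputs -> column vectors
        y = y[:, None]
    K, l = y.shape
    i = num_block_rows
    j = K - i + 1                        # number of columns
    H = np.zeros((i * l, j))
    for r in range(i):
        H[r * l:(r + 1) * l, :] = y[r:r + j].T
    return H
```

For scalar outputs this reduces to the ordinary Hankel matrix with constant anti-diagonals.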
Switching Kalman Filters
, 1998
Cited by 58 (3 self)
We show how many different variants of Switching Kalman Filter models can be represented in a unified way, leading to a single, general-purpose inference algorithm. We then show how to find approximate Maximum Likelihood Estimates of the parameters using the EM algorithm, extending previous results on learning using EM in the non-switching case [DRO93, GH96a] and in the switching, but fully observed, case [Ham90]. 1 Introduction. Dynamical systems are often assumed to be linear and subject to Gaussian noise. This model, called the Linear Dynamical System (LDS) model, can be defined as x_t = A_t x_{t-1} + v_t, y_t = C_t x_t + w_t, where x_t is the hidden state variable at time t, y_t is the observation at time t, and v_t ~ N(0, Q_t) and w_t ~ N(0, R_t) are independent Gaussian noise sources. Typically the parameters of the model Θ = {(A_t, C_t, Q_t, R_t)} are assumed to be time-invariant, so that they can be estimated from data using e.g. EM [GH96a]. One of the main adva...
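The LDS equations above admit the classical Kalman filter recursion; here is a minimal NumPy sketch of one predict/update step (illustrative only, not the paper's algorithm — the switching case additionally maintains a bank of such filters, one per discrete mode):

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update step of the Kalman filter for the LDS
        x_t = A x_{t-1} + v_t,   y_t = C x_t + w_t,
    with v_t ~ N(0, Q) and w_t ~ N(0, R).
    (x, P) is the filtered mean/covariance from the previous step."""
    # Predict the state forward one step
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with the new observation y
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

The update always shrinks the predicted covariance, since the observation adds information about the hidden state.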
Bayesian Forecasting
, 1996
Cited by 58 (2 self)
... extrapolation techniques, especially exponential smoothing and exponentially weighted moving average methods ([20, 71]). Developments of smoothing and discounting techniques in stock control and production planning areas led to formalisms in terms of linear, state-space models for time series with time-varying trends and seasonal patterns, and eventually to the associated Bayesian formalism of methods of inference and prediction. From the early 1960s, practical Bayesian forecasting systems in this context involved the combination of formal time series models and historical data analysis together with methods for subjective intervention and forecast monitoring, so that complete forecasting systems, rather than just routine and automatic data analysis and extrapolation, were in use at that time ([19, 22]). Methods developed in those early days are still in use now in some companies in sales forecasting and stock control areas. There have been major developments in models and methods since t...
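The exponential smoothing mentioned above reduces to a one-line recursion, s_t = α·y_t + (1-α)·s_{t-1}; a minimal Python sketch (illustrative, not from the paper) is:

```python
def exp_smooth(series, alpha=0.3):
    """Simple exponential smoothing: s_t = alpha*y_t + (1-alpha)*s_{t-1}.

    Returns the smoothed values; the last value serves as the
    one-step-ahead forecast. alpha in (0, 1] controls how quickly
    old observations are discounted.
    """
    s = series[0]                 # initialize with the first observation
    out = [s]
    for y in series[1:]:
        s = alpha * y + (1 - alpha) * s
        out.append(s)
    return out
```

Smaller α discounts history more slowly, giving smoother but less responsive forecasts.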
A Convex Optimization Approach to the Rational Covariance Extension Problem
 SIAM J. Control Optim
, 1999
Cited by 50 (24 self)
In this paper we present a convex optimization problem for solving the rational covariance extension problem. Given a partial covariance sequence and the desired zeros of the modeling filter, the poles are uniquely determined from the unique minimum of the corresponding optimization problem. In this way we obtain an algorithm for solving the covariance extension problem, as well as a constructive proof of Georgiou's seminal existence result and his conjecture, a stronger version of which we have resolved in [7]. Key words: rational covariance extension, partial stochastic realization, trigonometric moment problem, spectral estimation, speech processing, stochastic modeling. AMS subject classifications: 30E05, 60G35, 62M15, 93A30, 93E12.
Equations of motion from a data series
 Complex Systems
, 1987
Cited by 41 (14 self)
Abstract. Temporal pattern learning, control and prediction, and chaotic data analysis share a common problem: deducing optimal equations of motion from observations of time-dependent behavior. Each desires to obtain models of the physical world from limited information. We describe a method to reconstruct the deterministic portion of the equations of motion directly from a data series. These equations of motion represent a vast reduction of a chaotic data set’s observed complexity to a compact, algorithmic specification. This approach employs an informational measure of model optimality to guide searching through the space of dynamical systems. As corollary results, we indicate how to estimate the minimum embedding dimension, extrinsic noise level, metric entropy, and Lyapunov spectrum. Numerical and experimental applications demonstrate the method’s feasibility and limitations. Extensions to estimating parametrized families of dynamical systems from bifurcation data and to spatial pattern evolution are presented. Applications to predicting chaotic data and the design of forecasting, learning, and control systems are discussed.
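A standard first step when reconstructing dynamics from a single observable is time-delay embedding; a minimal Python sketch (illustrative only — the paper's method, its choice of embedding dimension, and its model search are considerably more involved) is:

```python
def delay_embed(series, dim, lag=1):
    """Time-delay embedding of a scalar series: map each time t to the
    vector (y_t, y_{t+lag}, ..., y_{t+(dim-1)*lag}).

    dim is the embedding dimension, lag the delay between coordinates.
    Returns the list of embedded vectors as tuples.
    """
    n = len(series) - (dim - 1) * lag      # number of complete windows
    return [tuple(series[t + k * lag] for k in range(dim)) for t in range(n)]
```

The embedded vectors serve as reconstructed state points, over which candidate equations of motion can then be fitted.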
Canonical Correlation Analysis, Approximate Covariance Extension, and Identification of Stationary Time Series
 Automatica
, 1996
Cited by 36 (17 self)
In this paper we analyze a class of state-space identification algorithms for time series, based on canonical correlation analysis, in the light of recent results on stochastic systems theory. In principle, these so-called "subspace methods" can be described as covariance estimation followed by stochastic realization. The methods offer the major advantage of converting the nonlinear parameter estimation phase in traditional ARMA model identification into the solution of a Riccati equation, but introduce at the same time some nontrivial mathematical problems related to positivity. The reason for this is that an essential part of the problem is equivalent to the well-known rational covariance extension problem. Therefore the usual deterministic arguments based on factorization of a Hankel matrix are not valid for generic data, something that is habitually overlooked in the literature. We demonstrate that there is no guarantee that several popular identification procedures based on the same principle will not fail to produce a positive extension, unless some rather stringent assumptions are made which, in general, are not explicitly reported. In this paper the statistical problem of stochastic modeling from estimated covariances is phrased in the geometric language of stochastic realization theory. We review the basic ideas of stochastic realization theory in the context of identification, and discuss the concepts of stochastic balancing and of stochastic model reduction by principal subsystem truncation. The model reduction method of Desai and Pal, based on truncated balanced stochastic realizations, is partially justified, showing that the reduced system structure has a positive covariance sequence but is in general not balanced. As a byproduct of this analysis we obtain a t...
Automatic time series forecasting: The forecast package for R
 Journal of Statistical Software
, 2008
Cited by 27 (12 self)
Automatic forecasts of large numbers of univariate time series are often needed in business and other contexts. We describe two automatic forecasting algorithms that have been implemented in the forecast package for R. The first is based on innovations state space models that underlie exponential smoothing methods. The second is a stepwise algorithm for forecasting with ARIMA models. The algorithms are applicable to both seasonal and non-seasonal data, and are compared and illustrated using four real time series. We also briefly describe some of the other functionality available in the forecast package.
From Finite Covariance Windows to Modeling Filters: A Convex Optimization Approach
 SIAM Review
, 2001
Cited by 27 (18 self)
The trigonometric moment problem is a classical moment problem with numerous applications in mathematics, physics, and engineering. The rational covariance extension problem is a constrained version of this problem, with the constraints arising from the physical realizability of the corresponding solutions. Although the maximum entropy method gives one well-known solution, in several applications a wider class of solutions is desired. In a seminal paper, Georgiou derived an existence result for a broad class of models. In this paper, we review the history of this problem going back to Caratheodory, as well as applications to stochastic systems and signal processing. In particular, we present a convex optimization problem for solving the rational covariance extension problem with degree constraint. Given a partial covariance sequence and the desired zeros of the shaping filter, the poles are uniquely determined from the unique minimum of the corresponding optimization problem. In this way we obtain an algorithm for solving the covariance extension problem, as well as a constructive proof of Georgiou's existence result and his conjecture, a generalized version of which we have recently resolved. We also survey recent related results on constrained Nevanlinna-Pick interpolation in the context of a variational formulation of the general moment problem. Key words: rational covariance extension, interpolation, partial stochastic realization, trigonometric moment problem, spectral estimation, speech processing, stochastic modeling, general moment problem. AMS subject classifications: 30E05, 42A15, 49N15, 60G35, 62M15, 65K10, 93A30, 93E12. PII. S0036144501392194.