Results 1-10 of 123
A Unifying Review of Linear Gaussian Models
, 1999
Abstract

Cited by 318 (18 self)
Factor analysis, principal component analysis, mixtures of Gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of Gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.
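The single basic generative model this abstract refers to is the linear Gaussian state-space model. As a minimal sketch of sampling from it (the specific matrices and dimensions below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def sample_lgssm(A, C, Q, R, x0, T, rng):
    """Sample a trajectory from a linear Gaussian state-space model:
    x[t+1] = A x[t] + w,  w ~ N(0, Q)   (state dynamics)
    y[t]   = C x[t] + v,  v ~ N(0, R)   (observation model)
    """
    k, p = A.shape[0], C.shape[0]
    xs = np.zeros((T, k))
    ys = np.zeros((T, p))
    x = x0
    for t in range(T):
        xs[t] = x
        ys[t] = C @ x + rng.multivariate_normal(np.zeros(p), R)
        x = A @ x + rng.multivariate_normal(np.zeros(k), Q)
    return xs, ys

# Illustrative one-dimensional example
rng = np.random.default_rng(0)
A = np.array([[0.9]]); C = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.5]])
xs, ys = sample_lgssm(A, C, Q, R, np.array([0.0]), 100, rng)
```

Factor analysis, PCA, and the Kalman filter arise from this model by restricting the dynamics and noise covariances in different ways.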
Probabilistic independence networks for hidden Markov probability models
, 1996
Abstract

Cited by 181 (12 self)
Graphical techniques for modeling the dependencies of random variables have been explored in a variety of different areas including statistics, statistical physics, artificial intelligence, speech recognition, image processing, and genetics. Formalisms for manipulating these models have been developed relatively independently in these research communities. In this paper we explore hidden Markov models (HMMs) and related structures within the general framework of probabilistic independence networks (PINs). The paper contains a self-contained review of the basic principles of PINs. It is shown that the well-known forward-backward (FB) and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs. Furthermore, the existence of inference and estimation algorithms for more general graphical models provides a set of analysis tools for HMM practitioners who wish to explore a richer class of HMM structures. Examples of relatively complex models to handle sensor fusion and coarticulation in speech recognition are introduced and treated within the graphical model framework to illustrate the advantages of the general approach.
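The forward-backward recursions this paper treats as a special case of PIN inference can be written down directly. A minimal scaled implementation for a discrete HMM (the matrix layout conventions here are assumptions):

```python
import numpy as np

def forward_backward(pi, A, B, obs):
    """Forward-backward smoothing for a discrete HMM.
    pi:  (K,)   initial state distribution
    A:   (K, K) transition matrix, A[i, j] = P(z_{t+1}=j | z_t=i)
    B:   (K, M) emission matrix,   B[i, o] = P(y_t=o | z_t=i)
    obs: sequence of observation indices
    Returns gamma[t, i] = P(z_t = i | all observations).
    """
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K)); beta = np.zeros((T, K))
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()            # scale for numerical stability
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

# Illustrative two-state example
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.8, 0.2], [0.2, 0.8]])
gamma = forward_backward(pi, A, B, [0, 0, 1])
```

The per-step rescaling changes only a constant factor within each row of alpha and beta, so the final row-normalization recovers the exact smoothed posteriors.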
Variational learning for switching state-space models
 Neural Computation
, 1998
Abstract

Cited by 160 (6 self)
We introduce a new statistical model for time series which iteratively segments data into regimes with approximately linear dynamics and learns the parameters of each of these linear regimes. This model combines and generalizes two of the most widely used stochastic time series models, hidden Markov models and linear dynamical systems, and is closely related to models that are widely used in the control and econometrics literatures. It can also be derived by extending the mixture of experts neural network (Jacobs et al., 1991) to its fully dynamical version, in which both expert and gating networks are recurrent. Inferring the posterior probabilities of the hidden states of this model is computationally intractable, and therefore the exact Expectation Maximization (EM) algorithm cannot be applied. However, we present a variational approximation that maximizes a lower bound on the log likelihood and makes use of both the forward-backward recursions for hidden Markov models and the Kalman filter recursions for linear dynamical systems. We tested the algorithm both on artificial data sets and on a natural data set of respiration force from a patient with sleep apnea. The results suggest that variational approximations are a viable method for inference and learning in switching state-space models.
MEBN: A Language for First-Order Bayesian Knowledge Bases
Abstract

Cited by 57 (21 self)
Although classical first-order logic is the de facto standard logical foundation for artificial intelligence, the lack of a built-in, semantically grounded capability for reasoning under uncertainty renders it inadequate for many important classes of problems. Probability is the best-understood and most widely applied formalism for computational scientific reasoning under uncertainty. Increasingly expressive languages are emerging for which the fundamental logical basis is probability. This paper presents Multi-Entity Bayesian Networks (MEBN), a first-order language for specifying probabilistic knowledge bases as parameterized fragments of Bayesian networks. MEBN fragments (MFrags) can be instantiated and combined to form arbitrarily complex graphical probability models. An MFrag represents probabilistic relationships among a conceptually meaningful group of uncertain hypotheses. Thus, MEBN facilitates representation of knowledge at a natural level of granularity. The semantics of MEBN assigns a probability distribution over interpretations of an associated classical first-order theory on a finite or countably infinite domain. Bayesian inference provides both a proof theory for combining prior knowledge with observations, and a learning theory for refining a representation as evidence accrues. A proof is given that MEBN can represent a probability distribution on interpretations of any finitely axiomatizable first-order theory.
MEBN: A Logic for Open-World Probabilistic Reasoning
 Research Paper
, 2004
Abstract

Cited by 19 (8 self)
Uncertainty is a fundamental and irreducible aspect of our knowledge about the world. Probability is the most well-understood and widely applied logic for computational scientific reasoning under uncertainty. As theory and practice advance, general-purpose languages are beginning to emerge for which the fundamental logical basis is probability. However, such languages have lacked a logical foundation that fully integrates classical first-order logic with probability theory. This paper presents such an integrated logical foundation. A formal specification is presented for multi-entity Bayesian networks (MEBN), a knowledge representation language based on directed graphical probability models. A proof is given that a probability distribution over interpretations of any consistent, finitely axiomatizable first-order theory can be defined using MEBN. A semantics based on random variables provides a logically coherent foundation for open-world reasoning and a means of analyzing trade-offs between accuracy and computation cost. Furthermore, the underlying Bayesian logic is inherently open, having the ability to absorb new facts about the world, incorporate them into existing theories, and/or modify theories in the light of evidence. Bayesian inference provides both a proof theory for combining prior knowledge with observations, and a learning theory for refining a representation as evidence accrues. The results of this paper provide a logical foundation for the rapidly evolving literature on first-order Bayesian knowledge representation, and point the way toward Bayesian languages suitable for general-purpose knowledge representation and computing. Because first-order Bayesian logic contains classical first-order logic as a deterministic subset, it is a natural candidate as a universal representation for integrating domain ontologies expressed in languages based on classical first-order logic or subsets thereof.
Discrete-Time, Discrete-Valued Observable Operator Models: A Tutorial
, 1998
Abstract

Cited by 19 (2 self)
This tutorial gives a basic yet rigorous introduction to observable operator models (OOMs). OOMs are a recently discovered class of models of stochastic processes. They are mathematically simple in that they require only concepts from elementary linear algebra. The linear algebra nature gives rise to an efficient, consistent, unbiased, constructive learning procedure for estimating models from empirical data. The tutorial describes in detail the mathematical foundations and the practical use of OOMs for identifying and predicting discrete-time, discrete-valued processes, both for output-only and input-output systems. Keywords: stochastic time series, system identification, observable operator models
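One way to see the linear-algebra nature of OOMs: any HMM induces a valid OOM whose observable operators reproduce the HMM's string probabilities. A small sketch under assumed matrix conventions (A[i, j] = P(j | i) for transitions, B[i, o] = P(o | i) for emissions):

```python
import itertools
import numpy as np

def oom_from_hmm(A, B, pi):
    """Observable operators tau_a of the OOM induced by an HMM.
    The string probability is P(a_1 ... a_n) = 1^T tau_{a_n} ... tau_{a_1} pi."""
    taus = [np.diag(B[:, a]) @ A.T for a in range(B.shape[1])]
    return taus, pi

def string_prob(taus, w0, seq):
    """Apply one operator per observed symbol, then sum the state vector."""
    w = w0.copy()
    for a in seq:
        w = taus[a] @ w
    return w.sum()

# Illustrative two-state, two-symbol HMM
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # transition matrix (rows sum to 1)
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission matrix
pi = np.array([0.5, 0.5])
taus, w0 = oom_from_hmm(A, B, pi)
total = sum(string_prob(taus, w0, s)
            for s in itertools.product(range(2), repeat=2))
```

Because the operators sum to A^T, the probabilities of all strings of a fixed length sum to one, which is the defining normalization property of an OOM.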
Kalman Filtering Using Pairwise Gaussian Models
 In Proceedings of ICASSP, Hong Kong, April 6-10, 2003
, 2003
Abstract

Cited by 18 (12 self)
An important problem in signal processing consists in recursively estimating an unobservable process x = {x_n}_{n ∈ ℕ} from an observed process y = {y_n}_{n ∈ ℕ}. This is done classically in the framework of Hidden Markov Models (HMM). In the linear Gaussian case, the classical recursive solution is given by the well-known Kalman filter. In this paper, we consider Pairwise Gaussian Models by assuming that the pair (x, y) is Markovian and Gaussian. We show that this model is strictly more general than the HMM, and yet still enables Kalman-like filtering.
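The "classical recursive solution" mentioned above is the standard Kalman filter. A minimal sketch of its measurement-update/time-update recursion (variable names and the scalar test model are assumptions for illustration):

```python
import numpy as np

def kalman_filter(A, C, Q, R, mu0, P0, ys):
    """Classical Kalman filter for x[t+1] = A x[t] + w, y[t] = C x[t] + v,
    with w ~ N(0, Q) and v ~ N(0, R).
    Returns the filtered means E[x_t | y_1..t]."""
    mu, P = mu0, P0
    means = []
    for y in ys:
        # Measurement update: fold in the new observation
        S = C @ P @ C.T + R                  # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)       # Kalman gain
        mu = mu + K @ (y - C @ mu)
        P = P - K @ C @ P
        means.append(mu)
        # Time update: propagate through the dynamics
        mu = A @ mu
        P = A @ P @ A.T + Q
    return np.array(means)

# Scalar random-walk example: constant observations pull the estimate toward 1
A = np.array([[1.0]]); C = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[1.0]])
means = kalman_filter(A, C, Q, R, np.array([0.0]), np.array([[1.0]]),
                      [np.array([1.0])] * 20)
```

The pairwise model of the paper replaces the two separate HMM assumptions with a single joint Markov-Gaussian assumption on (x, y), but the resulting filter keeps this same recursive gain-and-update structure.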
Optimal Sensor Scheduling for Hidden Markov Model State Estimation
 International Journal of Control
, 2001
Abstract

Cited by 18 (3 self)
Consider the Hidden Markov model where the realization of a single Markov chain is observed by a number of noisy sensors. The sensor scheduling problem for the resulting Hidden Markov model is as follows: design an optimal algorithm for selecting, at each time instant, one of the many sensors to provide the next measurement. Each measurement has an associated measurement cost. The problem is to select an optimal measurement scheduling policy so as to minimize a cost function of estimation errors and measurement costs. The problem of determining the optimal measurement policy is solved via stochastic dynamic programming. Numerical results are presented.
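The paper's policy comes from stochastic dynamic programming over the full horizon. As a much simpler illustration of the trade-off being optimized, here is a one-step (myopic) selection rule for a scalar Gaussian state; the variance-plus-cost objective and the sensor tuples are illustrative assumptions, not the paper's cost function:

```python
def choose_sensor(prior_var, sensors):
    """Myopic sensor selection for a scalar Gaussian state: pick the sensor
    whose posterior variance plus measurement cost is smallest.
    sensors: list of (noise_variance, measurement_cost) pairs."""
    scores = []
    for noise_var, cost in sensors:
        # Scalar Kalman measurement update of the variance
        post_var = prior_var * noise_var / (prior_var + noise_var)
        scores.append(post_var + cost)
    best = min(range(len(sensors)), key=lambda i: scores[i])
    return best, scores[best]

# The accurate sensor wins when it is free; a high cost flips the choice
best_free, _ = choose_sensor(1.0, [(0.1, 0.0), (1.0, 0.0)])
best_costly, _ = choose_sensor(1.0, [(0.1, 10.0), (1.0, 0.0)])
```

The dynamic-programming solution differs in that it accounts for how today's sensor choice affects the information state over all future decisions, not just the next posterior.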
Methods and techniques of complex systems science: An overview
, 2003
Abstract

Cited by 17 (0 self)
In this chapter, I review the main methods and techniques of complex systems science. As a
Decentralized Dynamic Spectrum Access for Cognitive Radios: Cooperative Design of a Noncooperative Game
 IEEE TRANSACTIONS ON MOBILE COMPUTING
, 2009
Abstract

Cited by 16 (1 self)
We consider dynamic spectrum access among cognitive radios from an adaptive, game-theoretic learning perspective. Spectrum-agile cognitive radios compete for channels temporarily vacated by licensed primary users in order to satisfy their own demands while minimizing interference. For both slowly varying primary user activity and slowly varying statistics of “fast” primary user activity, we apply an adaptive regret-based learning procedure which tracks the set of correlated equilibria of the game, treated as a distributed stochastic approximation. This procedure is shown to perform very well compared with other similar adaptive algorithms. We also estimate channel contention for a simple CSMA channel-sharing scheme.
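The regret-based procedure in the paper is an adaptive, tracking variant; its non-adaptive ancestor is Hart and Mas-Colell's regret matching, sketched here for a generic two-player matrix game (the payoff matrices are illustrative, not the paper's spectrum-access game):

```python
import numpy as np

def regret_matching(u1, u2, T=20000, seed=0):
    """Hart--Mas-Colell regret matching for a two-player matrix game.
    Each player plays proportionally to positive cumulative regret; the
    empirical joint distribution of play converges to the set of
    correlated equilibria."""
    rng = np.random.default_rng(seed)
    n1, n2 = u1.shape
    R1, R2 = np.zeros(n1), np.zeros(n2)
    counts = np.zeros((n1, n2))

    def mixed(R):
        pos = np.maximum(R, 0.0)
        s = pos.sum()
        return pos / s if s > 0 else np.full(len(R), 1.0 / len(R))

    for _ in range(T):
        a = rng.choice(n1, p=mixed(R1))
        b = rng.choice(n2, p=mixed(R2))
        counts[a, b] += 1
        R1 += u1[:, b] - u1[a, b]   # regret vs. each alternative action
        R2 += u2[a, :] - u2[a, b]
    return counts / T

# Matching pennies: the only correlated equilibrium mixes both actions equally
u1 = np.array([[1.0, -1.0], [-1.0, 1.0]])
emp = regret_matching(u1, -u1)
```

The paper's variant additionally discounts old regrets so the procedure can track a correlated-equilibrium set that drifts with primary-user activity.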