Results 1-10 of 346
Dynamic Bayesian Networks: Representation, Inference and Learning, 2002
"... Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have bee ..."
Cited by 770 (3 self)
random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from
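The excerpt above contrasts DBNs with the linear-Gaussian KFM special case they generalize. As a minimal sketch of that special case, here is a one-dimensional Kalman filter update; all model parameters (A, Q, H, R) and the observation sequence are hypothetical, not taken from the thesis.

```python
# Minimal 1-D Kalman filter: the linear-Gaussian special case that DBNs
# generalize. All numeric parameters below are illustrative.

def kalman_step(mean, var, z, A=1.0, Q=0.1, H=1.0, R=0.5):
    """One predict + update cycle for a scalar linear-Gaussian model."""
    # Predict: x_t = A * x_{t-1} + noise with variance Q
    pred_mean = A * mean
    pred_var = A * A * var + Q
    # Update with observation z = H * x_t + noise with variance R
    K = pred_var * H / (H * H * pred_var + R)   # Kalman gain
    new_mean = pred_mean + K * (z - H * pred_mean)
    new_var = (1.0 - K * H) * pred_var
    return new_mean, new_var

mean, var = 0.0, 1.0          # broad prior
for z in [0.9, 1.1, 1.0]:     # made-up observations near 1.0
    mean, var = kalman_step(mean, var, z)
```

After three observations near 1.0, the posterior mean has moved toward 1.0 and the variance has shrunk; a DBN replaces this single Gaussian belief with an arbitrary distribution over a structured state.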
Loopy belief propagation for approximate inference: An empirical study. In: Proceedings of Uncertainty in AI, 1999
"... Recently, researchers have demonstrated that "loopy belief propagation", the use of Pearl's polytree algorithm in a Bayesian network with loops, can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performanc ..."
Cited by 676 (15 self)
limit performance of "Turbo Codes", codes whose decoding algorithm is equivalent to loopy belief propagation in a chain-structured Bayesian network. In this paper we ask: is there something special about the error-correcting code context, or does loopy propagation work as an approximate inference scheme
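As a concrete illustration of the algorithm this paper studies empirically, here is sum-product loopy belief propagation on a three-node binary cycle, the smallest loopy graph. The unary and pairwise potentials are made up for the example; the message update is the standard sum-product rule run synchronously.

```python
# Loopy belief propagation on a 3-node binary cycle (pairwise MRF).
# Potentials are illustrative; the cycle is what makes the graph "loopy".

nodes = [0, 1, 2]
edges = [(0, 1), (1, 2), (2, 0)]
unary = {0: [2.0, 1.0], 1: [1.0, 1.0], 2: [1.0, 1.0]}  # node 0 prefers state 0
pair = [[2.0, 1.0], [1.0, 2.0]]    # attractive coupling, same on every edge

neighbors = {i: [] for i in nodes}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

# Directed messages m[(i, j)][x_j], initialised uniform.
msg = {(i, j): [1.0, 1.0] for i in nodes for j in neighbors[i]}

for _ in range(50):                # synchronous sum-product updates
    new = {}
    for (i, j) in msg:
        out = []
        for xj in (0, 1):
            s = 0.0
            for xi in (0, 1):
                prod = unary[i][xi] * pair[xi][xj]
                for k in neighbors[i]:
                    if k != j:     # product of incoming messages except from j
                        prod *= msg[(k, i)][xi]
            # accumulate over x_i
                s += prod
            out.append(s)
        z = sum(out)
        new[(i, j)] = [v / z for v in out]   # normalise for stability
    msg = new

def belief(i):
    b = []
    for xi in (0, 1):
        v = unary[i][xi]
        for k in neighbors[i]:
            v *= msg[(k, i)][xi]
        b.append(v)
    z = sum(b)
    return [v / z for v in b]

b0 = belief(0)
```

On this small attractive model the messages settle quickly and the belief at node 0 correctly favours its preferred state; the paper's question is how far this behaviour generalises beyond coding graphs.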
A family of algorithms for approximate Bayesian inference, 2001
"... One of the major obstacles to using Bayesian methods for pattern recognition has been its computational expense. This thesis presents an approximation technique that can perform Bayesian inference faster and more accurately than previously possible. This method, "Expectation Propagation," ..."
Cited by 366 (11 self)
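The central step of the Expectation Propagation method this thesis introduces is projecting an intractable distribution onto a tractable family by matching moments. A sketch of that projection for a two-component Gaussian mixture, used here as an illustrative stand-in for an EP "tilted" distribution:

```python
# The core EP operation: replace an intractable factor by a Gaussian with
# the same mean and variance (moment matching). The mixture below is an
# illustrative stand-in, not a model from the thesis.

w = [0.7, 0.3]            # mixture weights
mu = [0.0, 4.0]           # component means
var = [1.0, 1.0]          # component variances

# Mean and variance of the mixture, in closed form:
#   E[x]   = sum_i w_i mu_i
#   Var[x] = sum_i w_i (var_i + mu_i^2) - E[x]^2
m = sum(wi * mi for wi, mi in zip(w, mu))
v = sum(wi * (vi + mi * mi) for wi, vi, mi in zip(w, var, mu)) - m * m

# (m, v) parameterise the moment-matched Gaussian approximation.
```

The bimodal mixture is summarised by a single Gaussian with mean 1.2 and variance 4.36; EP iterates this projection once per factor.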
Probabilistic Inference Using Markov Chain Monte Carlo Methods, 1993
"... Probabilistic inference is an attractive approach to uncertain reasoning and empirical learning in artificial intelligence. Computational difficulties arise, however, because probabilistic models with the necessary realism and flexibility lead to complex distributions over high-dimensional spaces. R ..."
Cited by 736 (24 self)
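A minimal instance of the approach this report surveys: a random-walk Metropolis sampler targeting a standard normal distribution. The proposal width, chain length, and target are arbitrary choices for illustration, not from the report.

```python
# Random-walk Metropolis sampling of a standard normal.
# Proposal width and chain length are illustrative.
import math
import random

random.seed(0)

def log_target(x):
    return -0.5 * x * x            # standard normal, up to a constant

x, samples = 0.0, []
for _ in range(20000):
    prop = x + random.uniform(-1.0, 1.0)        # symmetric proposal
    delta = log_target(prop) - log_target(x)
    if delta >= 0 or random.random() < math.exp(delta):
        x = prop                   # accept; otherwise keep current state
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The empirical mean and variance of the chain approach 0 and 1; for the high-dimensional distributions the abstract mentions, the same accept/reject mechanism applies coordinate-wise or via more elaborate proposals.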
Tractable inference for complex stochastic processes. In Proc. UAI, 1998
"... The monitoring and control of any dynamic system depends crucially on the ability to reason about its current status and its future trajectory. In the case of a stochastic system, these tasks typically involve the use of a belief state—a probability distribution over the state of the process at a gi ..."
Cited by 302 (14 self)
given point in time. Unfortunately, the state spaces of complex processes are very large, making an explicit representation of a belief state intractable. Even in dynamic Bayesian networks (DBNs), where the process itself can be represented compactly, the representation of the belief state
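The belief state this paper discusses is exactly the quantity maintained by forward filtering. For a two-state HMM it stays tractable; a sketch with illustrative transition and observation probabilities (the paper's point is that this explicit representation blows up for large factored state spaces):

```python
# Exact belief-state update (forward filtering) for a tiny 2-state HMM.
# Transition and observation probabilities are illustrative.

T = [[0.9, 0.1], [0.2, 0.8]]      # T[i][j] = P(next = j | current = i)
O = [[0.8, 0.2], [0.3, 0.7]]      # O[i][y] = P(obs = y | state = i)

def filter_step(belief, obs):
    # Predict under the transition model, then condition on the observation.
    pred = [sum(belief[i] * T[i][j] for i in range(2)) for j in range(2)]
    upd = [pred[j] * O[j][obs] for j in range(2)]
    z = sum(upd)
    return [u / z for u in upd]

b = [0.5, 0.5]                    # uniform initial belief
for y in [0, 0, 1]:               # made-up observation sequence
    b = filter_step(b, y)
```

Each step is a matrix-vector product plus a pointwise reweighting; with n binary state variables the belief vector has 2^n entries, which is the intractability the paper addresses.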
Decentralized Variational Bayesian Inference
"... This work presents a decentralized, approximate method for performing variational inference on a network of learning agents. The key difficulty with performing decentralized inference is that for most Bayesian models, the use of approximate inference algorithms is required, but such algorithms d ..."
Distributed inference in Bayesian networks. Cybernetics and Systems, 1994
"... Bayesian networks originated as a framework for distributed reasoning. In singly-connected networks, there exists an elegant inference algorithm that can be implemented in parallel having a processor for every node. It can be extended to take profit of the OR-gate, a model of interaction among ca ..."
Cited by 8 (3 self)
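The "OR-gate" interaction this abstract mentions is usually formalised as the noisy-OR model: each active cause independently fails to produce the effect with some inhibition probability. A sketch with hypothetical probabilities:

```python
# Noisy-OR ("OR-gate") combination of causes: the effect is absent only if
# every active cause is independently inhibited. Probabilities are
# illustrative, not from the paper.

def noisy_or(active_inhibitions, leak=0.0):
    """P(effect = 1) given the inhibition probabilities of the active causes.

    leak is the probability the effect fires with no active cause.
    """
    p_off = 1.0 - leak
    for q in active_inhibitions:
        p_off *= q                 # all active causes must be inhibited
    return 1.0 - p_off

p = noisy_or([0.2, 0.5])           # two active causes
```

With inhibitions 0.2 and 0.5 the effect fires with probability 0.9; this factored form is what lets the parallel algorithm exploit the gate's structure rather than a full conditional table.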
Approximate inference for infinite contingent Bayesian networks. In Proc. 10th AISTATS, 2005
"... In many practical problems—from tracking aircraft based on radar data to building a bibliographic database based on citation lists—we want to reason about an unbounded number of unseen objects with unknown relations among them. Bayesian networks, which define a fixed dependency structure on a finite ..."
Cited by 25 (4 self)
variables. Nevertheless, we give general conditions under which such a CBN defines a unique joint distribution over its variables. We also present a likelihood weighting algorithm that performs approximate inference in finite time per sampling step on any CBN that satisfies these conditions.
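Likelihood weighting itself, on an ordinary two-node network A -> B, looks as follows; the CPT numbers are invented for illustration, and the paper's contribution is extending this sampler to contingent networks with unbounded variable sets.

```python
# Likelihood weighting on A -> B, estimating P(A = 1 | B = 1).
# Evidence variables are clamped and contribute a weight (their likelihood)
# instead of being sampled. CPT numbers are illustrative.
import random

random.seed(0)

pA = 0.3                          # P(A = 1)
pB_given = {0: 0.1, 1: 0.8}       # P(B = 1 | A = a)

num, den = 0.0, 0.0
for _ in range(50000):
    a = 1 if random.random() < pA else 0   # sample non-evidence variable A
    w = pB_given[a]                        # weight by likelihood of B = 1
    num += w * a
    den += w

est = num / den                   # weighted estimate of P(A = 1 | B = 1)
```

The exact answer by Bayes' rule is 0.24 / 0.31, about 0.774, and the weighted sample average converges to it.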
Continuous Time Bayesian Networks
"... In this paper we present a language for finite state continuous time Bayesian networks (CTBNs), which describe structured stochastic processes that evolve over continuous time. The state of the system is decomposed into a set of local variables whose values change over time. The dynamics of the syst ..."
Cited by 98 (11 self)
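The local dynamics of a single CTBN variable are those of a continuous-time Markov chain: exponentially distributed holding times followed by state transitions. A sketch sampling one trajectory of a two-state variable with made-up rates (a full CTBN conditions these rates on the current values of parent variables):

```python
# Sample a trajectory of a two-state continuous-time variable: exponential
# holding times, then a jump. Rates are illustrative.
import random

random.seed(0)

rate = {0: 1.0, 1: 2.0}           # rate of leaving each state

def sample_path(t_end):
    """Return [(time, state), ...] for one trajectory on [0, t_end)."""
    t, state = 0.0, 0
    path = [(0.0, 0)]
    while True:
        dwell = random.expovariate(rate[state])   # exponential holding time
        if t + dwell > t_end:
            return path            # trajectory ends inside the last dwell
        t += dwell
        state = 1 - state          # with two states, the jump is forced
        path.append((t, state))

path = sample_path(10.0)
```

State 1 has twice the leaving rate, so the trajectory spends roughly twice as long in state 0 on average; in a CTBN the rate dictionary would be indexed by the parents' current states as well.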
Bayesian Inference for Stochastic Kinetic Models Using a Diffusion Approximation, 2005
"... This article is concerned with the Bayesian estimation of stochastic rate constants in the context of dynamic models of intracellular processes. The underlying discrete stochastic kinetic model is replaced by a diffusion approximation (or stochastic differential equation approach) where a white no ..."
Cited by 60 (16 self)
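A diffusion approximation of the kind this article uses is typically simulated with the Euler-Maruyama scheme. A sketch for a simple illustrative SDE, dX = -X dt + 0.5 dW (a stand-in, not the kinetic model from the article):

```python
# Euler-Maruyama simulation of dX = -X dt + 0.5 dW: each step adds the
# drift times dt plus Gaussian noise with variance dt. Drift, diffusion
# coefficient, and step size are illustrative.
import random

random.seed(0)

def euler_maruyama(x0, dt, steps):
    x = x0
    for _ in range(steps):
        dW = random.gauss(0.0, dt ** 0.5)   # Brownian increment
        x += -x * dt + 0.5 * dW
    return x

# Many independent runs to t = 10, long enough to reach stationarity.
xs = [euler_maruyama(2.0, 0.01, 1000) for _ in range(2000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
```

This particular SDE is an Ornstein-Uhlenbeck process, so the simulated endpoints should have mean near 0 and variance near 0.25 / 2 = 0.125, which gives a quick sanity check on the discretisation.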