Results 11–14 of 14
Learning Bayesian Networks from Incomplete Data: An Efficient Method for Generating Approximate Predictive Distributions
Abstract

Cited by 1 (1 self)
We present an efficient method for learning Bayesian network models and parameters from incomplete data. Our approach yields an approximation of the predictive distribution; by way of this distribution, any learning algorithm that works for complete data can easily be adapted to work for incomplete data as well. Our method exploits the dependence relations between the variables, given explicitly by the Bayesian network model, to predict missing values. Based on strength of influence and predictive quality, a subset of those predictor variables is selected, from which an approximate predictive distribution is generated by taking the observed part of the data into consideration. The approximate predictive distribution is obtained by traversing the data sample only twice, and no iteration is required. Our algorithm is therefore more efficient than iterative algorithms such as EM and SEM. Our experiments show that the method performs well for both parameter learning and model learning compared to EM and SEM.
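As a rough illustration of the two-pass idea in this abstract (the variable names, the single-predictor restriction, and the most-probable-value fill are assumptions made for the sketch, not the paper's actual predictor-selection procedure):

```python
from collections import Counter, defaultdict

def two_pass_fill(rows, target, predictor):
    """Fill missing values of `target` from conditional frequencies of
    `target` given one `predictor`, in exactly two passes over the data.
    `rows` is a list of dicts; missing values are None."""
    # Pass 1: collect counts of target given predictor from the
    # observed part of the data (no iteration, unlike EM/SEM).
    counts = defaultdict(Counter)
    for r in rows:
        if r[target] is not None and r[predictor] is not None:
            counts[r[predictor]][r[target]] += 1
    # Pass 2: replace each missing target value with the most probable
    # value under the approximate predictive distribution.
    for r in rows:
        if r[target] is None and r[predictor] is not None:
            dist = counts.get(r[predictor])
            if dist:
                r[target] = dist.most_common(1)[0][0]
    return rows

rows = [
    {"smoker": "yes", "cancer": "yes"},
    {"smoker": "yes", "cancer": "yes"},
    {"smoker": "no",  "cancer": "no"},
    {"smoker": "yes", "cancer": None},
]
two_pass_fill(rows, "cancer", "smoker")
```

The paper's method selects several predictor variables by strength of influence and predictive quality; the sketch hard-codes a single predictor to keep the two-pass structure visible.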
Expectation Propagation for the Generative Aspect Model
 In Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence, 2002
Abstract

Cited by 1 (0 self)
The generative aspect model is an extension of the multinomial model for text that allows word probabilities to vary stochastically across documents.
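The generative process this abstract refers to can be sketched as follows (the two-topic vocabulary, the CPT numbers, and the function name are assumptions for illustration, not the paper's notation):

```python
import random

def sample_document(topics, alpha, length, rng=random.Random(0)):
    """Draw one document from a generative aspect model: per-document
    mixing weights vary stochastically (Dirichlet draw), then each word
    comes from the resulting mixture of multinomials."""
    # Dirichlet sample via normalized Gamma draws
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    weights = [x / sum(g) for x in g]
    vocab = list(topics[0].keys())
    doc = []
    for _ in range(length):
        # pick an aspect, then a word from that aspect's multinomial
        k = rng.choices(range(len(topics)), weights=weights)[0]
        doc.append(rng.choices(vocab, weights=[topics[k][w] for w in vocab])[0])
    return doc

topics = [
    {"ball": 0.70, "game": 0.25, "tax": 0.05},
    {"ball": 0.05, "game": 0.05, "tax": 0.90},
]
doc = sample_document(topics, alpha=[0.5, 0.5], length=10)
```

Because the mixing weights are redrawn per document, word probabilities differ across documents even though the aspect multinomials are shared, which is exactly the extension over a single multinomial model.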
A family of algorithms for approximate Bayesian inference, 2001
Abstract
One of the major obstacles to using Bayesian methods for pattern recognition has been their computational expense. This thesis presents an approximation technique that can perform Bayesian inference faster and more accurately than previously possible. This method, "Expectation Propagation," unifies and generalizes two previous techniques: assumed-density filtering, an extension of the Kalman filter, and loopy belief propagation, an extension of belief propagation in Bayesian networks. The unification shows how both of these algorithms can be viewed as approximating the true posterior distribution with a simpler distribution, which is close in the sense of KL divergence. Expectation Propagation exploits the best of both algorithms: the generality of assumed-density filtering and the accuracy of loopy belief propagation. Loopy belief propagation, because it propagates exact belief states, is useful for limited types of belief networks, such as purely discrete networks. Expectation Propagation ...
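The KL-projection step shared by assumed-density filtering and Expectation Propagation can be sketched for the Gaussian case: projecting a distribution onto a Gaussian by matching its first two moments minimizes KL(p || q) over Gaussians q. The two-component mixture target below is an assumed example, not one from the thesis:

```python
def moment_match(weights, means, variances):
    """Project a Gaussian mixture onto a single Gaussian by matching
    its mean and variance -- the moment-matching (KL-projection) step
    at the heart of ADF and Expectation Propagation."""
    z = sum(weights)
    mean = sum(w * m for w, m in zip(weights, means)) / z
    # E[x^2] of the mixture, then variance = E[x^2] - mean^2
    second = sum(w * (v + m * m)
                 for w, m, v in zip(weights, means, variances)) / z
    return mean, second - mean ** 2

# Bimodal mixture 0.5*N(-1, 0.25) + 0.5*N(1, 0.25)
m, v = moment_match([0.5, 0.5], [-1.0, 1.0], [0.25, 0.25])
# matched Gaussian: mean 0, variance 1.25
```

EP's refinement over plain ADF is to revisit each likelihood term and redo this projection until the approximations are mutually consistent, rather than absorbing each term once in a fixed order.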
CONTENTS: Causal Networks · Learning Acausal Networks · Learning Influence Diagrams · Learning Causal-Network Parameters · Learning Causal-Network Structure
Abstract
Bayesian methods have been developed for learning Bayesian networks from data. Most of this work has concentrated on Bayesian networks interpreted as a representation of probabilistic conditional independence without considering causation. Other researchers have shown that having a causal interpretation can be important, because it allows us to predict the effects of interventions in a domain. In this chapter, we extend Bayesian methods for learning acausal ...
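The point about interventions can be illustrated on a toy causal network; the structure Z → X, Z → Y, X → Y and all CPT numbers below are made up for the sketch. Conditioning on X = x (observation) updates beliefs about the confounder Z, while setting X = x (intervention) cuts the Z → X edge, so the two distributions differ:

```python
# Tiny discrete causal network: Z -> X, Z -> Y, X -> Y (all binary).
p_z = {0: 0.5, 1: 0.5}
p_x1_given_z = {0: 0.2, 1: 0.8}              # P(X=1 | Z)
p_y1_given_xz = {(0, 0): 0.1, (0, 1): 0.4,   # P(Y=1 | X, Z)
                 (1, 0): 0.5, (1, 1): 0.9}

def p_y_obs(x):
    """Observational P(Y=1 | X=x): conditioning on X updates Z too."""
    num = den = 0.0
    for z in (0, 1):
        px = p_x1_given_z[z] if x == 1 else 1 - p_x1_given_z[z]
        num += p_z[z] * px * p_y1_given_xz[(x, z)]
        den += p_z[z] * px
    return num / den

def p_y_do(x):
    """Interventional P(Y=1 | do(X=x)): sever Z -> X, keep the prior P(Z)."""
    return sum(p_z[z] * p_y1_given_xz[(x, z)] for z in (0, 1))
```

With these numbers, P(Y=1 | X=1) = 0.82 but P(Y=1 | do(X=1)) = 0.7: observing X = 1 makes Z = 1 more likely, inflating the apparent effect, which is why an acausal network alone cannot answer interventional questions.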