Results 11–20 of 23
Exploiting parameter domain knowledge for learning in Bayesian networks
Carnegie Mellon University, 2005
Cited by 9 (1 self)
Efficient parameter learning in Bayesian networks from incomplete data
Knowledge Media Institute, The Open University, Milton, 1997
Learning Bayesian Networks from Incomplete Data: An Efficient Method for Generating Approximate Predictive Distributions
Abstract
Cited by 3 (1 self)
We present an efficient method for learning Bayesian network models and parameters from incomplete data. With our approach, an approximation of the predictive distribution is obtained. By way of this distribution, any learning algorithm that works for complete data can easily be adapted to work for incomplete data as well. Our method exploits the dependence relations between the variables, explicitly given by the Bayesian network model, to predict missing values. Based on strength of influence and predictive quality, a subset of those predictor variables is selected, from which an approximate predictive distribution is generated by taking the observed part of the data into consideration. The approximate predictive distribution is obtained by traversing the data sample only twice, and no iteration is required. Our algorithm is therefore more efficient than iterative algorithms such as EM and SEM. Our experiments show that the method performs well for both parameter learning and model learning compared to EM and SEM.
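The two-pass idea in this abstract can be illustrated with a minimal sketch. The function name, the dict-based row format, and the use of the conditional mode as the imputed value are all assumptions for illustration; the paper's actual method selects predictors by strength of influence and builds an approximate predictive distribution rather than a hard mode.

```python
from collections import Counter, defaultdict

def impute_two_pass(rows, target, predictors):
    """Two-pass imputation sketch: pass 1 tallies the target's value counts
    conditioned on observed predictor values; pass 2 fills each missing
    target with the most probable value given those counts. A hypothetical
    simplification of the paper's approximate predictive distribution."""
    # Pass 1: conditional counts of the target given the predictor values.
    counts = defaultdict(Counter)
    marginal = Counter()
    for row in rows:
        if row[target] is not None:
            key = tuple(row[p] for p in predictors)
            counts[key][row[target]] += 1
            marginal[row[target]] += 1
    # Pass 2: impute missing targets from the conditional mode,
    # falling back to the marginal mode for unseen predictor values.
    completed = []
    for row in rows:
        row = dict(row)
        if row[target] is None:
            key = tuple(row[p] for p in predictors)
            dist = counts.get(key) or marginal
            row[target] = dist.most_common(1)[0][0]
        completed.append(row)
    return completed
```

As in the abstract, the data are traversed exactly twice and no EM-style iteration is needed.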
Expectation-Propagation for the Generative Aspect Model
In Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence, 2002
Abstract
Cited by 1 (0 self)
The generative aspect model is an extension of the multinomial model for text that allows word probabilities to vary stochastically across documents.
Bayesian Network Approach to Computerized Adaptive Testing
Abstract
For personalized learning, a good testing method that can effectively estimate a learner’s proficiency is required. In this paper, we propose a novel testing method, a Bayesian network-based approach to Computerized Adaptive Testing (CAT). Our approach can estimate the proficiency of the examinee effectively and efficiently because it reflects the complicated relationships between all items and their categories, and can estimate detailed proficiency …
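The core loop of adaptive testing described above can be sketched in a few lines. This is a hedged, single-node simplification (the paper's Bayesian-network model relates many items and categories): proficiency is a discrete latent variable, each item supplies P(correct | proficiency), and the next item is the one whose answer is expected to shrink posterior entropy the most, a standard CAT criterion. All function names here are illustrative assumptions.

```python
import math

def entropy(p):
    # Shannon entropy (nats) of a discrete distribution.
    return -sum(x * math.log(x) for x in p if x > 0)

def update(prior, likelihood_correct, correct):
    # Bayes rule: posterior over proficiency after one answer.
    post = [pr * (lc if correct else 1 - lc)
            for pr, lc in zip(prior, likelihood_correct)]
    z = sum(post)
    return [x / z for x in post]

def pick_item(prior, items):
    # items: per-item lists of P(correct | proficiency level).
    # Choose the item minimizing expected posterior entropy.
    best, best_h = None, float("inf")
    for i, lc in enumerate(items):
        p_correct = sum(pr * l for pr, l in zip(prior, lc))
        h = (p_correct * entropy(update(prior, lc, True)) +
             (1 - p_correct) * entropy(update(prior, lc, False)))
        if h < best_h:
            best, best_h = i, h
    return best
```

An uninformative item (same P(correct) at every proficiency level) leaves the posterior unchanged, so the selection rule always prefers a discriminating item.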
CONTENTS: Causal Networks; Learning Acausal Networks; Learning Influence Diagrams; Learning Causal-Network Parameters; Learning Causal-Network Structure
Abstract
Bayesian methods have been developed for learning Bayesian networks from data. Most of this work has concentrated on Bayesian networks interpreted as a representation of probabilistic conditional independence, without considering causation. Other researchers have shown that having a causal interpretation can be important, because it allows us to predict the effects of interventions in a domain. In this chapter, we extend Bayesian methods for learning acausal …
ON THE PROBABILISTIC ORDERING OF CONSTRAINTS
2001
Abstract
In this paper, we discuss a number of aspects related to probabilistic orderings of constraints. The developed model is referred to as Probabilistic Ranking Optimisation (PRO). The treatment of ranking in Optimality Theory is taken as a starting point, but the emphasis here is on the mathematical properties of ranking solutions and the connection with adaptive and sequential learning. While in Optimality Theory constraints are (linearly) ranked along a one-dimensional continuum, the current exposition is not restricted to a one-dimensional continuum, but can be applied in a more general setting. A relation is established between the learnability of constraints on the one hand and aspects of graph theory on the other. The resulting PRO model makes it possible to understand the modelling power of ranking in terms of the number and structure of the probability properties that have to be fulfilled.
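A probabilistic ordering of constraints can be made concrete with a small sketch in the stochastic-evaluation style that this line of work builds on (not the PRO model itself, which generalizes beyond a one-dimensional continuum): each constraint carries a ranking value, evaluation noise perturbs the values, and the realized linear order is the sort by perturbed value. The function and parameter names are assumptions for illustration.

```python
import random

def sample_ranking(ranking_values, noise=2.0, rng=None):
    """Sample one linear order of constraints: each constraint's ranking
    value is perturbed by Gaussian noise and the constraints are sorted
    by the perturbed values, highest first. A one-dimensional sketch;
    the PRO model discussed above is more general."""
    rng = rng or random.Random()
    perturbed = {c: v + rng.gauss(0, noise) for c, v in ranking_values.items()}
    return sorted(perturbed, key=perturbed.get, reverse=True)
```

With `noise=0.0` the order is deterministic; larger noise makes reversals between nearby constraints increasingly likely, which is what gives the ordering its probabilistic character.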
Structure and Parameter Learning for Causal Independence and Causal Interaction Models
Abstract
We begin by discussing causal independence models and generalize these models to causal interaction models. Causal interaction models are models that have independent mechanisms, where mechanisms can have several causes. In addition to introducing several particular types of causal interaction models, we show how we can apply the Bayesian approach to learning causal interaction models, obtaining approximate posterior distributions for the models, and obtain MAP and ML estimates for the parameters. We illustrate the approach with a simulation study of learning model posteriors.
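The MAP and ML estimates mentioned in this abstract have a familiar closed form in the simplest setting, a single multinomial parameter vector with a symmetric Dirichlet prior. The sketch below shows only that textbook case, not the paper's causal interaction models; the function names are illustrative.

```python
def multinomial_ml(counts):
    """Maximum-likelihood estimate: relative frequencies c_k / n."""
    n = sum(counts)
    return [c / n for c in counts]

def multinomial_map(counts, alpha):
    """MAP estimate under a symmetric Dirichlet(alpha) prior, alpha >= 1:
    (c_k + alpha - 1) / (n + K * (alpha - 1)).
    With alpha = 1 (uniform prior) this reduces to the ML estimate."""
    k, n = len(counts), sum(counts)
    denom = n + k * (alpha - 1)
    return [(c + alpha - 1) / denom for c in counts]
```

The MAP estimate pulls the frequencies toward uniform, which is why it behaves better than ML on sparse counts.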
An Algorithm for Inferences in a Polytree with Heterogeneous Conditional Distributions
Abstract
This paper describes a general scheme for accommodating different types of conditional distributions in a Bayesian network. The algorithm is based on the polytree algorithm for Bayesian network inference, in which “messages” (probability distributions and likelihood functions) are computed. The posterior for a given variable depends on the messages sent to it by its parents and children, if any. In this scheme, an exact result is computed if such a result is known for the incoming messages; otherwise an approximation is computed, which is a mixture of Gaussians. The approximation may then be propagated to other variables. Approximations for likelihood functions (λ-messages) are not computed; the approximation step is put off until the likelihood function is combined with a probability distribution, which avoids certain numerical difficulties. In contrast with standard polytree algorithms, which can accommodate distributions of only a few types at most, this heterogeneous polytree algorithm can, in principle, handle any kind of continuous or discrete conditional distribution. With standard algorithms, it is necessary to construct an approximate Bayesian network in which one then computes exact results; the heterogeneous polytree algorithm, on the other hand, computes approximate results in the original Bayesian network. The most important advantage of the new algorithm is that the Bayesian network can be directly represented using the conditional distributions most appropriate for the problem domain.
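The approximation step described in this abstract, fitting a Gaussian to a combined message, can be sketched in its simplest one-component form: tabulate the unnormalized product (e.g. prior message times likelihood) on a grid and match the first two moments. The mixture-of-Gaussians version in the paper is richer; the function name and grid-based setup here are assumptions for illustration.

```python
def moment_match(xs, weights):
    """Collapse an unnormalized discrete density (e.g. a probability
    message multiplied by a likelihood, tabulated at grid points xs)
    to a single Gaussian by matching mean and variance. A one-component
    sketch of the mixture approximation described in the abstract."""
    z = sum(weights)                      # normalizing constant
    probs = [w / z for w in weights]
    mean = sum(x * p for x, p in zip(xs, probs))
    var = sum((x - mean) ** 2 * p for x, p in zip(xs, probs))
    return mean, var
```

Note that, as the abstract requires, the likelihood is never approximated on its own: moment matching is only defined once the weights form a (normalizable) density, i.e. after the likelihood has been combined with a probability distribution.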
Unified Prediction and Diagnosis in Engineering Systems by means of Distributed Belief
1999
Abstract
Dissertation directed by Professor Jan F. Kreider. This dissertation describes the theory, implementation, and application of a class of graphical probability models, called distributed belief networks, for the purposes of prediction, diagnosis, and calculation of the value of information in engineering systems. Probability models have the very desirable property that several useful operations can be stated as the computation of probability distributions; prediction and diagnosis correspond to the calculation of certain posterior distributions, and the value of information can be interpreted as the calculation of an average decrease in the entropy of a posterior distribution. These operations, and others, are different ways of looking at a single model; there is no need for separate models for different operations. A belief network, for the purposes of this dissertation, is a directed graph associated with a set of conditional probability distributions. Each node in the graph corresponds to a variable in a probability model. A distributed belief network is a belief network implemented on multiple processors. Posterior distributions for variables in the belief network are computed by a message-passing algorithm called the polytree algorithm. …
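The abstract's reading of value of information as an average decrease of posterior entropy can be shown concretely for two discrete variables: the expected entropy drop in X from observing Y equals the mutual information I(X; Y). The sketch below computes it from a joint probability table; the function name and table layout are assumptions for illustration, not the dissertation's interface.

```python
import math

def entropy(p):
    # Shannon entropy (nats) of a discrete distribution.
    return -sum(x * math.log(x) for x in p if x > 0)

def value_of_information(joint):
    """Expected entropy decrease in X from observing Y, computed from a
    joint table joint[y][x]; this equals the mutual information I(X; Y)."""
    p_x = [sum(row[i] for row in joint) for i in range(len(joint[0]))]
    h_prior = entropy(p_x)                     # entropy before observing Y
    h_post = 0.0                               # expected entropy afterward
    for row in joint:
        p_y = sum(row)
        if p_y > 0:
            h_post += p_y * entropy([x / p_y for x in row])
    return h_prior - h_post
```

An independent joint gives zero (observing Y tells us nothing about X), while a perfectly correlated one gives the full prior entropy of X.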