Results 1 - 5 of 5
Learning Bayesian networks: The combination of knowledge and statistical data
Machine Learning, 1995
"... We describe scoring metrics for learning Bayesian networks from a combination of user knowledge and statistical data. We identify two important properties of metrics, which we call event equivalence and parameter modularity. These properties have been mostly ignored, but when combined, greatly simpl ..."
Abstract

Cited by 913 (38 self)
 Add to MetaCart
Abstract: We describe scoring metrics for learning Bayesian networks from a combination of user knowledge and statistical data. We identify two important properties of metrics, which we call event equivalence and parameter modularity. These properties have been mostly ignored, but when combined, greatly simplify the encoding of a user's prior knowledge. In particular, a user can express his knowledge, for the most part, as a single prior Bayesian network for the domain.
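For reference, scoring metrics in this family take the Bayesian Dirichlet (BD) form, where parameter modularity lets the score factor over the network's families; the expression below is a generic sketch in standard notation, not a quotation from the paper:

P(D, S^h) = P(S^h) \prod_{i=1}^{n} \prod_{j=1}^{q_i} \frac{\Gamma(\alpha_{ij})}{\Gamma(\alpha_{ij} + N_{ij})} \prod_{k=1}^{r_i} \frac{\Gamma(\alpha_{ijk} + N_{ijk})}{\Gamma(\alpha_{ijk})}

Here N_{ijk} counts the cases in D with variable X_i in state k and its parents in configuration j, N_{ij} = \sum_k N_{ijk}, and \alpha_{ijk} are Dirichlet prior counts with \alpha_{ij} = \sum_k \alpha_{ijk}; in the approach described above, these prior counts are derived from the user's single prior Bayesian network together with an equivalent sample size.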
Optimization by learning and simulation of Bayesian and Gaussian networks
1999
"... Estimation of Distribution Algorithms (EDA) constitute an example of stochastics heuristics based on populations of individuals every of which encode the possible solutions to the optimization problem. These populations of individuals evolve in succesive generations as the search progresses  organ ..."
Abstract

Cited by 43 (6 self)
 Add to MetaCart
Abstract: Estimation of Distribution Algorithms (EDAs) are stochastic heuristics based on populations of individuals, each of which encodes a candidate solution to the optimization problem. These populations evolve over successive generations as the search progresses, organized in the same way as most evolutionary computation heuristics. In contrast to most evolutionary computation paradigms, which treat the crossover and mutation operators as the essential tools for generating new populations, EDAs replace those operators with the estimation and simulation of the joint probability distribution of the selected individuals. In this work, after reviewing the EDA-based approaches for combinatorial optimization problems as well as for optimization problems in continuous domains, we propose new approaches based on the theory of probabilistic graphical models to solve problems in both domains. More precisely, we propose to adapt algorit...
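As a concrete illustration of the EDA template sketched in this abstract, the following minimal Python example (names and parameters are illustrative, not taken from the paper) implements UMDA, the simplest EDA, on the OneMax toy problem: the per-bit marginal distribution of the selected individuals is estimated and then simulated to produce the next population, with no crossover or mutation.

import numpy as np

def umda_onemax(n_bits=40, pop_size=200, n_select=100, n_gens=60, seed=0):
    # Univariate Marginal Distribution Algorithm: the joint distribution of the
    # selected individuals is modeled as a product of independent Bernoulli marginals.
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                                   # initial probabilistic model
    best = 0
    for _ in range(n_gens):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)  # simulate the model
        fitness = pop.sum(axis=1)                              # OneMax: number of ones
        best = max(best, int(fitness.max()))
        selected = pop[np.argsort(fitness)[-n_select:]]        # truncation selection
        p = np.clip(selected.mean(axis=0), 0.02, 0.98)         # re-estimate the marginals
    return p, best

model, best = umda_onemax()
print("best fitness:", best)

The more elaborate EDAs surveyed in the paper replace these independent marginals with Bayesian networks (discrete domains) or Gaussian networks (continuous domains) learned from the selected individuals.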
Likelihoods and Parameter Priors for Bayesian Networks
1995
"... We develop simple methods for constructing likelihoods and parameter priors for learning about the parameters and structure of a Bayesian network. In particular, we introduce several assumptions that permit the construction of likelihoods and parameter priors for a large number of Bayesiannetwork s ..."
Abstract

Cited by 23 (0 self)
 Add to MetaCart
Abstract: We develop simple methods for constructing likelihoods and parameter priors for learning about the parameters and structure of a Bayesian network. In particular, we introduce several assumptions that permit the construction of likelihoods and parameter priors for a large number of Bayesian-network structures from a small set of assessments. The most notable assumption is that of likelihood equivalence, which says that data cannot help to discriminate network structures that encode the same assertions of conditional independence. We describe the constructions that follow from these assumptions, and also present a method for directly computing the marginal likelihood of a random sample with no missing observations. Also, we show how these assumptions lead to a general framework for characterizing parameter priors of multivariate distributions. Keywords: Bayesian network, learning, likelihood equivalence, Dirichlet, normal-Wishart.
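Likelihood equivalence can be checked numerically. The sketch below (helper name hypothetical) assumes BDe-style Dirichlet priors with prior counts spread uniformly over states and parent configurations from an equivalent sample size, and shows that the Markov-equivalent structures X -> Y and Y -> X receive identical marginal likelihoods on a complete sample:

import numpy as np
from itertools import product
from scipy.special import gammaln

def log_marginal_likelihood(child, parents, data, arities, ess=1.0):
    # BDe-style family score with Dirichlet prior counts ess/(q*r) per cell,
    # where r is the child's arity and q the number of parent configurations.
    r = arities[child]
    q = int(np.prod([arities[p] for p in parents])) if parents else 1
    a_ij, a_ijk = ess / q, ess / (q * r)
    score = 0.0
    for config in product(*[range(arities[p]) for p in parents]):
        mask = np.ones(len(data), dtype=bool)
        for p, v in zip(parents, config):
            mask &= data[:, p] == v
        score += gammaln(a_ij) - gammaln(a_ij + mask.sum())
        for k in range(r):
            n_ijk = int((mask & (data[:, child] == k)).sum())
            score += gammaln(a_ijk + n_ijk) - gammaln(a_ijk)
    return score

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(500, 2))   # complete sample over binary X (col 0) and Y (col 1)
arities = [2, 2]
xy = log_marginal_likelihood(0, [], data, arities) + log_marginal_likelihood(1, [0], data, arities)
yx = log_marginal_likelihood(1, [], data, arities) + log_marginal_likelihood(0, [1], data, arities)
print(xy, yx)   # equal up to floating-point error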
On compatible priors for Bayesian networks
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992
"... AbstractGiven a Bayesian network of discrete random variables with a hyperDirichlet prior, a method is proposed for assigning Dirichlet priors to the conditional probabilities of structurally different networks. It defines a distance measure between priors which is to be minimized for the assignme ..."
Abstract

Cited by 7 (0 self)
 Add to MetaCart
Abstract: Given a Bayesian network of discrete random variables with a hyper-Dirichlet prior, a method is proposed for assigning Dirichlet priors to the conditional probabilities of structurally different networks. It defines a distance measure between priors which is to be minimized for the assignment process. Intuitively one would expect that if two models' priors are to qualify as being 'close' in some sense, then their posteriors should also be nearby after an observation. However, one does not know in advance what will be observed next. Thus we are led to propose an expectation of Kullback-Leibler distances over all possible next observations to define a measure of distance between priors. In conjunction with the additional assumptions of global and local independence of the parameters [15], a number of theorems emerge which are usually taken as reasonable assumptions in the Bayesian network literature. The method is compared to the 'expansion and contraction' algorithm of [14], and is also contrasted with the results obtained in [7], which employ the additional assumption of likelihood equivalence that is not made here. A simple example illustrates the technique. Index Terms: Bayesian networks, Dirichlet priors, Kullback-Leibler distance, local independence, global independence.
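The closed-form Kullback-Leibler divergence between two Dirichlet densities is the basic building block of such a distance; the paper's measure is an expectation of these distances over possible next observations, but the per-pair term can be sketched as follows (function name hypothetical):

import numpy as np
from scipy.special import gammaln, digamma

def kl_dirichlet(alpha, beta):
    # Closed-form KL( Dir(alpha) || Dir(beta) ) between two Dirichlet densities.
    alpha, beta = np.asarray(alpha, float), np.asarray(beta, float)
    a0, b0 = alpha.sum(), beta.sum()
    return (gammaln(a0) - gammaln(alpha).sum()
            - gammaln(b0) + gammaln(beta).sum()
            + np.dot(alpha - beta, digamma(alpha) - digamma(a0)))

# Two candidate priors over the same three-state conditional distribution:
print(kl_dirichlet([2.0, 2.0, 2.0], [1.0, 1.0, 4.0]))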
Causal graphical models in systems genetics: A unified framework for joint inference of causal network and genetic architecture for correlated phenotypes
2009
"... Causal inference approaches in systems genetics exploit quantitative trait loci (QTL) genotypes to infer causal relationships among phenotypes. 12 13 The genetic architecture of each phenotype may be complex, and poorly estimated genetic architectures may compromise the inference of causal rela14 t ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Abstract: Causal inference approaches in systems genetics exploit quantitative trait loci (QTL) genotypes to infer causal relationships among phenotypes. The genetic architecture of each phenotype may be complex, and poorly estimated genetic architectures may compromise the inference of causal relationships among phenotypes. Existing methods assume QTLs are known or inferred without regard to the phenotype network structure. In this paper we develop a QTL-driven phenotype network method (QTLnet) to jointly infer a causal phenotype network and associated genetic architecture for sets of correlated phenotypes. Randomization of alleles during meiosis and the unidirectional influence of genotype on phenotype allow the inference of QTLs causal to phenotypes. Causal relationships among phenotypes can be inferred using these QTL nodes, enabling us to distinguish among phenotype networks that would otherwise be distribution equivalent. We jointly model phenotypes and QTLs using homogeneous conditional Gaussian regression models, and we derive a graphical criterion for distribution equivalence. We validate the QTLnet approach in a simulation study. Finally, we illustrate with ...
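To make the modeling idea concrete, the sketch below scores a candidate phenotype network by fitting each phenotype as a Gaussian linear regression on its parent phenotypes and its own QTL genotypes and summing per-node BICs. This is a deliberately simplified stand-in for QTLnet's joint Bayesian inference (plain BIC instead of MCMC over networks and genetic architectures), and all function and variable names are illustrative:

import numpy as np

def family_bic(y, covariates):
    # Gaussian linear regression of one phenotype on its parent phenotypes
    # and QTL genotypes; returns the node's BIC contribution.
    n = len(y)
    X = np.column_stack([np.ones(n)] + covariates)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    k = X.shape[1] + 1                        # regression coefficients plus the variance
    return -2.0 * loglik + k * np.log(n)

def network_score(phenos, genos, parents, qtls):
    # Sum the per-node BICs of a candidate phenotype DAG, each node
    # conditioned on its parent phenotypes and its own QTLs.
    return sum(
        family_bic(phenos[:, node],
                   [phenos[:, p] for p in parents[node]] + [genos[:, q] for q in qtls[node]])
        for node in range(phenos.shape[1]))

# Toy data: three phenotypes driven by two QTL genotype columns (coded 0/1/2).
rng = np.random.default_rng(1)
genos = rng.integers(0, 3, size=(200, 2)).astype(float)
p1 = genos[:, 0] + rng.normal(size=200)
p2 = 0.8 * p1 + rng.normal(size=200)
p3 = 0.5 * p2 + genos[:, 1] + rng.normal(size=200)
phenos = np.column_stack([p1, p2, p3])
print(network_score(phenos, genos,
                    parents={0: [], 1: [0], 2: [1]},
                    qtls={0: [0], 1: [], 2: [1]}))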