Results 1 - 7 of 7
A Bayesian method for the induction of probabilistic networks from data
MACHINE LEARNING, 1992
Abstract

Cited by 1386 (32 self)
This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
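The closed-form structure score at the heart of this method (the K2 metric of Cooper and Herskovits) can be sketched as follows; the toy database and variable names are illustrative only:

```python
from itertools import product
from math import factorial

def k2_score(rows, child, parents, arity):
    """Marginal likelihood of `child` given candidate `parents`, in the
    closed form of the K2 metric (uniform priors, complete discrete data).
    `rows` is a list of dicts mapping each variable to a state in
    0..arity[var]-1; `arity` gives the number of states per variable."""
    r = arity[child]
    score = 1.0
    # One factor per joint configuration of the parent variables
    for config in product(*(range(arity[p]) for p in parents)):
        group = [row for row in rows
                 if all(row[p] == v for p, v in zip(parents, config))]
        counts = [sum(1 for row in group if row[child] == k)
                  for k in range(r)]
        term = factorial(r - 1) / factorial(len(group) + r - 1)
        for c in counts:
            term *= factorial(c)
        score *= term
    return score

# Toy database of four cases in which B copies A, so the structure
# with the edge A -> B should score higher than B with no parents.
rows = [{"A": 0, "B": 0}, {"A": 0, "B": 0},
        {"A": 1, "B": 1}, {"A": 1, "B": 1}]
arity = {"A": 2, "B": 2}
with_parent = k2_score(rows, "B", ["A"], arity)   # 1/9
without_parent = k2_score(rows, "B", [], arity)   # 1/30
assert with_parent > without_parent
```

A structure-search procedure would compare such scores across candidate parent sets for each variable; the sketch above covers only the scoring step.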
Theory Refinement on Bayesian Networks, 1991
Abstract

Cited by 251 (5 self)
Theory refinement is the task of updating a domain theory in the light of new cases, to be done automatically or with some expert assistance. The problem of theory refinement under uncertainty is reviewed here in the context of Bayesian statistics, a theory of belief revision. The problem is reduced to an incremental learning task as follows: the learning system is initially primed with a partial theory supplied by a domain expert, and thereafter maintains its own internal representation of alternative theories, which can be interrogated by the domain expert and incrementally refined from data. Algorithms for refinement of Bayesian networks are presented to illustrate what is meant by "partial theory", "alternative theory representation", etc. The algorithms are an incremental variant of batch learning algorithms from the literature, and so work well in both batch and incremental mode.
On a Deficiency of the FCI Algorithm Learning Bayesian Networks from Data
Abstract
Causally insufficient structures (models with latent or hidden variables, or with confounding, etc.) of joint probability distributions have been the subject of intense study not only in statistics but also in various AI systems. In AI, belief networks, being representations of a joint probability distribution with an underlying directed acyclic graph structure, receive special attention because efficient reasoning (uncertainty propagation) methods have been developed for belief network structures. Algorithms have therefore been developed to acquire the belief network structure from data. As artifacts due to variable hiding negatively influence the performance of derived belief networks, models with latent variables have been studied, and several algorithms for learning belief network structure under causal insufficiency have also been developed. Regrettably, some of them are already known to be erroneous (e.g. the IC algorithm of [12]). This paper is devoted to another alg...
Education
Abstract
Selected cognitive science methods were used to modify existing test development procedures so that the modified procedures could in turn be used to improve the usefulness of job knowledge tests as a proxy for hands-on performance. A plan-goal graph representation was used to capture the knowledge content and goal structure of the task of using a map, protractor, and compass for purposes of land navigation. Diagnosticity ratings were obtained from task experts to identify those content categories and procedures that would best discriminate among levels of examinee performance and to specify the relative proportion of test questions to select. A probability-based inference network was used to score examinee responses and model a more complex pattern of relationships between knowledge and performance. A 100-question knowledge test was developed and used to test the land navigation skills of 358 Marines.
Stable Specification Searches in Structural Equation Modeling Using a Multiobjective Evolutionary Algorithm
Abstract
Abstract—Structural equation modeling (SEM) is a statistical technique for testing and estimating causal relations using a combination of statistical data and qualitative causal assumptions [1]–[3]. SEM allows for both confirmatory and exploratory modeling. In confirmatory modeling one starts with the specification of a hypothesis, which is tested against measurements by measuring how well the model fits the data. In exploratory modeling one searches the model space without stating a prior hypothesis. Exploratory modeling has the benefit that no prior background knowledge is needed, but has the drawback that the model search space grows super-exponentially in the number of variables n. In the present paper we use an evolutionary algorithm approach to deal with the large search space in order to obtain good solutions within a reasonable amount of computation time. In addition, instead of dealing with one objective, we deal with multiple objectives to obtain more robust specifications. For this we employ the multiobjective evolutionary algorithm (MOEA) approach by using the Non-dominated Sorting Genetic Algorithm II (NSGA-II). Finally, to confirm the stability of a specification, we employ a stability selection approach. We validate our approach on a data set generated from an artificial model. Experimental results show that our procedure allows for stable inference of a causal model. Keywords—Stable specification search, Structural Equation Modeling, Multiobjective optimization.
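The non-dominated sorting step that NSGA-II builds on can be illustrated with a minimal Pareto-front extraction; the (fit, complexity) scores below are made-up values for hypothetical candidate SEMs, with both objectives minimized:

```python
def dominates(p, q):
    """p dominates q (minimization): p is no worse than q in every
    objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def pareto_front(points):
    """Return the non-dominated subset of points, i.e. the first front
    computed by NSGA-II's non-dominated sorting."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

# Hypothetical (model-fit, model-complexity) scores for five candidates
scores = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
front = pareto_front(scores)
assert sorted(front) == [(1, 5), (2, 3), (4, 1)]
```

NSGA-II repeats this sorting over successive fronts and adds crowding-distance selection; the sketch above shows only the dominance test that drives the multiobjective search.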
8a. NAME OF FUNDING/SPONSORING 8b. OFFICE SYMBOL 9. PROCUREMENT INSTRUMENT IDENTIFICATION NUMBER
Abstract
Keywords: linear modelling, causal modelling, TETRAD II, automated inference. Data analysis that merely fits an empirical covariance matrix or that finds the best least squares linear estimator of a variable is not a reliable guide to judgements about policy, which inevitably involve causal conclusions. We have developed and tested a computer program, TETRAD II, that accepts as input background knowledge about a causal structure, a covariance matrix, and a sample size, and outputs a set of suggested models that are compatible with the background knowledge and that explain the data. In tests on simulated data, TETRAD II was able to suggest a set of models that included the correct ...