Results 1–10 of 15
Control of Selective Perception Using Bayes Nets and Decision Theory
, 1993
"... A selective vision system sequentially collects evidence to support a specified hypothesis about a scene, as long as the additional evidence is worth the effort of obtaining it. Efficiency comes from processing the scene only where necessary, to the level of detail necessary, and with only the neces ..."
Abstract

Cited by 100 (1 self)
A selective vision system sequentially collects evidence to support a specified hypothesis about a scene, as long as the additional evidence is worth the effort of obtaining it. Efficiency comes from processing the scene only where necessary, to the level of detail necessary, and with only the necessary operators. Knowledge representation and sequential decision-making are central issues for selective vision, which takes advantage of prior knowledge of a domain's abstract and geometrical structure and models for the expected performance and cost of visual operators. The TEA-1 selective vision system uses Bayes nets for representation and benefit-cost analysis for control of visual and non-visual actions. It is the high-level control for an active vision system, enabling purposive behavior, the use of qualitative vision modules, and a pointable multiresolution sensor. TEA-1 demonstrates that Bayes nets and decision-theoretic techniques provide a general, reusable framework for constructi...
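As a toy illustration of the benefit-cost control the abstract describes, in which the next visual action is taken only while its expected benefit exceeds its cost, the following Python sketch may help (operator names and numbers are hypothetical; this is not TEA-1's actual implementation):

```python
# Hypothetical sketch of benefit-cost action selection in the spirit of TEA-1.
# Each candidate "visual operator" has an expected benefit (e.g. expected
# reduction in uncertainty about the scene hypothesis) and an execution cost.
def select_action(candidates):
    """Pick the operator with the highest net value; return None to stop."""
    best = max(candidates, key=lambda c: c["benefit"] - c["cost"])
    if best["benefit"] - best["cost"] <= 0:
        return None  # no action is worth its cost: stop gathering evidence
    return best

ops = [
    {"name": "coarse-peripheral-scan", "benefit": 0.30, "cost": 0.05},
    {"name": "foveate-table-region",   "benefit": 0.25, "cost": 0.40},
]
chosen = select_action(ops)  # the cheap scan wins: net 0.25 vs. -0.15
```

In a full system the benefit term would itself come from the Bayes net (value of information for the current hypothesis), and the loop would re-evaluate candidates after each observation.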
Current Approaches to Handling Imperfect Information in Data and Knowledge Bases
, 1996
"... This paper surveys methods for representing and reasoning with imperfect information. It opens with an attempt to classify the different types of imperfection that may pervade data, and a discussion of the sources of such imperfections. The classification is then used as a framework for considering ..."
Abstract

Cited by 54 (1 self)
This paper surveys methods for representing and reasoning with imperfect information. It opens with an attempt to classify the different types of imperfection that may pervade data, and a discussion of the sources of such imperfections. The classification is then used as a framework for considering work that explicitly concerns the representation of imperfect information, and related work on how imperfect information may be used as a basis for reasoning. The work that is surveyed is drawn from both the field of databases and the field of artificial intelligence. Both of these areas have long been concerned with the problems caused by imperfect information, and this paper stresses the relationships between the approaches developed in each.
A Logical Approach to Factoring Belief Networks
"... We have recently proposed a tractable logical form, known as deterministic, decomposable negation normal form (dDNNF). We have shown that dDNNF supports a number of logical operations in polynomial time, including clausal entailment, model counting, model enumeration, model minimization, and proba ..."
Abstract

Cited by 51 (11 self)
We have recently proposed a tractable logical form, known as deterministic, decomposable negation normal form (d-DNNF). We have shown that d-DNNF supports a number of logical operations in polynomial time, including clausal entailment, model counting, model enumeration, model minimization, and probabilistic equivalence testing. In this paper, we discuss another major application of this logical form: the implementation of multilinear functions (of exponential size) using arithmetic circuits (that are not necessarily exponential). Specifically, we show that each multilinear function can be encoded using a propositional theory, and that each d-DNNF of the theory corresponds to an arithmetic circuit that implements the encoded multilinear function. We discuss the application of these results to factoring belief networks, which can be viewed as multilinear functions, as has been shown recently. We discuss the merits of the proposed approach for factoring belief networks, and present experimental results showing how it can efficiently handle belief networks that are intractable to structure-based methods for probabilistic inference.
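The multilinear "network polynomial" view of a belief network can be illustrated on a two-node network A -> B with binary variables (toy numbers; this is a direct enumeration, not a d-DNNF compilation):

```python
# Minimal illustration of a belief network as a multilinear function.
# For A -> B, f = sum over (a, b) of lambda_a * lambda_b * theta_a * theta_{b|a}.
# Setting evidence indicators lambda to 0/1 and evaluating f yields P(evidence).
theta_a = {0: 0.6, 1: 0.4}                    # P(A)
theta_b = {(0, 0): 0.9, (0, 1): 0.1,          # P(B | A)
           (1, 0): 0.2, (1, 1): 0.8}

def network_polynomial(lam_a, lam_b):
    """Evaluate the multilinear function at the given evidence indicators."""
    return sum(lam_a[a] * lam_b[b] * theta_a[a] * theta_b[(a, b)]
               for a in (0, 1) for b in (0, 1))

# Evidence B = 1: lambda_b = (0, 1); A unobserved: lambda_a = (1, 1).
p_b1 = network_polynomial({0: 1, 1: 1}, {0: 0, 1: 1})
# P(B = 1) = 0.6 * 0.1 + 0.4 * 0.8 = 0.38
```

The point of the paper is that this exponential-size polynomial can often be represented by a compact arithmetic circuit obtained via d-DNNF compilation, rather than by explicit enumeration as above.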
Efficient Learning of Selective Bayesian Network Classifiers
, 1995
"... In this paper, we present a computationally efficient method for inducing selective Bayesian network classifiers. Our approach is to use informationtheoretic metrics to efficiently select a subset of attributes from which to learn the classifier. We explore three conditional, informationtheoretic ..."
Abstract

Cited by 49 (4 self)
In this paper, we present a computationally efficient method for inducing selective Bayesian network classifiers. Our approach is to use information-theoretic metrics to efficiently select a subset of attributes from which to learn the classifier. We explore three conditional, information-theoretic metrics that are extensions of metrics used extensively in decision tree learning, namely Quinlan's gain and gain ratio metrics and Mantaras's distance metric. We experimentally show that the algorithms based on the gain ratio and distance metrics learn selective Bayesian networks that have predictive accuracies as good as or better than those learned by existing selective Bayesian network induction approaches (K2-AS), but at a significantly lower computational cost. We prove that the subset-selection phase of these information-based algorithms has polynomial complexity, as compared to the worst-case exponential time complexity of the corresponding phase in K2-AS. We also compare the performance o...
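Quinlan's gain metric, one of the three metrics the abstract builds on, can be sketched as follows (a generic decision-tree-style illustration, not the paper's algorithm):

```python
# Information gain of an attribute: H(class) - H(class | attribute).
# Attributes are ranked by gain and the top subset is kept before learning.
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(values, labels):
    """Reduction in class entropy from observing the attribute."""
    n = len(labels)
    cond = 0.0
    for v in set(values):
        subset = [l for x, l in zip(values, labels) if x == v]
        cond += (len(subset) / n) * entropy(subset)
    return entropy(labels) - cond

# An attribute identical to the class has maximal gain; an independent one, zero:
labels = [0, 0, 1, 1]
g_informative = information_gain([0, 0, 1, 1], labels)  # 1.0 bit
g_useless     = information_gain([0, 1, 0, 1], labels)  # 0.0 bits
```

Gain ratio divides this quantity by the attribute's own entropy to penalize many-valued attributes; the distance metric is a further normalization.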
A Comparison of Induction Algorithms for Selective and nonSelective Bayesian Classifiers
 Proceedings of the Twelfth International Conference on Machine Learning
, 1995
"... In this paper we present a novel induction algorithm for Bayesian networks. This selective Bayesian network classifier selects a subset of attributes that maximizes predictive accuracy prior to the network learning phase, thereby learning Bayesian networks with a bias for small, highpredictiveaccu ..."
Abstract

Cited by 24 (5 self)
In this paper we present a novel induction algorithm for Bayesian networks. This selective Bayesian network classifier selects a subset of attributes that maximizes predictive accuracy prior to the network learning phase, thereby learning Bayesian networks with a bias for small, high-predictive-accuracy networks. We compare the performance of this classifier with selective and non-selective naive Bayesian classifiers. We show that the selective Bayesian network classifier performs significantly better than both versions of the naive Bayesian classifier on almost all databases analyzed, and hence is an enhancement of the naive Bayesian classifier. Relative to the non-selective Bayesian network classifier, our selective Bayesian network classifier generates networks that are computationally simpler to evaluate and that display predictive accuracy comparable to that of Bayesian networks which model all features.

1 INTRODUCTION

Bayesian induction methods have proven to be an important cla...
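The naive Bayesian baseline against which the selective classifier is compared can be sketched minimally (hypothetical toy code with Laplace smoothing for binary attributes; not the paper's implementation):

```python
# Minimal naive Bayes: class-conditional attribute counts plus class priors.
from collections import defaultdict

def train_nb(rows, labels):
    """rows: list of attribute tuples; returns class counts and joint counts."""
    priors = defaultdict(int)
    counts = defaultdict(int)          # (class, attr_index, value) -> count
    for row, y in zip(rows, labels):
        priors[y] += 1
        for i, v in enumerate(row):
            counts[(y, i, v)] += 1
    return priors, counts

def predict_nb(priors, counts, row):
    """Return the class maximizing P(class) * prod_i P(attr_i | class)."""
    n = sum(priors.values())
    best, best_p = None, -1.0
    for y, ny in priors.items():
        p = ny / n
        for i, v in enumerate(row):
            # Laplace smoothing for binary attribute values.
            p *= (counts[(y, i, v)] + 1) / (ny + 2)
        if p > best_p:
            best, best_p = y, p
    return best

rows = [(1, 1), (1, 0), (0, 1), (0, 0)]
labels = [1, 1, 0, 0]
pred = predict_nb(*train_nb(rows, labels), (1, 1))  # classifies as 1
```

The naive model assumes all attributes are conditionally independent given the class; the selective Bayesian network classifier drops both that assumption and the weakest attributes.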
Learning Bayesian Networks Using Feature Selection
 In D. Fisher & H. Lenz, eds, Proceedings of the Fifth International Workshop on Artificial Intelligence and Statistics, Ft. Lauderdale, FL
, 1995
"... This paper introduces a novel enhancement for learning Bayesian networks with a bias for small, highpredictiveaccuracy networks. The new approach selects a subset of features which maximizes predictive accuracy prior to the network learning phase. We examine explicitly the effects of two aspects o ..."
Abstract

Cited by 19 (2 self)
This paper introduces a novel enhancement for learning Bayesian networks with a bias for small, high-predictive-accuracy networks. The new approach selects a subset of features which maximizes predictive accuracy prior to the network learning phase. We examine explicitly the effects of two aspects of the algorithm, feature selection and node ordering. Our approach generates networks which are computationally simpler to evaluate and which display predictive accuracy comparable to that of Bayesian networks which model all attributes.

1 INTRODUCTION

Bayesian networks are being increasingly recognized as an important representation for probabilistic reasoning. For many domains, the need to specify the probability distributions for a Bayesian network is considerable, and learning these probabilities from data using an algorithm like K2 [8] could alleviate such specification difficulties. We describe an extension to the Bayesian network learning approaches introduced in K2. Rather than ...
Causal Probabilistic Networks With Both Discrete and Continuous Variables
, 1993
"... An extension of the expert system shell HUGIN to include continuous wriables, in the form of linear additive normally distributed variables, is presented. The ..."
Abstract

Cited by 18 (0 self)
An extension of the expert system shell HUGIN to include continuous variables, in the form of linear additive normally distributed variables, is presented. The ...
A Differential Semantics for Jointree Algorithms
"... Darwiche has recently proposed the representation of a belief network as a multivariate polynomial, allowing one to reduce probabilistic inference into a process of evaluating and dierentiating polynomials. ..."
Abstract

Cited by 13 (7 self)
Darwiche has recently proposed the representation of a belief network as a multivariate polynomial, allowing one to reduce probabilistic inference to a process of evaluating and differentiating polynomials.
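The differential semantics can be illustrated on a toy polynomial: for a two-node binary network A -> B, the partial derivative of the network polynomial with respect to an evidence indicator, evaluated at the evidence, yields a joint marginal (toy numbers; this is direct enumeration, not an arithmetic-circuit or jointree implementation):

```python
# For f = sum over (a, b) of lambda_a * lambda_b * theta_a * theta_{b|a},
# the derivative df/d(lambda_a), evaluated at evidence e, equals P(a, e).
theta_a = {0: 0.6, 1: 0.4}                    # P(A)
theta_b = {(0, 0): 0.9, (0, 1): 0.1,          # P(B | A)
           (1, 0): 0.2, (1, 1): 0.8}

def d_f_d_lambda_a(a, lam_b):
    """df/d(lambda_a): sum the terms containing lambda_a, with it removed."""
    return sum(lam_b[b] * theta_a[a] * theta_b[(a, b)] for b in (0, 1))

lam_b = {0: 0, 1: 1}                          # evidence B = 1
p_a1_and_b1 = d_f_d_lambda_a(1, lam_b)        # P(A=1, B=1) = 0.4 * 0.8 = 0.32
```

Because an arithmetic circuit can be differentiated with respect to all inputs in one backward pass, all such marginals come essentially for free once the circuit is evaluated, which is the connection to jointree propagation that the paper develops.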
Multilocus linkage analysis by blocked Gibbs sampling
 Statistics and Computing
, 2000
"... The problem of multilocus linkage analysis is expressed as a graphical model, making explicit a previously implicit connection, and recent developments in the field are described in this context. A novel application of blocked Gibbs sampling for Bayesian networks is developed to generate inheritance ..."
Abstract

Cited by 9 (0 self)
The problem of multilocus linkage analysis is expressed as a graphical model, making explicit a previously implicit connection, and recent developments in the field are described in this context. A novel application of blocked Gibbs sampling for Bayesian networks is developed to generate inheritance matrices from an irreducible Markov chain. This is used as the basis for reconstruction of historical meiotic states and approximate calculation of the likelihood function for the location of an unmapped genetic trait. We believe this to be the only approach that currently makes fully informative multilocus linkage analysis possible on large extended pedigrees.
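A minimal single-site Gibbs sampler conveys the underlying idea (the paper's blocked variant jointly updates whole groups of variables to keep the chain irreducible on pedigree data; this toy two-variable example is illustrative only, not the paper's method):

```python
# Single-site Gibbs sampling on a joint distribution over binary (A, B):
# repeatedly resample each variable from its conditional given the other.
import random

joint = {(0, 0): 0.3, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.3}

def sample_given(var, other_val):
    """Sample A given B (var='A') or B given A (var='B') from the joint."""
    if var == "A":
        p = {a: joint[(a, other_val)] for a in (0, 1)}
    else:
        p = {b: joint[(other_val, b)] for b in (0, 1)}
    z = sum(p.values())
    return 1 if random.random() < p[1] / z else 0

random.seed(0)
a, b = 0, 0
count_a1, n = 0, 20000
for _ in range(n):
    a = sample_given("A", b)
    b = sample_given("B", a)
    count_a1 += a
est = count_a1 / n                 # approaches P(A=1) = 0.4 as n grows
```

On pedigrees, single-site updates of inheritance variables can leave the chain reducible (some legal configurations unreachable); sampling whole blocks jointly, as the paper does, restores irreducibility.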
Two causal theories of counterfactual conditionals
 Cognitive Science
, 2010
"... Bayes nets are formal representations of causal systems that many psychologists have claimed as plausible mental representations. One purported advantage of Bayes nets is that they may provide a theory of counterfactual conditionals, such as If Calvin had been at the party, Miriam would have left ea ..."
Abstract

Cited by 4 (1 self)
Bayes nets are formal representations of causal systems that many psychologists have claimed as plausible mental representations. One purported advantage of Bayes nets is that they may provide a theory of counterfactual conditionals, such as If Calvin had been at the party, Miriam would have left early. This article compares two proposed Bayes-net theories as models of people's understanding of counterfactuals. Experiments 1–3 show that neither theory makes correct predictions about backtracking counterfactuals (in which the event of the if-clause occurs after the event of the then-clause), and Experiment 4 shows the same is true of forward counterfactuals. An amended version of one of the approaches, however, can provide a more accurate account of these data.