Results 1–10 of 12
Learning Belief Networks from Data: An Information Theory Based Approach
 In Proceedings of the Sixth ACM International Conference on Information and Knowledge Management
Abstract

Cited by 65 (7 self)
This paper presents an efficient algorithm for learning Bayesian belief networks from databases. The algorithm takes a database as input and constructs the belief network structure as output. The construction process is based on the computation of mutual information of attribute pairs. Given a data set that is large enough, this algorithm can generate a belief network very close to the underlying model, and at the same time enjoys a time complexity of O(N^4) on conditional independence (CI) tests. When the data set has a normal DAG-Faithful (see Section 3.2) probability distribution, the algorithm guarantees that the structure of a perfect map [Pearl, 1988] of the underlying dependency model is generated. To evaluate this algorithm, we present experimental results on three versions of the well-known ALARM network database, which has 37 attributes and 10,000 records. The results show that this algorithm is accurate and efficient. The proof of correctness and the analysis of c...
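The pairwise mutual-information computation this abstract describes can be sketched directly from its definition. The toy dataset, attribute indexing, and the descending-score ranking below are illustrative only and are not taken from the paper:

```python
from collections import Counter
from itertools import combinations
from math import log2

def mutual_information(records, i, j):
    """Empirical mutual information I(X_i; X_j) between attributes i and j."""
    n = len(records)
    pi = Counter(r[i] for r in records)
    pj = Counter(r[j] for r in records)
    pij = Counter((r[i], r[j]) for r in records)
    mi = 0.0
    for (a, b), c in pij.items():
        p_ab = c / n
        mi += p_ab * log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

# Toy dataset: three binary attributes; 0 and 1 are perfectly correlated, 2 is independent.
data = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1)] * 25
scores = {(i, j): mutual_information(data, i, j) for i, j in combinations(range(3), 2)}
# Structure-learning algorithms of this family rank attribute pairs by this score.
for (i, j), s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(i, j, round(s, 3))
```

On this data the correlated pair (0, 1) scores 1 bit and the independent pairs score 0, which is the signal a draft-and-thicken construction would use to decide which edges to consider first.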
An Algorithm for Bayesian Belief Network Construction from Data
 In Proceedings of AI & Stat ’97
, 1997
Abstract

Cited by 43 (6 self)
This paper presents an efficient algorithm for constructing Bayesian belief networks from databases. The algorithm takes a database and an attribute ordering (i.e., the causal attributes of an attribute should appear earlier in the order) as input and constructs a belief network structure as output. The construction process is based on the computation of mutual information of attribute pairs. Given a data set that is large enough and has a DAG-Isomorphic probability distribution, this algorithm guarantees that the perfect map [1] of the underlying dependency model is generated, and at the same time enjoys a time complexity of O(N) on conditional independence (CI) tests. To evaluate this algorithm, we present experimental results on three versions of the well-known ALARM network database, which has 37 attributes and 10,000 records. The correctness proof and the analysis of computational complexity are also presented. We also discuss the features of our work and relate it to previous work.
The State of Boosting
, 1999
Abstract

Cited by 26 (2 self)
In many problem domains, combining the predictions of several models often results in a model with improved predictive performance. Boosting is one such method that has shown great promise. On the applied side, empirical studies have shown that combining models using boosting methods produces more accurate classification and regression models. These methods are extendible to the exponential family as well as proportional hazards regression models. This article shows that boosting, which is still new to statistics, is widely applicable. I will introduce boosting, discuss the current state of boosting, and show how these methods connect to more standard statistical practice.
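As a concrete illustration of the boosting idea this abstract summarizes, here is a minimal AdaBoost over decision stumps in pure Python. The toy dataset, round count, and stump search are invented for the example; they are not from the article:

```python
from math import exp, log

def stump_predictions(X, feature, threshold, polarity):
    # A decision stump: predict +polarity when the feature exceeds the threshold.
    return [polarity if x[feature] >= threshold else -polarity for x in X]

def adaboost(X, y, rounds=10):
    n = len(X)
    w = [1.0 / n] * n                      # uniform weights over examples
    ensemble = []
    for _ in range(rounds):
        best = None
        # Exhaustively search stumps for the lowest weighted training error.
        for f in range(len(X[0])):
            for t in sorted({x[f] for x in X}):
                for pol in (1, -1):
                    preds = stump_predictions(X, f, t, pol)
                    err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                    if best is None or err < best[0]:
                        best = (err, f, t, pol, preds)
        err, f, t, pol, preds = best
        err = max(err, 1e-10)
        alpha = 0.5 * log((1 - err) / err)  # weight of this weak learner
        ensemble.append((alpha, f, t, pol))
        # Upweight the examples this stump got wrong, then renormalize.
        w = [wi * exp(-alpha * yi * p) for wi, yi, p in zip(w, y, preds)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (pol if x[f] >= t else -pol) for a, f, t, pol in ensemble)
    return 1 if score >= 0 else -1

# Toy data: label is +1 only when both features are large (no single stump suffices).
X = [(0.2, 0.3), (0.8, 0.2), (0.3, 0.9), (0.7, 0.8), (0.9, 0.9), (0.1, 0.8)]
y = [-1, -1, -1, 1, 1, -1]
model = adaboost(X, y, rounds=5)
print([predict(model, x) for x in X])  # → [-1, -1, -1, 1, 1, -1]
```

No individual stump separates this data, but the weighted vote of five stumps does, which is the phenomenon the survey discusses.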
On Test Selection Strategies for Belief Networks
, 1993
Abstract

Cited by 15 (7 self)
Decision making under uncertainty typically requires an iterative process of information acquisition. At each stage, the decision maker chooses the next best test (or tests) to perform, and re-evaluates the possible decisions. Value-of-information analyses provide a formal strategy for selecting the next test(s). However, the complete decision-theoretic approach is impractical, and researchers have sought approximations. In this paper, we present strategies for both myopic and limited non-myopic (working with known test groups) test selection in the context of belief networks. We focus primarily on utility-free test selection strategies. However, the methods have immediate application to the decision-theoretic framework. 9.1 Introduction Graphical belief network researchers have developed powerful algorithms to propagate the effects of any piece of information to all the variables in the model, in a manner analogous to forward chaining in rule-based expert systems (see, for example, Daw...
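A myopic, utility-free selection strategy of the kind this abstract describes can be illustrated by ranking candidate tests by the expected posterior entropy they leave on a single hypothesis variable. The two-state disease variable, the priors, and both test characteristics below are made up for the sketch:

```python
from math import log2

def entropy(p):
    return -sum(pi * log2(pi) for pi in p if pi > 0)

# Hypothetical binary hypothesis with a prior, and two candidate tests.
# Each test is described by P(positive | disease) and P(positive | no disease).
prior = (0.3, 0.7)                    # P(D=yes), P(D=no)
tests = {"T1": (0.95, 0.05),          # highly informative test
         "T2": (0.60, 0.40)}          # nearly uninformative test

def expected_posterior_entropy(prior, likelihoods):
    """E_result[ H(D | result) ] for one test, by summing over both outcomes."""
    p_pos_d, p_pos_nd = likelihoods
    p_yes, p_no = prior
    total = 0.0
    for p_r_d, p_r_nd in ((p_pos_d, p_pos_nd), (1 - p_pos_d, 1 - p_pos_nd)):
        p_r = p_r_d * p_yes + p_r_nd * p_no             # P(result)
        post = (p_r_d * p_yes / p_r, p_r_nd * p_no / p_r)  # Bayes update
        total += p_r * entropy(post)
    return total

# Myopic selection: perform the test leaving the least expected uncertainty.
best = min(tests, key=lambda t: expected_posterior_entropy(prior, tests[t]))
print(best)  # → T1
```

This is the one-step greedy loop: after observing the chosen test's result, the posterior becomes the new prior and the ranking is recomputed, which is what makes the strategy myopic.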
Cautious Propagation in Bayesian Networks
 Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence
, 1995
Abstract

Cited by 12 (3 self)
Consider the situation where some evidence e has been entered into a Bayesian network. When performing conflict analysis or sensitivity analysis, or when answering questions like "What if the finding on X had been y instead of x?", you need probabilities P(e'|h) where e' is a subset of e, and h is a configuration of a (possibly empty) set of variables. Cautious propagation is a modification of HUGIN propagation into a Shafer-Shenoy-like architecture. It is less efficient than HUGIN propagation; however, it provides easy access to P(e'|h) for many relevant subsets e'. Keywords: Bayesian networks, propagation, fast retraction, sensitivity analysis. 1 Introduction As an example motivating the introduction of yet another propagation method, consider the junction tree in Figure 1, with evidence e = {s, t, u, v, w, x, y, z} entered as indicated. Suppose you want to perform a conflict analysis (Jensen, Chamberlain, Nordahl & Jensen 1991). Then you first calcu...
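On a network small enough to enumerate, the quantities P(e'|h) that cautious propagation retrieves cheaply from a junction tree can be checked by brute force. The three-variable network and its CPTs below are invented for illustration; this sketch enumerates possible worlds rather than implementing the propagation architecture itself:

```python
from itertools import product

# Toy network A -> B, A -> C over binary variables, with made-up CPTs.
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # p_b_given_a[a][b]
p_c_given_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.1, 1: 0.9}}   # p_c_given_a[a][c]

def joint(a, b, c):
    return p_a[a] * p_b_given_a[a][b] * p_c_given_a[a][c]

def prob(assignment):
    """Marginal probability of a partial assignment over {'A','B','C'}."""
    total = 0.0
    for a, b, c in product((0, 1), repeat=3):
        world = {"A": a, "B": b, "C": c}
        if all(world[v] == val for v, val in assignment.items()):
            total += joint(a, b, c)
    return total

def cond(evidence_subset, h):
    """P(e' | h) by enumeration."""
    return prob({**evidence_subset, **h}) / prob(h)

evidence = {"B": 1, "C": 1}
# P(e' | A=1) for every subset e' of the evidence, as conflict analysis needs.
for subset in ({}, {"B": 1}, {"C": 1}, {"B": 1, "C": 1}):
    print(subset, round(cond(subset, {"A": 1}), 3))
```

Enumeration costs time exponential in the number of variables, which is exactly why the paper wants these subset probabilities as cheap by-products of junction-tree propagation instead.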
Uniform knowledge representation for language processing in the B2 system
 Journal of Natural Language Engineering
, 1997
Abstract

Cited by 10 (2 self)
We describe the natural language processing and knowledge representation components of B2, a collaborative system that allows medical students to practice their decision-making skills by considering a number of medical cases that differ from each other in a controlled manner. The underlying decision-support model of B2 uses a Bayesian network that captures the results of prior clinical studies of abdominal pain. B2 generates story problems based on this model and supports natural language queries about the conclusions of the model and the reasoning behind them. B2 benefits from having a single knowledge representation and reasoning component that acts as a blackboard for inter-task communication and cooperation. All knowledge is represented using a propositional semantic network formalism, thereby providing a uniform representation to all components. The natural language component is composed of a generalized augmented transition network parser/grammar and a discourse analyzer for managing the natural language interactions. The knowledge representation component supports the natural language component by providing a uniform representation of the content and structure of the interaction, at the parser, discourse, and domain levels. This uniform representation allows distinct tasks, such as dialog management, domain-specific reasoning, and meta-reasoning about the Bayesian network, to all use the same information source, without requiring mediation. This is important because there are queries, such as "Why?", whose interpretation and response require information from each of these tasks. By contrast, traditional approaches treat each subtask as a "black box" with respect to other task components, and have a separate knowledge representation language for each. As a result, they have had much more difficulty providing useful responses.
BANTER: A Bayesian Network Tutoring Shell
, 1997
Abstract

Cited by 4 (0 self)
We present an educational tool for bringing the information contained in a Bayesian network to the end user in an easily intelligible form. The BANTER shell is designed to tutor users in evaluation of hypotheses and selection of optimal diagnostic procedures. BANTER can be used with any Bayesian network containing nodes that can be classified into hypotheses, observations, and diagnostic procedures. The system enables one to present various types of queries to the network, to test one's ability to select optimal diagnostic procedures, and to request explanations. We describe the system's capabilities by illustrating how it functions with two structurally different network models of real-world medical problems. Keywords: Bayesian networks, computer-aided instruction, explanation. 1 Introduction In recent years Bayesian belief networks have become the representation of choice for building decision-making systems in domains characterized by uncertainty. The popularity of Bayesian networ...
Visual explanation of evidence in additive classifiers
 Proc. IAAI
, 2006
Abstract

Cited by 4 (0 self)
Machine-learned classifiers are important components of many data mining and knowledge discovery systems. In several application domains, an explanation of the classifier's reasoning is critical for the classifier's acceptance by the end-user. We describe a framework, ExplainD, for explaining decisions made by classifiers that use additive evidence. ExplainD applies to many widely used classifiers, including linear discriminants and many additive models. We demonstrate our ExplainD framework using implementations of naïve Bayes, linear support vector machine, and logistic regression classifiers on example applications. ExplainD uses a simple graphical explanation of the classification process to provide visualizations of the classifier decisions, visualization of the evidence for those decisions, the capability to speculate on the effect of changes to the data, and the capability, wherever possible, to drill down and audit the source of the evidence. We demonstrate the effectiveness of ExplainD in the context of a deployed web-based system (Proteome Analyst) and using a downloadable Python-based implementation.
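The additive-evidence decomposition that a framework like ExplainD visualizes can be sketched for a logistic regression: each feature's weighted value is an additive contribution to the log-odds, so the decision splits exactly into per-feature evidence bars. The weights, bias, and feature values below are entirely hypothetical:

```python
from math import exp

# Hypothetical logistic-regression classifier: weights and bias are made up.
weights = {"age": 0.8, "blood_pressure": 1.5, "cholesterol": -0.4}
bias = -1.0

def explain(x):
    """Decompose the decision into additive per-feature evidence (log-odds units)."""
    contributions = {f: weights[f] * x[f] for f in weights}
    logit = bias + sum(contributions.values())
    prob = 1 / (1 + exp(-logit))
    return contributions, logit, prob

x = {"age": 1.2, "blood_pressure": 0.5, "cholesterol": 2.0}
contributions, logit, prob = explain(x)
for feature, evidence in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature:15s} {evidence:+.2f}")   # the bars a graphical view would draw
print(f"log-odds = {logit:+.2f}, P(class=1) = {prob:.2f}")
```

Because the decomposition is exact for additive models, speculating on a change to one feature (as the abstract describes) amounts to adjusting a single bar and re-summing.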