Results 1–10 of 22
Probabilistic Diagnosis Using a Reformulation of the INTERNIST-1/QMR Knowledge Base. II. Evaluation of Diagnostic Performance
 Medicine
, 1990
Abstract

Cited by 129 (10 self)
We have developed a probabilistic reformulation of the Quick Medical Reference (QMR) system. In Part I of this two-part series, we described a two-level, multiply connected belief-network representation of the QMR knowledge base and a simulation algorithm to perform probabilistic inference on the reformulated knowledge base. In Part II of this series, we report on an evaluation of the probabilistic QMR, in which we compare the performance of QMR to that of our probabilistic system on cases abstracted from continuing medical education materials from Scientific American Medicine. In addition, we analyze empirically several components of the probabilistic model and simulation algorithm.
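The two-level representation described here models each finding as a noisy-OR gate over its parent diseases. As a rough illustration (the disease names, link probabilities, and leak value below are hypothetical, not taken from the QMR knowledge base), the conditional probability of a finding given a set of present diseases can be sketched as:

```python
def noisy_or(leak, links, present):
    """P(finding = 1 | disease states) under a noisy-OR gate.

    leak    -- probability the finding appears with no disease present
    links   -- dict: disease name -> P(that disease alone causes the finding)
    present -- set of diseases currently "on"
    """
    # The finding stays absent only if the leak and every active
    # parent independently fail to produce it.
    p_absent = 1.0 - leak
    for d in present:
        p_absent *= 1.0 - links[d]
    return 1.0 - p_absent

links = {"flu": 0.8, "cold": 0.4}
print(noisy_or(0.05, links, {"flu"}))          # ≈ 0.81  (1 - 0.95 * 0.2)
print(noisy_or(0.05, links, {"flu", "cold"}))  # ≈ 0.886 (1 - 0.95 * 0.2 * 0.6)
```

Because each parent contributes an independent chance of producing the finding, the probability of absence factorizes over the active parents; this factorization is what makes the two-level network amenable to the simulation algorithms described in Part I.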
A Bayesian Analysis of Simulation Algorithms for Inference in Belief Networks
 Networks
, 1993
Abstract

Cited by 17 (3 self)
A belief network is a graphical representation of the underlying probabilistic relationships in a complex system. Belief networks have been employed as a representation of uncertain relationships in computer-based diagnostic systems. These diagnostic systems provide assistance by assigning likelihoods to alternative explanatory hypotheses in response to a set of findings or observations. Approximation algorithms have been used to compute likelihoods of hypotheses in large networks. We analyze the performance of leading Monte Carlo approximation algorithms for computing posterior probabilities in belief networks. The analysis differs from earlier attempts to characterize the behavior of simulation algorithms in our explicit use of Bayesian statistics: We update a probability distribution over target probabilities of interest with information from randomized trials. For real ε, δ < 1 and for a probabilistic inference Pr[x|e], the output of an inference approximation algorithm is an (ε, δ)-estimate of Pr[x|e] if with probability at least 1 − δ the output is within relative error ε of Pr[x|e]. We construct a stopping rule for the number of simulations required by logic sampling, randomized approximation schemes, and likelihood weighting to provide (ε, δ)-estimates of Pr[x|e]. With probability 1 − δ, the stopping rule is optimal in the sense that the algorithm performs the minimum number of required simulations. We prove that our stopping rules are insensitive to the prior probability distribution on Pr[x|e].
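To make the (ε, δ)-estimate definition concrete, here is a minimal Monte Carlo sketch. It uses the classical Chernoff-style worst-case sample count rather than the paper's adaptive Bayesian stopping rule, and the target probability, tolerances, and seed are illustrative assumptions:

```python
import math
import random

def sample_size(p, eps, delta):
    """Classical worst-case number of i.i.d. samples sufficient for an
    (eps, delta)-estimate of a Bernoulli probability p under relative
    error. The paper's Bayesian stopping rule is adaptive and typically
    needs fewer simulations."""
    return math.ceil(3.0 * math.log(2.0 / delta) / (eps * eps * p))

def estimate(p, eps, delta, seed=0):
    """Monte Carlo estimate of p from sample_size(...) coin flips."""
    rng = random.Random(seed)
    n = sample_size(p, eps, delta)
    hits = sum(rng.random() < p for _ in range(n))
    return hits / n

# With probability at least 1 - delta, the estimate lies within
# relative error eps of the true value p.
p_hat = estimate(p=0.3, eps=0.1, delta=0.05)
```

The bound's dependence on 1/p is why small posterior probabilities are expensive to estimate by sampling, which motivates stopping rules that adapt to the evidence collected so far.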
Learning stochastic feedforward networks
, 1990
Abstract

Cited by 15 (1 self)
The work reported here began with the desire to find a network architecture that shared with Boltzmann machines [6, 1, 7] the capacity to learn arbitrary probability distributions over binary vectors, but that did not require the negative phase of Boltzmann machine learning. It was hypothesized that eliminating the negative phase would improve learning performance. This goal was achieved by replacing the Boltzmann machine's symmetric connections with feedforward connections. In analogy with Boltzmann machines, the sigmoid function was used to compute the conditional probability of a unit being on from the weighted input from other units. Stochastic simulation of such a network is somewhat more complex than for a Boltzmann machine, but is still possible using local communication. Maximum likelihood, gradient-ascent learning can be done with a local Hebb-type rule.
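Ancestral (top-down) simulation of such a feedforward network is simple to sketch: each unit is switched on with probability given by the sigmoid of its weighted input from earlier units. A minimal illustration (the weights, biases, and layer sizes below are made up):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_layer(prev, weights, biases, rng):
    """Sample a layer of binary units: unit i fires with probability
    sigmoid(biases[i] + sum_j weights[i][j] * prev[j])."""
    out = []
    for w_row, b in zip(weights, biases):
        p = sigmoid(b + sum(w * x for w, x in zip(w_row, prev)))
        out.append(1 if rng.random() < p else 0)
    return out

rng = random.Random(1)
visible = [1, 0, 1]                              # states of the earlier layer
W = [[0.5, -0.3, 0.8], [-0.2, 0.4, 0.1]]         # hypothetical weights
b = [0.0, -0.5]                                  # hypothetical biases
hidden = sample_layer(visible, W, b, rng)        # a stochastic binary sample
```

Unlike a Boltzmann machine, no equilibrium (negative-phase) sampling is needed: one top-down pass through the layers yields an exact sample from the network's distribution.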
Belief Updating by Enumerating High-Probability Independence-Based Assignments
 In: Proceedings of the 10th Conference on Uncertainty in Artificial Intelligence
, 1994
Abstract

Cited by 13 (2 self)
Independence-based (IB) assignments to Bayesian belief networks were originally proposed as abductive explanations. IB assignments assign fewer variables in abductive explanations than do schemes assigning values to all evidentially supported variables. We use IB assignments to approximate marginal probabilities in Bayesian belief networks. Recent work in belief updating for Bayes networks attempts to approximate posterior probabilities by finding a small number of the highest-probability complete (or perhaps evidentially supported) assignments. Under certain assumptions, the probability mass in the union of these assignments is sufficient to obtain a good approximation. Such methods are especially useful for highly connected networks, where the maximum clique size or the cutset size make the standard algorithms intractable. Since IB assignments contain fewer assigned variables, the probability mass in each assignment is greater than in the respective complete assignment. Thus, fewer I...
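The underlying idea of approximating a marginal from a few high-probability assignments can be sketched on a toy joint distribution. The table below is invented for illustration, and it enumerates complete assignments only; real IB assignments additionally leave irrelevant variables unassigned, so each retained assignment covers even more mass:

```python
# Hypothetical joint over three binary variables (A, B, C); values are
# illustrative and sum to 1.
P = {
    (1, 1, 1): 0.40, (1, 1, 0): 0.25, (1, 0, 1): 0.15,
    (0, 1, 1): 0.10, (1, 0, 0): 0.05, (0, 0, 1): 0.03,
    (0, 1, 0): 0.015, (0, 0, 0): 0.005,
}

def approx_marginal(var_index, value, k):
    """Estimate P(X_var = value) from the k highest-probability complete
    assignments; the unexplored mass bounds the approximation error."""
    top = sorted(P.items(), key=lambda kv: kv[1], reverse=True)[:k]
    covered = sum(p for _, p in top)
    est = sum(p for a, p in top if a[var_index] == value)
    return est, 1.0 - covered   # (lower-bound estimate, worst-case error)

est, err = approx_marginal(0, 1, k=4)   # P(A = 1): true value is 0.85
# est ≈ 0.80 and err ≈ 0.10, so 0.80 <= P(A = 1) <= 0.90.
```

The more mass the enumerated assignments cover, the tighter the interval; IB assignments exploit local independence to cover more mass per enumerated item.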
Decision Analytic Networks in Artificial Intelligence
, 1995
Abstract

Cited by 8 (0 self)
Researchers in artificial intelligence and decision analysis share a concern with the construction of formal models of human knowledge and expertise. Historically, however, their approaches to these problems have diverged. Members of these two communities have recently discovered common ground: a family of graphical models of decision theory known as influence diagrams or as belief networks. These models are equally attractive to theoreticians, decision modelers, and designers of knowledge-based systems. From a theoretical perspective, they combine graph theory, probability theory, and decision theory. From an implementation perspective, they lead to powerful automated systems. Although many practicing decision analysts have already adopted influence diagrams as modeling and structuring tools, they may remain unaware of the theoretical work that has emerged from the artificial intelligence community. This paper surveys the first decade or so of this work.
Stochastic Sampling and Search in Belief Updating Algorithms for . . .
 In Working Notes of the AAAI Spring Symposium on Search Techniques for Problem Solving under Uncertainty and Incomplete Information
, 1999
Abstract

Cited by 8 (1 self)
Bayesian networks are gaining increasing popularity as a modeling tool for complex problems involving reasoning under uncertainty. Since belief updating in very large Bayesian networks cannot be effectively addressed by exact methods, approximate inference schemes may often be the only computationally feasible alternative. There are two basic classes of approximate schemes: stochastic sampling and search-based algorithms. We summarize
Decision analysis techniques for knowledge acquisition: Combining information and preference models using Aquinas
 Proceedings of the Second AAAI Knowledge Acquisition for Knowledge-Based Systems Workshop
, 1987
Abstract

Cited by 6 (5 self)
The field of decision analysis is concerned with the application of formal theories of probability and utility to the guidance of action. Decision analysis has been used for many years as a way to gain insight regarding decisions that involve significant amounts of uncertain information and complex preference issues, but it has been largely overlooked by knowledge-based system researchers. This paper illustrates the value of incorporating decision analysis insights and techniques into the knowledge acquisition and decision-making process. This approach is being implemented within Aquinas, an automated knowledge acquisition and decision support tool based on personal construct theory that is under development at Boeing Computer Services. The need for explicit preference models in knowledge-based systems will be shown. The modeling of problems will be viewed from the perspectives of decision analysis and personal construct theory. We will outline the approach of Aquinas and then present an example that illustrates how preferences can be used to guide the knowledge acquisition process and the selection of alternatives in decision making. Techniques for combining supervised and unsupervised inductive learning from data with expert judgment, and integration of knowledge and inference methods at varying levels of precision, will be presented. Personal construct theory and decision theory are shown to be complementary: the former provides a plausible account of the dynamics of model formulation and revision, while the latter provides a consistent framework for model evaluation. Applied personal construct theory (in the form of tools such as Aquinas) and applied decision theory (in the form of decision analysis) are moving along convergent paths. We see the approach in this paper as the first step toward a full integration of insights from the two disciplines and their respective repertory grid and influence diagram representations.
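The core decision-analytic step, combining a probability model with an explicit preference (utility) model and choosing the action of maximal expected utility, can be sketched as follows. The actions, probabilities, and utilities below are invented for illustration and are unrelated to Aquinas itself:

```python
# Each action is a lottery: a list of (probability, utility) outcomes.
outcomes = {
    "ship_now": [(0.7, 100), (0.3, -50)],
    "delay":    [(0.9,  60), (0.1,  20)],
}

def expected_utility(lottery):
    """Probability-weighted sum of utilities for one action."""
    return sum(p * u for p, u in lottery)

# Decision rule: pick the action with maximal expected utility.
best = max(outcomes, key=lambda a: expected_utility(outcomes[a]))
# EU(ship_now) = 0.7*100 + 0.3*(-50) = 55
# EU(delay)    = 0.9*60  + 0.1*20    = 56, so "delay" is preferred.
```

Making the utility model explicit, rather than burying preferences in rules, is precisely the practice the paper argues knowledge-based systems should adopt.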
Efficient Search-Based Inference for Noisy-OR Belief Networks: Top-Epsilon
 In Proc. Twelfth Conf. on Uncertainty in Artificial Intelligence
, 1996
Abstract

Cited by 6 (0 self)
Inference algorithms for arbitrary belief networks are impractical for large, complex belief networks.
Framework for synthesizing semantic-level indexes
 Multimedia Tools Appl
, 2003
Abstract

Cited by 5 (3 self)
Extraction of syntactic features is a well-defined problem, which has led to their being employed almost exclusively in most content-based retrieval systems. However, semantic-level indices are more appealing to users, as they are closer to the user's personal space. Most of the work done at the semantic level is confined to a limited domain, as the features developed and employed therein apply satisfactorily only to that particular domain. Scaling up such systems would inevitably result in large numbers of features. Currently, there is a lacuna in the availability of a framework that can effectively integrate these features and furnish semantic-level indices. The objective of this paper is to highlight some of the issues in the design of such a framework and to report on the status of its development. In our framework, construction of a high-level index is achieved through the synthesis of its large set of elemental features. From the large collection of these features, an image/video class is characterized by automatically selecting only a few principal features. By properly mapping the constrained multidimensional feature space constituted by these principal features with the semantics of the data, it is feasible to construct high-level indices. The problem remains, however, to automatically identify the principal or meaningful subset of features. This is done through the medium of a Bayesian network that discerns the data into cliques by training with pre-classified data. The Bayesian network associates each clique of data points in the multidimensional feature space to one of the classes during training that can later be used for evaluating the most probable class to
Bayesian Network Models for Generation of Crisis Management Training Scenarios
 In Proceedings of IAAI-98
, 1998
Abstract

Cited by 5 (4 self)
We present a noisy-OR Bayesian network model for simulation-based training, and an efficient search-based algorithm for automatic synthesis of plausible training scenarios from constraint specifications. This randomized algorithm for approximate causal inference is shown to outperform other randomized methods, such as those based on perturbation of the maximally plausible scenario. It has the added advantage of being able to generate acceptable scenarios (based on a maximum penalized likelihood criterion) faster than human subject matter experts, and with greater diversity than deterministic inference. We describe a field-tested interactive training system for crisis management and show how our model can be applied offline to produce scenario specifications. We then evaluate the performance of our automatic scenario generator and compare its results to those achieved by human instructors, stochastic simulation, and maximum likelihood inference. Finally, we discuss the applicability of our system and framework to a broader range of modeling problems for computer-assisted instruction.