Results 1–10 of 10
The Posterior Probability of Bayes Nets with Strong Dependences
 Soft Computing
, 1999
Abstract

Cited by 14 (1 self)
Stochastic independence is an idealized relationship located at one end of a continuum of values measuring degrees of dependence. Modeling real-world systems, we are often not interested in the distinction between exact independence and any degree of dependence, but between weak, ignorable and strong, substantial dependence. Good models map significant deviance from independence and neglect approximate independence or dependence weaker than a noise threshold. This intuition is applied to learning the structure of Bayes nets from data. We determine the conditional posterior probabilities of structures given that the degree of dependence at each of their nodes exceeds a critical noise level. Deviance from independence is measured by mutual information. Arc probabilities are determined by whether the amount of mutual information the neighbors contribute to a node is greater than a critical minimum deviance from independence. A χ² approximation for the probability density function of mutual info...
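The χ² idea the abstract alludes to can be illustrated with a small sketch: for discrete variables, the G-statistic 2N·MI (with MI in nats) is approximately χ²-distributed under independence, so a dependence test can threshold it against a χ² quantile. All variable names and numbers below are illustrative, not taken from the paper.

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        # (c/n) * log( p(x,y) / (p(x) p(y)) ), with counts substituted in
        mi += (c / n) * math.log((c * n) / (px[x] * py[y]))
    return mi

# Dependent pair: y copies x with 90% probability.
random.seed(0)
xs = [random.randint(0, 1) for _ in range(2000)]
ys = [x if random.random() < 0.9 else 1 - x for x in xs]

n = len(xs)
g = 2 * n * mutual_information(xs, ys)  # G-statistic, ~ chi-square under independence
critical = 3.841                        # chi-square 95% quantile, 1 degree of freedom
print(g > critical)                     # dependence exceeds the noise threshold
```

Thresholding the statistic this way keeps only "strong" dependences, in the spirit of the critical noise level described above.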
Formalising a software safety case via belief networks
, 1995
Abstract

Cited by 13 (5 self)
The SHIP project (ref. EV5V 103) is being carried out with financial support from the EEC in the
Formalising Engineering Judgement on Software Dependability via Belief Networks
 Proc. DCCA'97 (Sixth IFIP International Working Conference on Dependable Computing for Critical Applications), Garmisch-Partenkirchen
, 1997
Abstract

Cited by 7 (3 self)
We present the use of Bayesian belief networks to formalise reasoning about software dependability, so as to make assessments easier to build and to check. Bayesian belief networks include a graphical representation of the structure of a complex argument, and a sound calculus for representing probabilistic information and updating it with new observations. We illustrate the method and show its feasibility via a simple example, developed via a commercial computer tool, representing a form of argument which is often used in claims for high dependability. This example is not meant to be "typical", since a sound and complete argument can only be built using the knowledge available in the specific case of interest. This example, although simple, demonstrates the advantages of using belief networks for sounder assessment of reliability and safety.
1. Introduction
The probabilistic assessment of the dependability of software products is a formidable task, for which no proven me...
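The "sound calculus for updating with new observations" that belief networks provide is Bayes' rule applied over the network structure. A minimal two-node sketch (the node names and probabilities are hypothetical, not from the paper or its commercial tool):

```python
# A minimal two-node belief network: Quality -> TestPass.
# Prior on development quality, and likelihood of passing testing given
# quality; the numbers are purely illustrative.
p_quality = {"high": 0.3, "low": 0.7}
p_pass_given_quality = {"high": 0.99, "low": 0.80}

# Observe that the software passed testing; update the belief on quality
# via Bayes' rule: P(Q | pass) ∝ P(Q) * P(pass | Q).
evidence = {q: p_quality[q] * p_pass_given_quality[q] for q in p_quality}
total = sum(evidence.values())
posterior = {q: v / total for q, v in evidence.items()}
print(posterior)  # belief in "high" quality rises after the observation
```

A real dependability argument would chain many such nodes (process quality, test results, field data), but the update at each node follows this same calculus.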
Do Subjects Understand Base Rates?
 Organisational Behaviour and Human Decision Processes
, 1997
Abstract

Cited by 2 (0 self)
Investigations of the degree to which people neglect or use base rates typically require subjects to make a judgment based on presumptive integrations of base rates and likelihood ratios. The present paper deals with a logically prior issue, whether people understand what data are needed to constitute a proper base rate. The method, which we will call the Partial Information Paradigm, has subjects select data relevant to, for example, diagnosis of a disease, D, based on a symptom, S. The question is whether subjects select those frequencies of cases for which information about the presence or absence of D is available, but for which information about the presence or absence of S is not. Only the former frequencies are relevant to the estimation of the base rate of D, hence to the probability of D given S. Six experiments are reported. Four experiments ask subjects to select those frequencies relevant to diagnosis, one of which also had subjects select frequencies relevant to prediction...
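Why the base rate of D matters for the probability of D given S can be made concrete with a small worked example. The 2×2 frequencies below are hypothetical, chosen only to illustrate which counts a proper base rate requires:

```python
# Hypothetical frequencies of disease D and symptom S (illustrative numbers).
n_d_s, n_d_nos = 30, 10          # D present, with / without S
n_nod_s, n_nod_nos = 60, 900     # D absent,  with / without S

n = n_d_s + n_d_nos + n_nod_s + n_nod_nos
base_rate = (n_d_s + n_d_nos) / n            # P(D): needs the D-status of ALL
                                             # cases, regardless of their S-status
p_s_given_d = n_d_s / (n_d_s + n_d_nos)      # likelihood P(S | D)
p_s = (n_d_s + n_nod_s) / n
p_d_given_s = p_s_given_d * base_rate / p_s  # Bayes' rule
print(p_d_given_s)
```

Note that `base_rate` uses frequencies for which D-status is known even where S-status might not be, which is exactly the selection the Partial Information Paradigm probes.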
Extracting and Propagating Qualitative Correlations as Confirmatory or Disconfirmatory Evidence
Abstract
We present a new method for extracting, representing and propagating qualitative correlations among hypotheses as confirmatory or disconfirmatory evidence in uncertain reasoning. The advantages of the method include: (1) it can be applied to problems where evidence is not explicitly or completely given; (2) few numbers and assumptions need to be provided by domain experts in advance; and consequently, (3) knowledge acquisition is simple, and inconsistency in knowledge bases can be avoided. We put forward two new concepts, called qualitative correlations among hypotheses and qualitative correlation propagation. We propose an algorithm for extracting and representing qualitative correlations among hypotheses, and an algorithm for propagating qualitative correlations and updating possibilities of hypotheses, respectively. We have applied the method to a practical system for infrared spectrum interpretation. The experimental results show that the proposed method is significantly b...
Distributional Smoothing in Bayesian Fault Diagnosis
Abstract
Abstract — Previously, we demonstrated the potential value of constructing asset-specific models for fault diagnosis. We also examined the effects of using split probabilities, where prior probabilities come from asset-specific statistics and likelihoods from fleet-wide statistics. In this paper, we build upon that work to examine the efficacy of smoothing probability distributions between asset-specific and fleet-wide distributions to improve diagnostic accuracy further. In the current experiments, we also add environmental differentiation to asset differentiation, under an assumption that data is acquired in the context of online health monitoring. We hypothesize that overall diagnostic accuracy will be increased with the smoothing approach relative to a fleet-wide model or a set of asset-specific models. The hypothesis is largely supported by the results. Future work will concentrate on improving the smoothing mechanism, particularly in the context of small data sets. Index Terms—Diagnosis (fault), machine learning, Bayesian classifier, smoothing.
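One common way to smooth between an asset-specific and a fleet-wide distribution is shrinkage: a mixture whose weight grows with the amount of asset data. The sketch below assumes that scheme; the function, fault labels, and the smoothing constant `k` are hypothetical, not the paper's mechanism.

```python
def smooth(asset_counts, fleet_dist, k=10.0):
    """Shrink an asset-specific empirical distribution toward the fleet-wide
    one. The mixture weight grows with the asset sample size; k is a
    hypothetical smoothing constant, not a value from the paper."""
    n = sum(asset_counts.values())
    w = n / (n + k)  # little asset data -> lean on the fleet-wide distribution
    keys = set(asset_counts) | set(fleet_dist)
    return {f: w * (asset_counts.get(f, 0) / n if n else 0.0)
               + (1 - w) * fleet_dist.get(f, 0.0)
            for f in keys}

fleet = {"bearing": 0.5, "seal": 0.3, "rotor": 0.2}
asset = {"bearing": 1, "seal": 4}   # few asset-specific observations
print(smooth(asset, fleet))
```

With only five asset observations the result stays close to the fleet-wide distribution; as asset data accumulates, the asset-specific statistics dominate.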
REPRESENTATION TECHNIQUES FOR DISTRIBUTED KNOWLEDGE MODELS: Knowledge fusion with aggregation and sampling
Abstract
We address the introduction of fusion and representation techniques for distributed knowledge sources, which are often available as graphical structures. The distributed artificial intelligence field has established new approaches to distributed problem solving, planning, and learning in distributed systems with coordination. Knowledge fusion of distributed knowledge sources requires techniques for arriving at a broad global model that incorporates all experts in the knowledge network. A practical application of this scenario is the aggregation of knowledge from medical practitioners in hospitals. An aggregation and sampling fusion technique, grounded in probability foundations, is explained.
Do Subjects Discard Relevant Data? A Critical Test of Base Rate Neglect
Abstract
One of the most widely accepted findings of the heuristics-and-biases program is that people making probabilistic inferences are insufficiently sensitive to base rates. Recently, though, the proposition of base rate neglect has been questioned on empirical, methodological, and normative grounds. The present paper introduces a critical test of the hypothesis of base rate neglect. This method, which we will call the Partial Information Paradigm, has subjects select data relevant to, for example, diagnosis of a disease, D, based on a symptom, S. The question is whether subjects select those frequencies of cases for which information about the presence or absence of D is available, but for which information about the presence or absence of S is not. Such frequencies are relevant to the estimation of the base rate of D and to the probability of D given S. Four experiments ask subjects to select those frequencies relevant to diagnosis, one of which also had subjects select frequencies relevant to prediction of S from D. A fifth was concerned with inference of correlation. Very few subjects selected only the normatively correct information. Experiment 6 simplified
unknown title
Abstract
e.g., results of statistical testing, and are often insufficient for certifying the required levels of reliability or safety [12, 19]. The assessor (developer, independent assessor or licensing authority) is confronted with a wealth of evidence about the design methods used, the quality assurance organisation, the results of testing, etc., none of which is usually sufficient to prove the desired conclusion, e.g., that a system has a certain small probability of dangerous failure. In these conditions, the assessor uses "engineering judgement" to integrate all evidence into a statement that the software is safe (or reliable) enough. This step of integrating heterogeneous evidence into a single, probabilistic statement is an essential, unavoidable phase in the assessment. No improvement in software engineering (e.g., wider use of formal methods) will eliminate its necessity (for a more complete argument, see [12]). In this judgement, experts rely on their previous experience as well as on the evidence about the individual project they are assessing. However, they cannot
Competing Fusion for Bayesian Applications
Abstract
In this paper we address and discuss the problem of learning graphical models like Bayesian networks using structure learning algorithms. We present a new parameterized structure learning approach. A competing fusion mechanism to aggregate expert knowledge stored in distributed knowledge bases or probability distributions is also described. Experimental results of a medical case study show that our approach can improve the quality of the learned graphical model.