Results 1-10 of 36
The Transferable Belief Model
Artificial Intelligence, 1994
Cited by 371 (13 self)
We describe the transferable belief model, a model for representing quantified beliefs based on belief functions. Beliefs can be held at two levels: (1) a credal level where beliefs are entertained and quantified by belief functions, (2) a pignistic level where beliefs can be used to make decisions and are quantified by probability functions. The relation between the belief function and the probability function when decisions must be made is derived and justified. Four paradigms are analyzed in order to compare Bayesian, upper and lower probability, and the transferable belief approaches.
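The credal-to-pignistic step described above has a standard closed form: each focal mass m(A) is shared equally among the elements of A, after normalizing away any mass on the empty set, giving BetP(x) = Σ_{A ∋ x} m(A) / (|A| · (1 − m(∅))). A minimal sketch in Python; the frame and the mass numbers are purely illustrative:

```python
def pignistic(m, frame):
    """Pignistic transformation BetP: spread each focal mass m(A)
    equally over the elements of A, normalizing away mass on the
    empty set. `m` maps frozensets (focal elements) to masses."""
    norm = 1.0 - m.get(frozenset(), 0.0)
    bet = {x: 0.0 for x in frame}
    for focal, mass in m.items():
        if not focal:
            continue
        share = mass / (len(focal) * norm)
        for x in focal:
            bet[x] += share
    return bet

# Illustrative mass function on the frame {a, b, c}: committed mass,
# partial ignorance on {a, b}, and total ignorance on the full frame.
frame = {"a", "b", "c"}
m = {frozenset({"a"}): 0.5,
     frozenset({"a", "b"}): 0.3,
     frozenset(frame): 0.2}
print(pignistic(m, frame))
```

The resulting BetP is an ordinary probability function, which is what makes decision-making possible at the pignistic level.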
Axioms for probability and belief-function propagation
Uncertainty in Artificial Intelligence, 1990
Cited by 137 (17 self)
In this paper, we describe an abstract framework and axioms under which exact local computation of marginals is possible. The primitive objects of the framework are variables and valuations. The primitive operators of the framework are combination and marginalization; these operate on valuations. We state three axioms for these operators and derive the possibility of local computation from the axioms. Next, we describe a propagation scheme for computing marginals of a valuation when we have a factorization of the valuation on a hypertree. Finally, we show how the problem of computing marginals of joint probability distributions and joint belief functions fits the general framework.
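As a concrete (and deliberately tiny) instance of the framework, probability potentials satisfy the axioms when combination is pointwise multiplication and marginalization is summing out variables. The sketch below uses binary variables and a tuple-based potential representation; both are our illustrative choices, not the paper's notation:

```python
from itertools import product

# A potential is (vars, table): vars is a tuple of variable names,
# table maps value tuples (one 0/1 value per variable) to numbers.

def combine(p, q):
    """Combination: pointwise product on the union of the two domains."""
    pv, pt = p
    qv, qt = q
    vs = tuple(dict.fromkeys(pv + qv))  # union of variables, order kept
    table = {}
    for vals in product([0, 1], repeat=len(vs)):
        a = dict(zip(vs, vals))
        table[vals] = (pt[tuple(a[v] for v in pv)] *
                       qt[tuple(a[v] for v in qv)])
    return vs, table

def marginalize(p, keep):
    """Marginalization: sum out every variable not in `keep`."""
    pv, pt = p
    kv = tuple(v for v in pv if v in keep)
    out = {}
    for vals, x in pt.items():
        key = tuple(val for v, val in zip(pv, vals) if v in keep)
        out[key] = out.get(key, 0.0) + x
    return kv, out

# P(A) combined with P(B|A), then marginalized to B, yields P(B).
pA  = (("A",), {(0,): 0.6, (1,): 0.4})
pBA = (("A", "B"), {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8})
print(marginalize(combine(pA, pBA), {"B"}))
```

Local computation rests on the fact that marginalization can be pushed inside combination, so the full joint table never has to be built.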
Belief Functions: The Disjunctive Rule of Combination and the Generalized Bayesian Theorem
Cited by 121 (6 self)
We generalize Bayes' theorem within the transferable belief model framework. The Generalized Bayesian Theorem (GBT) allows us to compute the belief over a space Θ given an observation x ⊆ X when one knows only the beliefs over X for every θi ∈ Θ. We also discuss the Disjunctive Rule of Combination (DRC) for distinct pieces of evidence. This rule allows us to compute the belief over X from the beliefs induced by two distinct pieces of evidence when one knows only that one of the pieces of evidence holds. The properties of the DRC and GBT and their uses for belief propagation in directed belief networks are analysed. The use of discounting factors is justified. The application of these rules is illustrated by an example of medical diagnosis.
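The DRC itself has a simple combinatorial form: the combined mass of a set C is the total mass of all pairs (A, B) with A ∪ B = C, which is the right pooling when only one of the two sources is known to hold. A minimal sketch, with illustrative masses:

```python
from itertools import product

def drc(m1, m2):
    """Disjunctive rule of combination: m(C) = sum of m1(A) * m2(B)
    over all pairs with A union B = C. Used when it is only known
    that at least one of the two evidence sources holds."""
    out = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        c = a | b
        out[c] = out.get(c, 0.0) + wa * wb
    return out

# Two illustrative mass functions on the frame {x, y, z}.
m1 = {frozenset({"x"}): 0.7, frozenset({"x", "y"}): 0.3}
m2 = {frozenset({"y"}): 0.6, frozenset({"x", "y", "z"}): 0.4}
print(drc(m1, m2))
```

Unlike the conjunctive (Dempster) rule, the DRC can never produce mass on the empty set, so no normalization step is needed.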
Causality: Models, Reasoning, and Inference
2000
Cited by 103 (15 self)
This paper explores the role of Directed Acyclic Graphs (DAGs) as a representation of conditional independence relationships. We show that DAGs offer polynomially sound and complete inference mechanisms for inferring conditional independence relationships from a given causal set of such relationships. As a consequence, d-separation, a graphical criterion for identifying independencies in a DAG, is shown to uncover more valid independencies than any other criterion. In addition, we employ the Armstrong property of conditional independence to show that the dependence relationships displayed by a DAG are inherently consistent, i.e., for every DAG D there exists some probability distribution P that embodies all the conditional independencies displayed in D and no others.

INTRODUCTION AND SUMMARY OF RESULTS. Networks employing Directed Acyclic Graphs (DAGs) have a long and rich tradition, starting with the geneticist Wright (1921). He developed a method called path analysis [Wright, 1934], which later became an established representation of causal models in economics [Wold, 1964], sociology [Blalock, 1971] and psychology [Duncan, 1975]. Influence diagrams represent another application of
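The d-separation criterion can be tested without enumerating paths, via the well-known moral-graph reduction: restrict the DAG to the ancestors of the queried nodes, moralize (marry co-parents and drop edge directions), delete the conditioning nodes, and check connectivity. A sketch under these assumptions, using a collider as the example:

```python
def d_separated(parents, x, y, z):
    """Test whether x is d-separated from y given the set z in a DAG.
    `parents` maps each node to its set of parents (absent = no parents)."""
    # 1. Ancestral subgraph of {x, y} union z.
    anc, stack = set(), [x, y, *z]
    while stack:
        n = stack.pop()
        if n not in anc:
            anc.add(n)
            stack.extend(parents.get(n, ()))
    # 2. Moralize: undirected edges parent-child, plus married co-parents.
    adj = {n: set() for n in anc}
    for n in anc:
        ps = [p for p in parents.get(n, ()) if p in anc]
        for p in ps:
            adj[n].add(p); adj[p].add(n)
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j]); adj[ps[j]].add(ps[i])
    # 3. Delete conditioning nodes, then search for any path x .. y.
    for n in z:
        adj.pop(n, None)
    for n in adj:
        adj[n] -= set(z)
    seen, stack = set(), [x]
    while stack:
        n = stack.pop()
        if n == y:
            return False          # connected, hence NOT d-separated
        if n not in seen and n in adj:
            seen.add(n)
            stack.extend(adj[n])
    return True

# Collider A -> C <- B: A and B are d-separated given nothing,
# but conditioning on the collider C connects them.
parents = {"C": {"A", "B"}}
print(d_separated(parents, "A", "B", set()))
print(d_separated(parents, "A", "B", {"C"}))
```

The collider example shows the characteristic asymmetry of d-separation: observing a common effect creates dependence between its causes.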
Logical and algorithmic properties of conditional independence and graphical models
The Annals of Statistics 21, 1993
Cited by 56 (7 self)
PULCINELLA - A General Tool for Propagating Uncertainty in Valuation Networks
Proc. 7th Conf. on Uncertainty in AI, 323-331, 1991
Cited by 47 (1 self)
We present Pulcinella and its use in comparing uncertainty theories. Pulcinella is a general tool for Propagating Uncertainty based on the Local Computation technique of Shafer and Shenoy. It may be specialized to different uncertainty theories: at the moment, Pulcinella can propagate probabilities, belief functions, Boolean values, and possibilities. Moreover, Pulcinella allows the user to easily define his own specializations. To illustrate Pulcinella, we analyze two examples using each of the four theories above. In the first, we mainly focus on intrinsic differences between the theories. In the second, we take a knowledge engineer's viewpoint and check the adequacy of each theory for a given problem.
Fusion and Propagation in Graphical Belief Models
1988
Cited by 17 (5 self)
Graphical models give a clear and concise way of describing dependencies among many variables. Only relationships among variables which all share a common hyperedge must be modeled, considerably simplifying both the modeling and the computational task. Graphical models have been studied by Pearl [1986a, 1986b], Moussouris [1974], and Lauritzen and Spiegelhalter [1988] in the Bayesian case, and by Kong [1986a], Shafer, Shenoy, and Mellouli [1986], and Shenoy and Shafer [1986] in the belief function case. Belief functions are a generalization of probability that allow ways to express total ignorance, Bayesian prior probability distributions, conditional probability distributions (likelihoods), logical relationships (production rules), and observations. All these diverse types of knowledge can be combined with a uniform fusion rule, the direct sum operator. Belief functions can be simply restricted to a smaller frame and easily extended to a larger frame without adding additional information. The theory of belief functions is developed in Shafer [1976, 1982] and Kong [1986a]. By a simple procedure given here and in Kong [1986b], we can transform the model hypergraph into a tree of closures. This is a tree of "chunks" of the original problem; each "chunk" can be computed independently of all other chunks except its neighbors. Each node in the tree of closures passes messages (expressed as belief functions) to its neighbors, consisting of the local information fused with all the information that has propagated through the other branches of the tree. Using this propagation algorithm along with the fusion algorithm given by the direct sum operator, we can easily compute marginal beliefs, and can save considerable computational cost over the brute-force approach.
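The uniform fusion rule referred to above, the direct sum operator, is Dempster's rule of combination: intersect focal sets, multiply masses, and renormalize away the conflicting mass that lands on the empty set. A minimal sketch with illustrative masses:

```python
from itertools import product

def dempster(m1, m2):
    """Dempster's rule of combination (the direct sum): conjunctive
    combination of two mass functions followed by normalization that
    removes the conflict mass assigned to the empty set."""
    out = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        c = a & b
        out[c] = out.get(c, 0.0) + wa * wb
    conflict = out.pop(frozenset(), 0.0)
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: w / (1.0 - conflict) for s, w in out.items()}

# The vacuous belief function (total ignorance: all mass on the full
# frame) is the neutral element: combining with it changes nothing.
frame = frozenset({"a", "b"})
vacuous = {frame: 1.0}
m = {frozenset({"a"}): 0.8, frame: 0.2}
print(dempster(vacuous, m))
```

The vacuous/neutral-element property is exactly what lets belief functions express total ignorance without distorting other evidence, as the abstract notes.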
Optimality issues in constructing a Markov tree from graphical models
Computational and Graphical Statistics, 1991
Cited by 16 (0 self)
Several recent papers have described probability models which use graphs and hypergraphs to represent relationships among the variables. Two related computing algorithms are commonly used to manipulate such models: the peeling algorithm, which eliminates variables one at a time to find the marginal distribution of a single variable, and the fusion and propagation algorithm, which simultaneously solves for many marginal distributions by passing messages in a Tree of Cliques whose nodes correspond to subsets of variables. The peeling algorithm requires an elimination order. As demonstrated in this paper, the elimination order can in turn be used to construct a Tree of Cliques for propagation and fusion. This paper addresses three computational issues: (1) The choice of elimination order determines the size of the largest node of the Tree of Cliques, which dominates the computational cost for the probability model using either peeling or fusion and propagation. We review heuristics for choosing an elimination order. (2) Inserting intersection nodes into the Tree of Cliques produces a junction tree, which has a lower computational cost. We present an algorithm which produces junction trees with high computational efficiency. (3) Augmenting the Tree of Cliques with additional nodes can lead to a new tree structure which more clearly expresses the relationship between the original graphical model and the tree model.
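One common heuristic for choosing an elimination order is the greedy min-degree rule (min-fill is a closely related variant): repeatedly eliminate the variable with the fewest live neighbours, connecting its neighbours into a clique. The graph representation and example below are our illustrative choices, not the paper's:

```python
def min_degree_order(edges, variables):
    """Greedy min-degree elimination order: eliminate the variable with
    the fewest neighbours, marrying its neighbours into a clique (the
    fill-in whose cliques become the nodes of the Tree of Cliques)."""
    adj = {v: set() for v in variables}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    order = []
    while adj:
        v = min(adj, key=lambda x: len(adj[x]))  # smallest degree
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
            adj[u] |= nbrs - {u}                 # marry the neighbours
        order.append(v)
    return order

# A chain A - B - C - D: min-degree peels endpoints first, so the
# largest clique created has only two variables.
print(min_degree_order([("A", "B"), ("B", "C"), ("C", "D")], "ABCD"))
```

Finding a truly optimal order is NP-hard, which is why greedy heuristics such as this one are used in practice.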
Dempster's rule for evidence ordered in a complete directed acyclic graph
International Journal of Approximate Reasoning, 1993
Cited by 15 (7 self)
For the case of evidence ordered in a complete directed acyclic graph, this paper presents a new algorithm for Dempster's rule with lower computational complexity than step-by-step application of Dempster's rule. In this problem, every original pair of evidences has a corresponding evidence against the simultaneous belief in both propositions. In this case, it is uncertain whether the propositions of any two evidences are in logical conflict. The original evidences are associated with the vertices and the additional evidences are associated with the edges. The original evidences are ordered, i.e., for every pair of evidences it is determinable which of the two evidences is the earlier one. We are interested in finding the most probable completely specified path through the graph, where transitions are possible only from lower- to higher-ranked vertices. The path is here a representation for a sequence of states, for instance a sequence of snapshots of a physical object's track. A completely specified path means that the path includes no vertices other than those stated in the path representation, as opposed to an incompletely specified path that may also include other vertices. In a hierarchical network of all subsets of the frame, i.e., of all incompletely specified paths, the original and additional evidences support subsets that are not disjoint; thus it is not possible to prune the network to a tree. Instead of propagating belief, the new algorithm reasons about the logical conditions of a completely specified path through the graph. The new algorithm is O(|Θ| log |Θ|), compared to O(|Θ|^{log |Θ|}) for the classic brute-force algorithm. After a detailed presentation of the reasoning behind the new algorithm, we conclude that it is feasible to reason without approximation about completely specified paths through a complete directed acyclic graph.
Theory of evidence - a survey of its mathematical foundations, applications and computational aspects
ZOR Mathematical Methods of Operations Research, 1994
Cited by 15 (0 self)
The mathematical theory of evidence was introduced by Glenn Shafer in 1976 as a new approach to the representation of uncertainty. This theory can be represented under several distinct but more or less equivalent forms. Probabilistic interpretations of evidence theory have their roots in Arthur Dempster's multivalued mappings of probability spaces. This leads to random set and, more generally, to random filter models of evidence. In this probabilistic view, evidence is seen as more or less probable arguments for certain hypotheses, and it can be used to support those hypotheses to certain degrees. These degrees of support are in fact the reliabilities with which the hypotheses can be derived from the evidence. Alternatively, the mathematical theory of evidence can be founded axiomatically on the notion of belief functions or on the allocation of belief masses to subsets of a frame of discernment. These approaches aim to present evidence theory as an extension of probability theory. Evidence theory has been used to represent uncertainty in expert systems, especially in the domain of diagnostics. It can be applied to decision analysis and it gives a new perspective for statistical analysis. Among its further applications are image processing, project planning and scheduling, and risk analysis. The computational problems of evidence theory