Results 1–10 of 16
Axioms for probability and belief-function propagation
 Uncertainty in Artificial Intelligence
, 1990
"... In this paper, we describe an abstract framework and axioms under which exact local computation of marginals is possible. The primitive objects of the framework are variables and valuations. The primitive operators of the framework are combination and marginalization. These operate on valuations. We ..."
Abstract

Cited by 137 (17 self)
In this paper, we describe an abstract framework and axioms under which exact local computation of marginals is possible. The primitive objects of the framework are variables and valuations. The primitive operators of the framework are combination and marginalization. These operate on valuations. We state three axioms for these operators and we derive the possibility of local computation from the axioms. Next, we describe a propagation scheme for computing marginals of a valuation when we have a factorization of the valuation on a hypertree. Finally, we show how the problem of computing marginals of joint probability distributions and joint belief functions fits the general framework.
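The combination and marginalization operators described in this abstract can be sketched concretely. The following is a minimal illustration (the representation and numbers are my own, not from the paper): a valuation is a (variables, table) pair over binary variables, combination is pointwise product on the union of the domains, and marginalization sums out the unwanted variables.

```python
from itertools import product

def combine(v1, v2):
    """Combination: pointwise product on the union of the two domains."""
    (xs, t1), (ys, t2) = v1, v2
    zs = tuple(dict.fromkeys(xs + ys))  # order-preserving union of variables
    table = {}
    for cfg in product([0, 1], repeat=len(zs)):
        a = dict(zip(zs, cfg))
        table[cfg] = t1[tuple(a[x] for x in xs)] * t2[tuple(a[y] for y in ys)]
    return (zs, table)

def marginalize(v, keep):
    """Marginalization: sum out every variable not in `keep`."""
    xs, t = v
    ks = tuple(x for x in xs if x in keep)
    table = {}
    for cfg, val in t.items():
        key = tuple(c for x, c in zip(xs, cfg) if x in keep)
        table[key] = table.get(key, 0.0) + val
    return (ks, table)

# Joint on (A, B) factored as f(A) * g(A, B); marginal on B:
f = (("A",), {(0,): 0.3, (1,): 0.7})
g = (("A", "B"), {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8})
joint = combine(f, g)
pB = marginalize(joint, {"B"})  # P(B=0) = 0.3*0.9 + 0.7*0.2 = 0.41
```

With the paper's axioms, the same marginal could be computed locally, without ever building a table over the full set of variables.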
Perspectives on the Theory and Practice of Belief Functions
 International Journal of Approximate Reasoning
, 1990
"... The theory of belief functions provides one way to use mathematical probability in subjective judgment. It is a generalization of the Bayesian theory of subjective probability. When we use the Bayesian theory to quantify judgments about a question, we must assign probabilities to the possible answer ..."
Abstract

Cited by 88 (7 self)
The theory of belief functions provides one way to use mathematical probability in subjective judgment. It is a generalization of the Bayesian theory of subjective probability. When we use the Bayesian theory to quantify judgments about a question, we must assign probabilities to the possible answers to that question. The theory of belief functions is more flexible; it allows us to derive degrees of belief for a question from probabilities for a related question. These degrees of belief may or may not have the mathematical properties of probabilities; how much they differ from probabilities will depend on how closely the two questions are related. Examples of what we would now call belief-function reasoning can be found in the late seventeenth and early eighteenth centuries, well before Bayesian ideas were developed. In 1689, George Hooper gave rules for combining testimony that can be recognized as special cases of Dempster's rule for combining belief functions (Shafer 1986a). Similar rules were formulated by Jakob Bernoulli in his Ars Conjectandi, published posthumously in 1713, and by Johann Heinrich Lambert in his Neues Organon, published in 1764 (Shafer 1978). Examples of belief-function reasoning can also be found in more recent work, by authors ...
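Dempster's rule, mentioned in the abstract, is simple to state computationally. The sketch below (the frame and mass values are hypothetical, chosen for illustration) combines two mass functions by intersecting focal sets, multiplying masses, discarding the mass that lands on the empty set, and renormalizing.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions, each a dict
    mapping focal sets (frozensets) to masses that sum to 1."""
    raw = {}
    conflict = 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                raw[inter] = raw.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2  # mass on the empty set is discarded
    k = 1.0 - conflict
    return {s: v / k for s, v in raw.items()}  # renormalize

# Two items of evidence over the frame {a, b}:
m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"b"}): 0.5, frozenset({"a", "b"}): 0.5}
m = dempster_combine(m1, m2)  # conflict = 0.6*0.5 = 0.3, so k = 0.7
```

Here the combined mass on {a} is 0.3/0.7 = 3/7, and the result again sums to 1.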
A Comparison of Lauritzen-Spiegelhalter, Hugin, and Shenoy-Shafer Architectures for Computing Marginals of Probability Distributions
 Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence (UAI-98)
, 1998
"... In the last decade, several architectures have been proposed for exact computation of marginals using local computation. In this paper, we compare three architecturesLauritzenSpiegelhalter, Hugin, and ShenoyShaferfrom the perspective of graphical structure for message propagation, messagepa ..."
Abstract

Cited by 23 (0 self)
In the last decade, several architectures have been proposed for exact computation of marginals using local computation. In this paper, we compare three architectures (Lauritzen-Spiegelhalter, Hugin, and Shenoy-Shafer) from the perspective of graphical structure for message propagation, message-passing scheme, computational efficiency, and storage efficiency.
1 INTRODUCTION
In the last decade, several architectures have been proposed in the uncertain reasoning literature for exact computation of marginals of multivariate discrete probability distributions. One of the pioneering architectures for computing marginals was proposed by Pearl [1986]. Pearl's architecture applies to singly connected Bayes nets. For multiply connected Bayes nets, Pearl [1986] proposed the method of conditioning to reduce a multiply connected Bayes net to several singly connected Bayes nets. In 1988, Lauritzen and Spiegelhalter [1988] proposed an alternative architecture for computing marginals that applies...
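To make the message-passing idea concrete, here is a minimal Shenoy-Shafer-style pass on a two-clique join tree {A,B} -- [B] -- {B,C} (the potentials are illustrative numbers, not from the paper): the message sent across the separator is the sender's potential marginalized onto {B}, and the receiver combines it with its own potential.

```python
# Clique potentials over binary variables:
# phi1[a][b] = P(A=a) * P(B=b | A=a),  phi2[b][c] = P(C=c | B=b).
phi1 = [[0.3 * 0.9, 0.3 * 0.1],
        [0.7 * 0.2, 0.7 * 0.8]]
phi2 = [[0.5, 0.5],
        [0.1, 0.9]]

# Message from clique {A,B} to clique {B,C}: marginalize phi1 onto {B}.
msg = [phi1[0][b] + phi1[1][b] for b in range(2)]

# Belief at {B,C} = phi2 combined with the incoming message; marginal on C.
pC = [sum(phi2[b][c] * msg[b] for b in range(2)) for c in range(2)]
# P(C=0) = 0.5*0.41 + 0.1*0.59 = 0.264
```

The Hugin architecture would instead store the separator marginal and divide out its old value when absorbing a new message; the Shenoy-Shafer scheme keeps messages and potentials separate.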
Using Dempster-Shafer's Belief-Function Theory in Expert Systems
 Advances in the Dempster-Shafer Theory of Evidence, 395–414
, 1994
"... The main objective of this paper is to describe how DempsterShafer’s (DS) theory of belief functions fits in the framework of valuationbased systems (VBS). Since VBS serve as a framework for managing uncertainty in expert systems, this facilitates the use of DS belieffunction theory in expert sys ..."
Abstract

Cited by 6 (6 self)
The main objective of this paper is to describe how Dempster-Shafer's (DS) theory of belief functions fits in the framework of valuation-based systems (VBS). Since VBS serve as a framework for managing uncertainty in expert systems, this facilitates the use of DS belief-function theory in expert systems.
Keywords: Dempster-Shafer's theory of belief functions, valuation-based systems, expert systems
Applications of belief functions in business decisions: A review
 Information Systems Frontiers
, 2003
"... In this paper, we review recent applications of DempsterShafer theory (DST) of belief functions to auditing and business decisionmaking. We show how DST can better map uncertainties in the application domains than Bayesian theory of probabilities. We review the applications in auditing around thre ..."
Abstract

Cited by 3 (1 self)
In this paper, we review recent applications of Dempster-Shafer theory (DST) of belief functions to auditing and business decision-making. We show how DST can better map uncertainties in the application domains than the Bayesian theory of probabilities. We review the applications in auditing around three practical problems that challenge the effective application of DST, namely, hierarchical evidence, versatile evidence, and statistical evidence. We review the applications in other business decisions in two loose categories: judgment under ambiguity and business model combination. Finally, we show how the theory of linear belief functions, a new extension of DST, can provide an alternative solution to a wide range of business problems.
Axioms for Dynamic Programming
 Gammerman, A. (Ed.), Computational Learning and Probabilistic Reasoning, 1996, 259–275, John Wiley & Sons, Ltd.
, 1996
"... This paper describes an abstract framework, called valuation network (VN), for representing and solving discrete optimization problems. In VNs, we represent information in an optimization problem using functions called valuations. Valuations represent factors of an objective function. Solving a VN i ..."
Abstract

Cited by 2 (0 self)
This paper describes an abstract framework, called valuation network (VN), for representing and solving discrete optimization problems. In VNs, we represent information in an optimization problem using functions called valuations. Valuations represent factors of an objective function. Solving a VN involves using two operators called combination and marginalization. The combination operator tells us how to combine the factors of the objective function to form the global objective function (also called joint valuation). Marginalization is either maximization or minimization. Solving a VN can be described simply as finding the marginal of the joint valuation for the empty set. We state some simple axioms that combination and marginalization need to satisfy to enable us to solve a VN using local computation. We describe a fusion algorithm for solving a VN using local computation. For optimization problems, the fusion algorithm reduces to nonserial dynamic programming. Thus the fusion algorithm can be regarded as an abstract description of the dynamic programming method, and the axioms can be viewed as conditions that permit the use of dynamic programming.
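The reduction of the fusion algorithm to nonserial dynamic programming can be shown on a toy instance. In the sketch below (variables and costs invented for illustration) the objective factors as f(A,B) + g(B,C); combination is addition, marginalization is minimization, and each variable is eliminated locally, within the factors that mention it.

```python
from itertools import product

# Additive objective F(A,B,C) = f(A,B) + g(B,C) over binary variables.
f = {(0, 0): 4, (0, 1): 1, (1, 0): 2, (1, 1): 5}   # f(A, B)
g = {(0, 0): 3, (0, 1): 0, (1, 0): 6, (1, 1): 2}   # g(B, C)

# Fusion / nonserial DP: eliminate A inside f, eliminate C inside g.
hA = {b: min(f[(a, b)] for a in (0, 1)) for b in (0, 1)}   # min over A
hC = {b: min(g[(b, c)] for c in (0, 1)) for b in (0, 1)}   # min over C
best = min(hA[b] + hC[b] for b in (0, 1))                  # min over B

# Brute force over the full joint, for comparison.
brute = min(f[(a, b)] + g[(b, c)]
            for a, b, c in product((0, 1), repeat=3))
assert best == brute
```

The local eliminations touch tables of size 4, never the 8-configuration joint; on larger networks this gap is what makes the fusion algorithm (and dynamic programming generally) pay off.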
Oleg Shcherbina
, 901
"... The use of discrete optimization (DO) models and algorithms makes it possible to solve many practical problems in scheduling theory, network optimization, routing in communication networks, facility location, optimization in enterprise resource planning, and logistics (in particular, in supply chain ..."
Abstract
The use of discrete optimization (DO) models and algorithms makes it possible to solve many practical problems in scheduling theory, network optimization, routing in communication networks, facility location, optimization in enterprise resource planning, and logistics (in particular, in supply chain
Why Should Statisticians Be Interested in Artificial Intelligence?
"... Statistics and artificial intelligence have much in common. Both disciplines are concerned with planning, with combining evidence, and with making decisions. Neither is an empirical science. Each aspires to be a general science of practical reasoning. Yet the two disciplines have kept each other at ..."
Abstract
Statistics and artificial intelligence have much in common. Both disciplines are concerned with planning, with combining evidence, and with making decisions. Neither is an empirical science. Each aspires to be a general science of practical reasoning. Yet the two disciplines have kept each other at arm's length. Sometimes they resemble competing religions. Each is quick to see the weaknesses in the other's practice and the absurdities in the other's dogmas. Each is slow to see that it can learn from the other. I believe that statistics and AI can and should learn from each other in spite of their differences. The real science of practical reasoning may lie in what is now the gap between them. I have discussed elsewhere how AI can learn from statistics (Shafer and Pearl, 1990). Here, since I am writing primarily for statisticians, I will emphasize how statistics can learn from AI. I will make my explanations sufficiently elementary, however, that they can be understood by readers who are not familiar with standard probability ideas, terminology, and notation. I begin by pointing out how other disciplines have learned from AI. Then I list some specific areas in which collaboration between statistics and AI may be fruitful. After these generalities, I turn to a topic of particular interest to me—what we can learn from AI about the meaning and limits of probability. I examine the probabilistic approach to combining evidence in expert systems, and I ask how this approach can be generalized to situations where we need to combine evidence but where the thoroughgoing use of numerical probabilities is impossible or inappropriate. I conclude that the most essential feature of probability in expert systems—the feature we should try to generalize—is factorization, not conditional independence. 
I show how factorization generalizes from probability to numerous other calculi for expert systems, and I discuss the implications of this for the philosophy of subjective probability judgment. This is a lot of territory. The following analytical table of contents may help keep it in perspective.