Results 1–10 of 51
Control of Selective Perception Using Bayes Nets and Decision Theory
, 1993
Abstract

Cited by 100 (1 self)
A selective vision system sequentially collects evidence to support a specified hypothesis about a scene, as long as the additional evidence is worth the effort of obtaining it. Efficiency comes from processing the scene only where necessary, to the level of detail necessary, and with only the necessary operators. Knowledge representation and sequential decision-making are central issues for selective vision, which takes advantage of prior knowledge of a domain's abstract and geometrical structure and models for the expected performance and cost of visual operators. The TEA-1 selective vision system uses Bayes nets for representation and benefit-cost analysis for control of visual and nonvisual actions. It is the high-level control for an active vision system, enabling purposive behavior, the use of qualitative vision modules and a pointable multiresolution sensor. TEA-1 demonstrates that Bayes nets and decision-theoretic techniques provide a general, reusable framework for constructi...
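The benefit-cost control loop the abstract describes can be caricatured in a few lines. This is only a sketch in the spirit of TEA-1, not its actual algorithm: the action names, benefit numbers, and greedy stopping rule below are all invented for illustration.

```python
# Hypothetical sketch of benefit-cost action selection: greedily pick the
# visual action with the highest (expected benefit - cost), and stop once
# no remaining action is worth its cost. All names and numbers are invented.

def select_actions(actions):
    """actions: list of (name, expected_benefit, cost) tuples."""
    plan = []
    remaining = list(actions)
    while remaining:
        # Score each candidate by its net expected value.
        best = max(remaining, key=lambda a: a[1] - a[2])
        if best[1] - best[2] <= 0:   # no action is worth its cost: stop looking
            break
        plan.append(best[0])
        remaining.remove(best)
    return plan

plan = select_actions([
    ("foveate-table", 0.9, 0.2),      # high benefit, cheap
    ("run-edge-detector", 0.3, 0.5),  # costs more than it is expected to help
    ("query-color", 0.4, 0.1),
])
print(plan)  # ['foveate-table', 'query-color']
```

The point of the stopping rule is the one the abstract makes: evidence is gathered only "as long as the additional evidence is worth the effort of obtaining it."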
Perspectives on the Theory and Practice of Belief Functions
 International Journal of Approximate Reasoning
, 1990
Abstract

Cited by 86 (7 self)
The theory of belief functions provides one way to use mathematical probability in subjective judgment. It is a generalization of the Bayesian theory of subjective probability. When we use the Bayesian theory to quantify judgments about a question, we must assign probabilities to the possible answers to that question. The theory of belief functions is more flexible; it allows us to derive degrees of belief for a question from probabilities for a related question. These degrees of belief may or may not have the mathematical properties of probabilities; how much they differ from probabilities will depend on how closely the two questions are related. Examples of what we would now call belief-function reasoning can be found in the late seventeenth and early eighteenth centuries, well before Bayesian ideas were developed. In 1689, George Hooper gave rules for combining testimony that can be recognized as special cases of Dempster's rule for combining belief functions (Shafer 1986a). Similar rules were formulated by Jakob Bernoulli in his Ars Conjectandi, published posthumously in 1713, and by Johann Heinrich Lambert in his Neues Organon, published in 1764 (Shafer 1978). Examples of belief-function reasoning can also be found in more recent work, by authors...
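Dempster's rule, mentioned above, combines two mass functions by intersecting their focal elements and renormalizing away the mass that lands on the empty set. A minimal sketch, with illustrative masses (two witnesses who each back a different answer with weight 0.8 and leave 0.2 on "don't know"):

```python
# A minimal sketch of Dempster's rule of combination for two mass functions
# over a small frame; focal elements are frozensets. Masses are illustrative.

def combine(m1, m2):
    """Dempster's rule: intersect focal elements, renormalize away conflict."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb          # mass falling on the empty set
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two witnesses over the frame {x, y}: each gives 0.8 to one answer and
# 0.2 to the whole frame ("don't know").
m1 = {frozenset({"x"}): 0.8, frozenset({"x", "y"}): 0.2}
m2 = {frozenset({"y"}): 0.8, frozenset({"x", "y"}): 0.2}
m12 = combine(m1, m2)
print(m12)  # mass 4/9 on {x}, 4/9 on {y}, 1/9 on {x, y}
```

Note how the heavy direct conflict (0.8 × 0.8 = 0.64) is discarded and the remainder rescaled; this renormalization step is the source of several of the well-known criticisms of the rule.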
Two views of belief: Belief as generalized probability and belief as evidence
, 1992
Abstract

Cited by 72 (12 self)
Belief functions are mathematical objects defined to satisfy three axioms that look somewhat similar to the Kolmogorov axioms defining probability functions. We argue that there are (at least) two useful and quite different ways of understanding belief functions. The first is as a generalized probability function (which technically corresponds to the inner measure induced by a probability function). The second is as a way of representing evidence. Evidence, in turn, can be understood as a mapping from probability functions to probability functions. It makes sense to think of updating a belief if we think of it as a generalized probability. On the other hand, it makes sense to combine two beliefs (using, say, Dempster's rule of combination) only if we think of the belief functions as representing evidence. Many previous papers have pointed out problems with the belief function approach; the claim of this paper is that these problems can be explained as a consequence of confounding the...
Knowledge, probability, and adversaries
 Journal of the ACM
, 1993
Abstract

Cited by 72 (24 self)
What should it mean for an agent to know or believe an assertion is true with probability .99? Different papers [FH88, FZ88a, HMT88] give different answers, choosing to use quite different probability spaces when computing the probability that an agent assigns to an event. We show that each choice can be understood in terms of a betting game. This betting game itself can be understood in terms of three types of adversaries influencing three different aspects of the game. The first selects the outcome of all nondeterministic choices in the system; the second represents the knowledge of the agent's opponent in the betting game (this is the key place the papers mentioned above differ); the third is needed in asynchronous systems to choose the time the bet is placed. We illustrate the need for considering all three types of adversaries with a number of examples. Given a class of adversaries, we show how to assign probability spaces to agents in a way most appropriate for that class, where "most appropriate" is made precise in terms of this betting game. We conclude by showing how different assignments of probability spaces (corresponding to different opponents) yield different levels of guarantees in probabilistic coordinated attack.
Causal Inference from Graphical Models
, 2001
Abstract

Cited by 59 (4 self)
The introduction of Bayesian networks (Pearl 1986b) and associated local computation algorithms (Lauritzen and Spiegelhalter 1988, Shenoy and Shafer 1990, Jensen, Lauritzen and Olesen 1990) has initiated renewed interest in understanding causal concepts in connection with modelling complex stochastic systems. It has become clear that graphical models, in particular those based upon directed acyclic graphs, have natural causal interpretations and thus form a base for a language in which causal concepts can be discussed and analysed in precise terms. As a consequence there has been an explosion of writings, not primarily within mainstream statistical literature, concerned with exploiting this language to clarify and extend causal concepts. Among these we mention in particular books by Spirtes, Glymour and Scheines (1993), Shafer (1996), and Pearl (2000), as well as the collection of papers in Glymour and Cooper (1999). Very briefly, but fundamentally, ...
Updating Probabilities
, 2002
Abstract

Cited by 53 (6 self)
As examples such as the Monty Hall puzzle show, applying conditioning to update a probability distribution on a "naive space", which does not take into account the protocol used, can often lead to counterintuitive results. Here we examine why. A criterion known as CAR ("coarsening at random") in the statistical literature characterizes when "naive" conditioning in a naive space works. We show that the CAR condition holds rather infrequently, and we provide a procedural characterization of it, by giving a randomized algorithm that generates all and only distributions for which CAR holds. This substantially extends previous characterizations of CAR. We also consider more generalized notions of update such as Jeffrey conditioning and minimizing relative entropy (MRE). We give a generalization of the CAR condition that characterizes when Jeffrey conditioning leads to appropriate answers, and show that there exist some very simple settings in which MRE essentially never gives the right results. This generalizes and interconnects previous results obtained in the literature on CAR and MRE.
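The Monty Hall point is easy to make concrete. Assuming the standard protocol (player picks door 1; the host always opens an unchosen door hiding a goat, choosing uniformly when two such doors exist), enumerating the full space gives 2/3 for switching, while conditioning in the "naive space" that ignores the protocol gives 1/2:

```python
# Enumerate (car_door, host_opens) worlds under the standard Monty Hall
# protocol, with the player having chosen door 1. Exact arithmetic via Fraction.
from fractions import Fraction

half, third = Fraction(1, 2), Fraction(1, 3)

worlds = {}
for car in (1, 2, 3):
    if car == 1:
        worlds[(1, 2)] = third * half   # host may open door 2 or door 3
        worlds[(1, 3)] = third * half
    elif car == 2:
        worlds[(2, 3)] = third          # host must open door 3
    else:
        worlds[(3, 2)] = third          # host must open door 2

# Correct conditioning on the event "host opened door 3".
opened3 = {w: p for w, p in worlds.items() if w[1] == 3}
z = sum(opened3.values())
p_switch_wins = sum(p for (car, _), p in opened3.items() if car == 2) / z
print(p_switch_wins)   # 2/3: switching wins

# Naive conditioning on "car is not behind door 3" ignores the protocol
# and wrongly makes the two remaining doors symmetric.
p_naive = third / (third + third)
print(p_naive)         # 1/2
```

The discrepancy is exactly the abstract's point: the naive space discards the information carried by *which* observation the protocol could have produced.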
A New Approach to Updating Beliefs
 Uncertainty in Artificial Intelligence
, 1991
Abstract

Cited by 47 (6 self)
We define a new notion of conditional belief, which plays the same role for Dempster-Shafer belief functions as conditional probability does for probability functions. Our definition is different from the standard definition given by Dempster, and avoids many of the well-known problems of that definition. Just as the conditional probability Pr(·|B) is a probability function which is the result of conditioning on B being true, so too our conditional belief function Bel(·|B) is a belief function which is the result of conditioning on B being true. We define the conditional belief as the lower envelope (that is, the inf) of a family of conditional probability functions, and provide a closed-form expression for it. An alternate way of understanding our definition of conditional belief is provided by considering ideas from an earlier paper [FH91], where we connect belief functions with inner measures. In particular, we show here how to extend the definition of conditional pro...
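The closed form itself is not quoted in the snippet; the statement usually associated with this lower-envelope definition is Bel(A|B) = Bel(A∩B) / (Bel(A∩B) + Pl(A̅∩B)), where Pl(X) = 1 − Bel(X̅) is plausibility. A sketch under that assumption, with an illustrative mass function:

```python
# Sketch of the lower-envelope conditional belief, assuming the closed form
# Bel(A|B) = Bel(A∩B) / (Bel(A∩B) + Pl((not A)∩B)). Masses are illustrative.

FRAME = frozenset({"a", "b", "c"})

def bel(m, x):
    # belief: total mass of focal elements entirely inside x
    return sum(w for s, w in m.items() if s <= x)

def pl(m, x):
    # plausibility: total mass of focal elements that meet x
    return sum(w for s, w in m.items() if s & x)

def cond_bel(m, a, b):
    num = bel(m, a & b)
    return num / (num + pl(m, (FRAME - a) & b))

m = {frozenset({"a"}): 0.5,
     frozenset({"a", "b"}): 0.3,
     frozenset({"b", "c"}): 0.2}

print(cond_bel(m, frozenset({"a"}), frozenset({"a", "b"})))  # 0.5
```

Here Bel({a}) = 0.5 and Pl({b}) = 0.5, so conditioning on {a, b} yields 0.5; Dempster's own conditioning rule would generally give a different (less cautious) number.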
Uncertainty, Belief, and Probability
 Computational Intelligence
, 1989
Abstract

Cited by 46 (2 self)
We introduce a new probabilistic approach to dealing with uncertainty, based on the observation that probability theory does not require that every event be assigned a probability. For a nonmeasurable event (one to which we do not assign a probability), we can talk about only the inner measure and outer measure of the event. In addition to removing the requirement that every event be assigned a probability, our approach circumvents other criticisms of probability-based approaches to uncertainty. For example, the measure of belief in an event turns out to be represented by an interval (defined by the inner and outer measure), rather than by a single number. Further, this approach allows us to assign a belief (inner measure) to an event E without committing to a belief about its negation ¬E (since the inner measure of an event plus the inner measure of its negation is not necessarily one). Interestingly enough, inner measures induced by probability measures turn out to correspond in a ...
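A tiny worked example of the interval point, using an illustrative probability assigned only to the partition {{1,2},{3,4}} of a four-point space: an event like {2,3} is nonmeasurable, gets the interval [0, 1] rather than a single number, and its inner measure plus that of its negation falls well short of one.

```python
# Inner/outer measure of events when probability is defined only on the
# partition blocks {1,2} and {3,4}. Probabilities are illustrative.

blocks = {frozenset({1, 2}): 0.6, frozenset({3, 4}): 0.4}

def inner(e):
    # inner measure: total probability of blocks entirely inside e
    return sum(p for b, p in blocks.items() if b <= e)

def outer(e):
    # outer measure: total probability of blocks that meet e
    return sum(p for b, p in blocks.items() if b & e)

E = frozenset({2, 3})
notE = frozenset({1, 4})
print(inner(E), outer(E))        # 0 1.0  -- the belief interval [0, 1]
print(inner(E) + inner(notE))    # 0      -- far below 1
print(inner(frozenset({1, 2, 3})))  # 0.6 -- a tighter lower bound
```

The superadditivity on display here (inner(E) + inner(¬E) ≤ 1) is exactly what lets the approach express belief in E without forcing a complementary commitment about ¬E.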
Languages and Designs for Probability Judgment
, 1985
Abstract

Cited by 36 (5 self)
Theories of subjective probability are viewed as formal languages for analyzing evidence and expressing degrees of belief. This article focuses on two probability languages, the Bayesian language and the language of belief functions (Shafer, 1976). We describe and compare the semantics (i.e., the meaning of the scale) and the syntax (i.e., the formal calculus) of these languages. We also investigate some of the designs for probability judgment afforded by the two languages.
Updating Beliefs with Incomplete Observations
Abstract

Cited by 32 (10 self)
Currently, there is renewed interest in the problem, raised by Shafer in 1985, of updating probabilities when observations are incomplete (or set-valued). This is a fundamental problem in general, and of particular interest for Bayesian networks. Recently, Grünwald and Halpern have shown that commonly used updating strategies fail in this case, except under very special assumptions. In this paper we propose a new method for updating probabilities with incomplete observations. Our approach is deliberately conservative: we make no assumptions about the so-called incompleteness mechanism that associates complete with incomplete observations. We model our ignorance about this mechanism by a vacuous lower prevision, a tool from the theory of imprecise probabilities, and we use only coherence arguments to turn prior into posterior (updated) probabilities. In general, this new approach to updating produces lower and upper posterior probabilities and previsions (expectations), as well as partially determinate decisions. This is a logical consequence of the existing ignorance about the incompleteness mechanism. As an example, we use the new updating method to properly address the apparent paradox in the 'Monty Hall' puzzle. More importantly, we apply it to the problem of classification of new evidence in probabilistic expert systems, where it leads to a new, so-called conservative updating rule.
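The lower/upper posterior idea can be illustrated on Monty Hall without the paper's lower-prevision machinery. If we refuse to assume how the host breaks ties, and instead parameterize by q, the chance the host opens door 3 when both unchosen doors hide goats, then conditioning under each q gives P(switch wins | host opens door 3) = 1/(1 + q); sweeping q over [0, 1] yields the posterior interval [1/2, 1] rather than a single number. (This is only a sketch of the flavor of the result, not the paper's derivation.)

```python
# Sweep the unknown host tie-breaking probability q and collect the
# resulting conditional probabilities, using exact arithmetic.
from fractions import Fraction

def p_switch_wins(q):
    # Player picked door 1; we condition on "host opened door 3".
    p_open3 = Fraction(1, 3) * q + Fraction(1, 3)   # car at 1 (tie) or car at 2
    p_win_and_open3 = Fraction(1, 3)                # car at 2: host must open 3
    return p_win_and_open3 / p_open3

posteriors = [p_switch_wins(Fraction(k, 10)) for k in range(11)]
print(min(posteriors), max(posteriors))   # 1/2 1
```

Whatever the mechanism, switching is never worse than staying (lower posterior 1/2), which is the kind of partially determinate conclusion the conservative approach delivers.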