Results 1–10 of 190
Perspectives on the Theory and Practice of Belief Functions
International Journal of Approximate Reasoning, 1990
Cited by 86 (7 self)
Abstract:
The theory of belief functions provides one way to use mathematical probability in subjective judgment. It is a generalization of the Bayesian theory of subjective probability. When we use the Bayesian theory to quantify judgments about a question, we must assign probabilities to the possible answers to that question. The theory of belief functions is more flexible; it allows us to derive degrees of belief for a question from probabilities for a related question. These degrees of belief may or may not have the mathematical properties of probabilities; how much they differ from probabilities will depend on how closely the two questions are related. Examples of what we would now call belief-function reasoning can be found in the late seventeenth and early eighteenth centuries, well before Bayesian ideas were developed. In 1689, George Hooper gave rules for combining testimony that can be recognized as special cases of Dempster's rule for combining belief functions (Shafer 1986a). Similar rules were formulated by Jakob Bernoulli in his Ars Conjectandi, published posthumously in 1713, and by Johann Heinrich Lambert in his Neues Organon, published in 1764 (Shafer 1978). Examples of belief-function reasoning can also be found in more recent work, by authors ...
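The combination rule the abstract refers to is easy to state concretely. The following is a minimal Python sketch of Dempster's rule, with mass functions represented as dicts from focal sets to masses; the two-witness scenario and its reliability numbers are hypothetical, chosen to echo Hooper's rule for concurrent testimony:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    by Dempster's rule: multiply masses of intersecting focal sets,
    then renormalize by 1 - K, where K is the mass that fell on the
    empty intersection (the conflict)."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Illustrative: two independent witnesses assert hypothesis h, each
# reliable with probability 0.9; an unreliable witness commits no mass
# to either answer, so the rest goes to the whole frame {h, not_h}.
frame = frozenset({"h", "not_h"})
w1 = {frozenset({"h"}): 0.9, frame: 0.1}
w2 = {frozenset({"h"}): 0.9, frame: 0.1}
m = dempster_combine(w1, w2)
print(m[frozenset({"h"})])  # ~0.99, Hooper's 1 - (1 - 0.9)(1 - 0.9)
```

There is no conflict in this example (every pair of focal sets intersects), so the combined belief in h is simply 1 minus the chance that both witnesses are unreliable.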
Process and Policy: Resource-Bounded Non-Demonstrative Reasoning
1993
Cited by 83 (4 self)
Abstract:
This paper investigates the appropriateness of formal dialectics as a basis for nonmonotonic reasoning and defeasible reasoning that takes computational limits seriously. Rules that can come into conflict should be regarded as policies, which are inputs to deliberative processes. Dialectical protocols are appropriate for such deliberations when resources are bounded and search is serial. AI, it is claimed here, is now perfectly positioned to correct many misconceptions about reasoning that have resulted from mathematical logic's enormous success in this century: among them, (1) that all reasons are demonstrative, (2) that rational belief is constrained, not constructed, and (3) that process and disputation are not essential to reasoning. AI provides new impetus to formalize the alternative (but older) conception of reasoning, and it provides mechanisms with which to create compelling formalisms that describe the control of processes. The technical contributions here are: the partial justification of dialectic based on controlling search; the observation that nonmonotonic reasoning can be subsumed under certain kinds of dialectics; the portrayal of inference in knowledge bases as policy reasoning; the review of logics of dialogue and proposed extensions; and the preformal and initial formal discussion of aspects and variations of dialectical systems with non-demonstrative reasons.
The Dynamics of Belief Systems: Foundations vs. Coherence Theories
1990
Cited by 50 (1 self)
Abstract:
In this article I want to discuss some philosophical problems one encounters when trying to model the dynamics of epistemic states. Apart from being of interest in themselves, I believe that solutions to these problems will be crucial for any attempt to use computers to handle changes of knowledge systems. Problems concerning knowledge representation and the updating of such representations have become the focus of much recent research in artificial intelligence (AI).
Statistical Foundations for Default Reasoning
1993
Cited by 45 (8 self)
Abstract:
We describe a new approach to default reasoning, based on a principle of indifference among possible worlds. We interpret default rules as extreme statistical statements, thus obtaining a knowledge base KB comprised of statistical and first-order statements. We then assign equal probability to all worlds consistent with KB in order to assign a degree of belief to a statement φ. The degree of belief can be used to decide whether to defeasibly conclude φ. Various natural patterns of reasoning, such as a preference for more specific defaults, indifference to irrelevant information, and the ability to combine independent pieces of evidence, turn out to follow naturally from this technique. Furthermore, our approach is not restricted to default reasoning; it supports a spectrum of reasoning, from quantitative to qualitative. It is also related to other systems for default reasoning. In particular, we show that the work of [Goldszmidt et al., 1990], which applies maximum entropy ideas t...
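The core idea of "equal probability to all worlds consistent with KB" can be sketched by brute-force enumeration over propositional worlds. This toy Python illustration is an assumption-laden simplification (finite propositional worlds, a hand-written KB constraint), not the paper's first-order machinery:

```python
from itertools import product

def degree_of_belief(atoms, kb, query):
    """Random-worlds sketch: put equal probability on every truth
    assignment (world) over `atoms` that satisfies the knowledge base
    predicate `kb`, and return the fraction of those consistent worlds
    in which `query` holds."""
    worlds = [dict(zip(atoms, vals))
              for vals in product([False, True], repeat=len(atoms))]
    consistent = [w for w in worlds if kb(w)]
    if not consistent:
        raise ValueError("knowledge base is inconsistent")
    return sum(query(w) for w in consistent) / len(consistent)

# Hypothetical toy KB: of three named birds, at least two fly
# (a crude propositional stand-in for an extreme statistical statement).
atoms = ["flies_a", "flies_b", "flies_c"]
kb = lambda w: sum(w.values()) >= 2
print(degree_of_belief(atoms, kb, lambda w: w["flies_a"]))  # 0.75
```

Of the four worlds consistent with the KB (three with exactly two fliers, one with three), bird a flies in three, so the derived degree of belief is 3/4.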
From Statistics to Beliefs
1992
Cited by 43 (12 self)
Abstract:
An intelligent agent uses known facts, including statistical knowledge, to assign degrees of belief to assertions it is uncertain about. We investigate three principled techniques for doing this. All three are applications of the principle of indifference, because they assign equal degree of belief to all basic "situations" consistent with the knowledge base. They differ because there are competing intuitions about what the basic situations are. Various natural patterns of reasoning, such as the preference for the most specific statistical data available, turn out to follow from some or all of the techniques. This is an improvement over earlier theories, such as work on direct inference and reference classes, which arbitrarily postulate these patterns without offering any deeper explanations or guarantees of consistency. The three methods we investigate have surprising characterizations: there are connections to the principle of maximum entropy, a principle of maximal independence, an...
Towards a unified theory of imprecise probability
Int. J. Approx. Reasoning, 2000
Cited by 40 (0 self)
Abstract:
Belief functions, possibility measures and Choquet capacities of order 2, which are special kinds of coherent upper or lower probability, are amongst the most popular mathematical models for uncertainty and partial ignorance. I give examples to show that these models are not sufficiently general to represent some common types of uncertainty. Coherent lower previsions and sets of probability measures are considerably more general but they may not be sufficiently informative for some purposes. I discuss two other models for uncertainty, involving sets of desirable gambles and partial preference orderings. These are more informative and more general than the previous models, and they may provide a suitable mathematical setting for a unified theory of imprecise probability.
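The "coherent lower previsions and sets of probability measures" mentioned here have a simple computational reading when the set is finitely generated: lower and upper previsions are the minimum and maximum expectations over the set's extreme points. A hedged Python sketch (the imprecise-coin numbers are illustrative, and the finite extreme-point representation is an assumption for the example):

```python
def lower_upper_prevision(credal_set, gamble):
    """Given a finite list of probability mass functions (taken here as
    the extreme points of a credal set) and a gamble mapping outcomes
    to rewards, return the lower and upper previsions: the min and max
    expectations of the gamble over the set."""
    expectations = [sum(p[x] * gamble[x] for x in gamble) for p in credal_set]
    return min(expectations), max(expectations)

# Illustrative imprecise coin: the chance of heads is only known to
# lie in the interval [0.4, 0.6].
credal_set = [{"H": 0.4, "T": 0.6}, {"H": 0.6, "T": 0.4}]
gamble = {"H": 1.0, "T": 0.0}  # indicator of heads
print(lower_upper_prevision(credal_set, gamble))  # (0.4, 0.6)
```

For an indicator gamble the two values are just the lower and upper probabilities of the event, which is why lower previsions generalize the interval-valued models the abstract compares against.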
Probabilistic Default Reasoning with Conditional Constraints
Ann. Math. Artif. Intell., 2000
Cited by 35 (20 self)
Abstract:
We present an approach to reasoning from statistical and subjective knowledge, which is based on a combination of probabilistic reasoning from conditional constraints with approaches to default reasoning from conditional knowledge bases. More precisely, we introduce the notions of z-, lexicographic, and conditional entailment for conditional constraints, which are probabilistic generalizations of Pearl's entailment in system Z, Lehmann's lexicographic entailment, and Geffner's conditional entailment, respectively. We show that the new formalisms have nice properties. In particular, they behave similarly to reference-class reasoning in a number of uncontroversial examples. The new formalisms, however, also avoid many drawbacks of reference-class reasoning. More precisely, they can handle complex scenarios and even purely probabilistic subjective knowledge as input. Moreover, conclusions are drawn in a global way from all the available knowledge as a whole. We then show that the new formalisms also have nice general nonmonotonic properties. In detail, the new notions of z-, lexicographic, and conditional entailment have similar properties as their classical counterparts. In particular, they all satisfy the rationality postulates proposed by Kraus, Lehmann, and Magidor, and they have some general irrelevance and direct inference properties. Moreover, the new notions of z- and lexicographic entailment satisfy the property of rational monotonicity. Furthermore, the new notions of z-, lexicographic, and conditional entailment are proper generalizations of both their classical counterparts and the classical notion of logical entailment for conditional constraints. Finally, we provide algorithms for reasoning under the new formalisms, and we analyze their computational com...
Updating Beliefs with Incomplete Observations
Cited by 32 (10 self)
Abstract:
Currently, there is renewed interest in the problem, raised by Shafer in 1985, of updating probabilities when observations are incomplete (or set-valued). This is a fundamental problem in general, and of particular interest for Bayesian networks. Recently, Grünwald and Halpern have shown that commonly used updating strategies fail in this case, except under very special assumptions. In this paper we propose a new method for updating probabilities with incomplete observations. Our approach is deliberately conservative: we make no assumptions about the so-called incompleteness mechanism that associates complete with incomplete observations. We model our ignorance about this mechanism by a vacuous lower prevision, a tool from the theory of imprecise probabilities, and we use only coherence arguments to turn prior into posterior (updated) probabilities. In general, this new approach to updating produces lower and upper posterior probabilities and previsions (expectations), as well as partially determinate decisions. This is a logical consequence of the existing ignorance about the incompleteness mechanism. As an example, we use the new updating method to properly address the apparent paradox in the 'Monty Hall' puzzle. More importantly, we apply it to the problem of classification of new evidence in probabilistic expert systems, where it leads to a new, so-called conservative updating rule.
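The Monty Hall example gives a concrete feel for why ignorance about the incompleteness mechanism yields interval-valued posteriors. In the standard puzzle, the host's protocol for which door to open when both unchosen doors hide goats is the unknown mechanism; parameterizing it by a probability q and letting q range over [0, 1] is a simple (hypothetical, not the paper's lower-prevision machinery) way to recover the conservative bounds:

```python
def switch_win_probability(q):
    """Posterior probability that switching wins, given the player picked
    door 1 and the host opened door 3, under a host protocol that opens
    door 3 with probability q whenever doors 2 and 3 both hide goats.
    By Bayes: P(car=2 | open 3) = (1/3) / ((1/3)*q + (1/3)*1)."""
    p_open3 = (1 / 3) * q + (1 / 3) * 1.0  # host never opens the car door
    return (1 / 3) / p_open3

# Know-nothing model of the mechanism: bound the posterior over all q.
qs = [i / 1000 for i in range(1001)]
vals = [switch_win_probability(q) for q in qs]
print(min(vals), max(vals))  # approximately 0.5 and 1.0
```

The familiar answer 2/3 corresponds to the single assumption q = 1/2; dropping every assumption about the mechanism widens the posterior for "switching wins" to the interval [1/2, 1], which is the kind of partially determinate conclusion the abstract describes.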
The inferential complexity of Bayesian and credal networks
In Proceedings of the International Joint Conference on Artificial Intelligence, 2005
Cited by 28 (7 self)
Abstract:
This paper presents new results on the complexity of graph-theoretical models that represent probabilities (Bayesian networks) and models that represent interval- and set-valued probabilities (credal networks). We define a new class of networks with bounded width, and introduce a new decision problem for Bayesian networks, the maximin a posteriori. We present new links between Bayesian and credal networks, and present new results both for Bayesian networks (most probable explanation with observations, maximin a posteriori) and for credal networks (bounds on probabilities a posteriori, most probable explanation with and without observations, maximum a posteriori).
Graphoid properties of epistemic irrelevance and independence
2005
Cited by 27 (3 self)
Abstract:
This paper investigates Walley’s concepts of epistemic irrelevance and epistemic independence for imprecise probability models. We study the mathematical properties of irrelevance and independence, and their relation to the graphoid axioms. Examples are given to show that epistemic irrelevance can violate the symmetry, contraction and intersection axioms, that epistemic independence can violate contraction and intersection, and that this accords with informal notions of irrelevance and independence.