Results 1–10 of 12
Updating Probabilities
, 2002
Abstract
Cited by 59 (6 self)
As examples such as the Monty Hall puzzle show, applying conditioning to update a probability distribution on a "naive space", which does not take into account the protocol used, can often lead to counterintuitive results. Here we examine why. A criterion known as CAR ("coarsening at random") in the statistical literature characterizes when "naive" conditioning in a naive space works. We show that the CAR condition holds rather infrequently, and we provide a procedural characterization of it, by giving a randomized algorithm that generates all and only distributions for which CAR holds. This substantially extends previous characterizations of CAR. We also consider more generalized notions of update such as Jeffrey conditioning and minimizing relative entropy (MRE). We give a generalization of the CAR condition that characterizes when Jeffrey conditioning leads to appropriate answers, and show that there exist some very simple settings in which MRE essentially never gives the right results. This generalizes and interconnects previous results obtained in the literature on CAR and MRE.
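The protocol-sensitivity this abstract describes is easy to check numerically. Below is a minimal sketch (our own illustration, not the paper's formal apparatus) contrasting conditioning in the full protocol space of the Monty Hall puzzle with naive conditioning in the bare three-door space:

```python
from fractions import Fraction

# Full protocol space for Monty Hall: pairs (car, opened), with the player
# fixed on door 1.  The host opens a goat door other than door 1, choosing
# uniformly when two doors qualify.
space = {}
for car in (1, 2, 3):
    options = [d for d in (1, 2, 3) if d != 1 and d != car]
    for opened in options:
        space[(car, opened)] = space.get((car, opened), Fraction(0)) \
            + Fraction(1, 3) / len(options)

# Protocol-aware conditioning on the observation "host opened door 3".
obs = {k: v for k, v in space.items() if k[1] == 3}
total = sum(obs.values())
p_stay = obs[(1, 3)] / total      # car behind the player's door 1
p_switch = obs[(2, 3)] / total    # car behind door 2

# Naive conditioning in the three-door space: just rule out door 3.
p_naive = Fraction(1, 3) / (Fraction(1, 3) + Fraction(1, 3))

print(p_stay, p_switch, p_naive)  # 1/3 2/3 1/2
```

Naive conditioning gives 1/2 because it ignores that the host's choice of door itself carries information; the protocol space recovers the familiar 1/3 versus 2/3 answer.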
Iterated revision as prioritized merging
 In Proc. of KR’06
, 2006
Abstract
Cited by 25 (2 self)
Standard accounts of iterated belief revision assume a static world, about which an agent receives a sequence of observations. More recent items are assumed to have priority over less recent items. We argue that there is no reason, given a static world, for giving priority to more recent items. Instead we suggest that a sequence of observations should be merged with the agent’s beliefs. Since observations may have differing reliability, arguably the appropriate belief change operator is prioritized merging. We develop this view here, suggesting postulates for prioritized merging, and examining existing merging operators with respect to these postulates. As well, we examine other suggested postulates for iterated revision, to determine how well they fit with the prioritized merging interpretation. All postulates for iterated revision that we examine, except for Darwiche and Pearl’s controversial C2, are consequences of our suggested postulates for prioritized merging.
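One very simple instance of merging observations by reliability rather than by recency can be sketched as follows. This lexicographic filter over possible worlds is our own toy illustration and does not implement the paper's postulate-based framework:

```python
from itertools import product

# A world is a dict atom -> bool; an observation is a predicate on worlds.
# Toy prioritized merge: apply observations in decreasing reliability,
# keeping an observation only if it is consistent with what the more
# reliable observations have already fixed.
atoms = ("p", "q")
worlds = [dict(zip(atoms, vals)) for vals in product((True, False), repeat=2)]

def prioritized_merge(worlds, observations):
    current = list(worlds)
    for obs in observations:        # most reliable first
        narrowed = [w for w in current if obs(w)]
        if narrowed:                # drop observations that conflict
            current = narrowed
    return current

# Most reliable: p is true; less reliable: p is false (conflicts, dropped);
# least reliable: q is true (consistent, kept).
result = prioritized_merge(worlds, [
    lambda w: w["p"],
    lambda w: not w["p"],
    lambda w: w["q"],
])
print(result)   # [{'p': True, 'q': True}]
```

Note that the middle observation is discarded on grounds of reliability, not recency, which is the reorientation the abstract argues for.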
Can the Maximum Entropy Principle Be Explained as a Consistency Requirement?
, 1997
Abstract
Cited by 20 (1 self)
The principle of maximum entropy is a general method to assign values to probability distributions on the basis of partial information. This principle, introduced by Jaynes in 1957, forms an extension of the classical principle of insufficient reason. It has been further generalized, both in mathematical formulation and in intended scope, into the principle of maximum relative entropy or of minimum information. It has been claimed that these principles are singled out as unique methods of statistical inference that agree with certain compelling consistency requirements. This paper reviews these consistency arguments and the surrounding controversy. It is shown that the uniqueness proofs are flawed, or rest on unreasonably strong assumptions. A more general class of inference rules, maximizing the so-called Rényi entropies, is exhibited which also fulfill the reasonable part of the consistency assumptions.
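The basic maximum entropy computation at issue can be made concrete. Assuming Jaynes's familiar die example (a mean constraint on outcomes 1–6; our choice of illustration, not an example from this paper), the maximizer takes an exponential-family form and the Lagrange multiplier can be found by bisection:

```python
import math

# Maximum entropy on {1,...,6} subject to a mean constraint.  The
# maximizer has the form p_i proportional to exp(lam * i); we solve for
# lam by bisection, since the mean is increasing in lam.
def maxent_die(target_mean, lo=-10.0, hi=10.0, iters=200):
    def mean(lam):
        w = [math.exp(lam * i) for i in range(1, 7)]
        return sum(i * wi for i, wi in zip(range(1, 7), w)) / sum(w)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * i) for i in range(1, 7)]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)               # biased toward high faces
q = maxent_die(3.5)               # the unconstrained mean: uniform, 1/6 each
print([round(x, 4) for x in p])
print([round(x, 4) for x in q])
```

When the constraint is the prior mean 3.5, the multiplier is zero and the uniform distribution of the principle of insufficient reason is recovered, illustrating the "extension" the abstract mentions.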
Probability Update: Conditioning vs. Cross-Entropy
 In Proc. Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI)
, 1997
Abstract
Cited by 16 (2 self)
Conditioning is the generally agreed-upon method for updating probability distributions when one learns that an event is certainly true. But it has been argued that we need other rules, in particular the rule of cross-entropy minimization, to handle updates that involve uncertain information. In this paper we reexamine such a case: van Fraassen's Judy Benjamin problem [1987], which in essence asks how one might update given the value of a conditional probability. We argue that, contrary to the suggestions in the literature, it is possible to use simple conditionalization in this case, and thereby obtain answers that agree fully with intuition. This contrasts with proposals such as cross-entropy, which are easier to apply but can give unsatisfactory answers. Based on the lessons from this example, we speculate on some general philosophical issues concerning probability update.
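The cross-entropy answer to the Judy Benjamin problem can be computed numerically. The sketch below (our own reconstruction of the standard setup, not the paper's code) searches over posteriors satisfying the learned conditional probability and exhibits the much-discussed effect that the probability of the Blue territory rises above its prior:

```python
import math

# Judy Benjamin setup: prior 1/2 on Blue territory, 1/4 each on Red
# Headquarters and Red Second Company.  She learns P(HQ | Red) = 3/4.
# Minimizing relative entropy to the prior under this constraint reduces
# to a one-dimensional search over r = posterior probability of Red.
prior = {"blue": 0.5, "red_hq": 0.25, "red_2nd": 0.25}

def kl(q, p):
    return sum(q[k] * math.log(q[k] / p[k]) for k in q if q[k] > 0)

def mre_posterior(cond=0.75, steps=20000):
    best = None
    for i in range(1, steps):
        r = i / steps
        q = {"blue": 1 - r, "red_hq": cond * r, "red_2nd": (1 - cond) * r}
        d = kl(q, prior)
        if best is None or d < best[0]:
            best = (d, q)
    return best[1]

q = mre_posterior()
# The counterintuitive cross-entropy answer: q(blue) rises above its
# prior 1/2, although Judy learned nothing bearing on Blue directly.
print(round(q["blue"], 3))   # 0.533
```

It is exactly this kind of answer that the paper contrasts with the results of careful conditionalization in a richer space.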
The Constraint Rule of the Maximum Entropy Principle
, 1995
Abstract
Cited by 11 (0 self)
The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability distributions. In practical applications, however, the information consists of empirical data. A constraint rule is then employed to construct constraints on probability distributions out of these data. Usually one adopts the rule to equate the expectation values of certain functions with their empirical averages. There are, however, various other ways in which one can construct constraints from empirical data, which makes the maximum entropy principle lead to very different probability assignments. This paper shows that an argument by Jaynes to justify the usual constraint rule is unsatisfactory and investigates several alternative choices. The choice of a constraint rule is also show...
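The difference between constraint rules that the abstract points to can be made concrete. In the hypothetical mini-example below (ours, not the paper's formal setup), the same empirical data yield very different constraints depending on which rule is adopted:

```python
from collections import Counter

# The usual constraint rule turns data into the constraint
# E[f] = empirical average of f.  Other rules are possible, and they
# constrain the maximum-entropy problem very differently.
data = [6, 6, 5, 6, 4, 6, 6, 3]   # hypothetical die rolls

# Rule A: constrain only the mean -- leaves a whole family of
# distributions for the maximum entropy principle to choose among.
mean_constraint = sum(data) / len(data)

# Rule B: constrain every outcome frequency -- pins the distribution
# down completely, leaving maximum entropy nothing to decide.
n = len(data)
freq_constraint = {k: c / n for k, c in Counter(data).items()}

print(mean_constraint)      # 5.25
print(freq_constraint[6])   # 0.625
```

The probability assignment one ends up with thus depends as much on the chosen constraint rule as on the entropy principle itself, which is the point under investigation.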
A new resolution of the Judy Benjamin problem
 Mind
Abstract
Cited by 1 (1 self)
Van Fraassen’s Judy Benjamin problem has generally been taken to show that not all rational changes of belief can be modelled in a probabilistic framework if the available update rules are restricted to Bayes’s rule and Jeffrey’s generalization thereof. But alternative rules based on distance functions between probability assignments that allegedly can handle the problem seem to have counterintuitive consequences. Taking our cue from a recent proposal by Bradley, we argue that Jeffrey’s rule can solve the Judy Benjamin problem after all. Moreover, we show that the specific instance of Jeffrey’s rule that solves the Judy Benjamin problem can be underpinned by a particular distance function. Finally, we extend the set of distance functions to ones that take into account the varying degrees to which propositions may be epistemically entrenched.
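Jeffrey's rule, the generalization of conditioning at issue here, is straightforward to state in code. The sketch below is a generic finite-space implementation with an invented umbrella example; it is not the specific instance of the rule discussed in the paper:

```python
# Jeffrey's rule on a finite space: given a partition E_1,...,E_n and new
# probabilities q_i for its cells, set P'(w) = P(w | E_i) * q_i for w in E_i,
# preserving the probabilities conditional on each cell.
def jeffrey_update(p, partition, new_probs):
    """p: dict world -> prob; partition: list of sets of worlds;
    new_probs: target probability for each partition cell."""
    p_new = {}
    for cell, q in zip(partition, new_probs):
        cell_mass = sum(p[w] for w in cell)
        for w in cell:
            p_new[w] = p[w] / cell_mass * q
    return p_new

# Invented example: experience shifts the probability of rain from 0.3
# to 0.8 without making it certain.
p = {"rain&umb": 0.2, "rain&no": 0.1, "dry&umb": 0.1, "dry&no": 0.6}
p2 = jeffrey_update(
    p, [{"rain&umb", "rain&no"}, {"dry&umb", "dry&no"}], [0.8, 0.2]
)
print(p2)
```

This is the sense in which, as the abstract says, not all learning makes us certain of a proposition: the update redistributes mass over a partition rather than conditioning on one of its cells.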
Proposition-valued Random Variables as Information
, 2009
Abstract
The notion of a proposition as a set of possible worlds or states occupies centre stage in probability theory, semantics and epistemology, where it serves as the fundamental unit both of information and meaning. But this fact should not blind us to the existence of prospects with a different structure. In the paper I examine the use of random variables, in particular proposition-valued random variables, in these fields and argue that we need a general account of rational attitude formation with respect to them.
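The contrast the abstract draws can be illustrated with a toy model. In the sketch below (our own construction; the names are not the paper's), an ordinary random variable maps states to numbers, while a proposition-valued random variable maps each state to a set of states, here the cell of a partition the agent would learn in that state:

```python
# A proposition, on this view, is a set of states; a proposition-valued
# random variable maps each state to a proposition.  A natural instance
# is an observation protocol: in each state, the proposition the agent
# would learn there.  (Illustrative toy model only.)
states = {1, 2, 3, 4}

# Ordinary random variable: state -> number (the parity of the state).
parity = {s: s % 2 for s in states}

# Proposition-valued random variable: state -> set of states.  Here the
# agent learns the parity cell containing the true state.
learn = {s: frozenset(t for t in states if t % 2 == s % 2) for s in states}

print(sorted(learn[1]))   # [1, 3]
print(sorted(learn[2]))   # [2, 4]
```

Such a variable carries information in a way that no single proposition does: its value tells us what would be learned, state by state.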