Results 1 - 7 of 7
Updating Probabilities, 2002
Abstract

Cited by 53 (6 self)
As examples such as the Monty Hall puzzle show, applying conditioning to update a probability distribution on a "naive space", which does not take into account the protocol used, can often lead to counterintuitive results. Here we examine why. A criterion known as CAR ("coarsening at random") in the statistical literature characterizes when "naive" conditioning in a naive space works. We show that the CAR condition holds rather infrequently, and we provide a procedural characterization of it, by giving a randomized algorithm that generates all and only distributions for which CAR holds. This substantially extends previous characterizations of CAR. We also consider more generalized notions of update such as Jeffrey conditioning and minimizing relative entropy (MRE). We give a generalization of the CAR condition that characterizes when Jeffrey conditioning leads to appropriate answers, and show that there exist some very simple settings in which MRE essentially never gives the right results. This generalizes and interconnects previous results obtained in the literature on CAR and MRE.
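The contrast the abstract draws can be made concrete with a small sketch (my own illustration, not code from the paper): exact enumeration of the Monty Hall joint distribution under the standard host protocol, compared with naive conditioning on "the car is not behind the opened door".

```python
from fractions import Fraction

def monty_hall(pick=0, opened=2):
    # Joint distribution over (car location, door the host opens),
    # with the player having picked door `pick`.
    # Protocol: the host never opens the picked door or the car door,
    # choosing uniformly when both remaining doors are empty.
    joint = {}
    for car in range(3):
        choices = [d for d in range(3) if d != pick and d != car]
        for d in choices:
            joint[(car, d)] = joint.get((car, d), Fraction(0)) + Fraction(1, 3) / len(choices)
    # Protocol-aware conditioning on the event "host opened `opened`":
    z = sum(p for (car, d), p in joint.items() if d == opened)
    stay = joint.get((pick, opened), Fraction(0)) / z
    # Naive conditioning in the naive space, on "car is not behind `opened`":
    naive = Fraction(1, 3) / Fraction(2, 3)
    return stay, naive

stay, naive = monty_hall()
print(stay)   # 1/3: staying wins 1/3 of the time, so switching wins 2/3
print(naive)  # 1/2: the counterintuitive naive answer, ignoring the protocol
```

The discrepancy (1/3 versus 1/2) is exactly the failure of "naive" conditioning that the CAR condition characterizes.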
Can the Maximum Entropy Principle Be Explained as a Consistency Requirement?, 1997
Abstract

Cited by 16 (1 self)
The principle of maximum entropy is a general method to assign values to probability distributions on the basis of partial information. This principle, introduced by Jaynes in 1957, forms an extension of the classical principle of insufficient reason. It has been further generalized, both in mathematical formulation and in intended scope, into the principle of maximum relative entropy or of minimum information. It has been claimed that these principles are singled out as unique methods of statistical inference that agree with certain compelling consistency requirements. This paper reviews these consistency arguments and the surrounding controversy. It is shown that the uniqueness proofs are flawed, or rest on unreasonably strong assumptions. A more general class of inference rules, maximizing the so-called Rényi entropies, is exhibited which also fulfills the reasonable part of the consistency assumptions.
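As an illustration of the principle under discussion (my own sketch, not taken from the paper), the maximum-entropy distribution on the faces of a die constrained to a given mean has the exponential form p_i ∝ exp(λi), with λ determined by the constraint; λ can be found by bisection, since the resulting mean is monotone in λ.

```python
import math

def maxent_die(mean, faces=range(1, 7), tol=1e-12):
    """Maximum-entropy distribution on `faces` subject to E[X] = mean.
    The solution has the exponential form p_i proportional to exp(lam * i);
    solve for lam by bisection on the resulting mean."""
    def mean_of(lam):
        w = [math.exp(lam * i) for i in faces]
        z = sum(w)
        return sum(i * wi for i, wi in zip(faces, w)) / z
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_of(mid) < mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * i) for i in faces]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)  # the "Brandeis dice" mean used by Jaynes
print([round(pi, 4) for pi in p])
```

With mean 3.5 the method returns the uniform distribution, recovering the principle of insufficient reason as the unconstrained special case.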
The Constraint Rule of the Maximum Entropy Principle, 1995
Abstract

Cited by 11 (0 self)
The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability distributions. In practical applications, however, the information consists of empirical data. A constraint rule is then employed to construct constraints on probability distributions out of these data. Usually one adopts the rule to equate the expectation values of certain functions with their empirical averages. There are, however, various other ways in which one can construct constraints from empirical data, which makes the maximum entropy principle lead to very different probability assignments. This paper shows that an argument by Jaynes to justify the usual constraint rule is unsatisfactory and investigates several alternative choices. The choice of a constraint rule is also show...
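A minimal sketch of the point at issue, using my own toy data (not an example from the paper): two different constraint rules applied to the same observations yield very different maximum-entropy assignments.

```python
import math
from collections import Counter

def maxent_mean(mean, outcomes, tol=1e-12):
    # One-parameter exponential-family solution for the constraint E[X] = mean,
    # with the multiplier found by bisection.
    def m(lam):
        w = [math.exp(lam * x) for x in outcomes]
        return sum(x * wi for x, wi in zip(outcomes, w)) / sum(w)
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if m(mid) < mean else (lo, mid)
    w = [math.exp(lo * x) for x in outcomes]
    z = sum(w)
    return [wi / z for wi in w]

data = [1, 1, 3]
outcomes = [1, 2, 3]

# Rule A: equate E[X] with the empirical average (the usual constraint rule).
p_mean = maxent_mean(sum(data) / len(data), outcomes)

# Rule B: equate the expectation of each indicator with its empirical
# frequency; maximum entropy then just returns the empirical distribution.
counts = Counter(data)
p_freq = [counts[x] / len(data) for x in outcomes]

print([round(p, 3) for p in p_mean])  # [0.514, 0.305, 0.181]
print([round(p, 3) for p in p_freq])  # [0.667, 0.0, 0.333]
```

Both rules are built from the same three observations, yet rule A spreads probability over the unobserved outcome 2 while rule B assigns it zero, which is the kind of divergence between constraint rules the abstract describes.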
Objective Bayesianism, Bayesian Conditionalisation, 2008
Abstract

Cited by 7 (6 self)
Objective Bayesianism has been criticised on the grounds that objective Bayesian updating, which on a finite outcome space appeals to the maximum entropy principle, differs from Bayesian conditionalisation. The main task of this paper is to show that this objection backfires: the difference between the two forms of updating reflects negatively on Bayesian conditionalisation rather than on objective Bayesian updating. The paper also reviews some existing criticisms and justifications of conditionalisation, arguing in particular that the diachronic Dutch book justification fails because diachronic Dutch book arguments are subject to a reductio: in certain circumstances one can Dutch book an agent no matter how she changes her degrees of belief. One may also criticise objective Bayesianism on the grounds that its norms are not compulsory but voluntary, the result of a stance. It is argued that this second objection also misses the mark, since objective ...
Objective Bayesianism with predicate languages. Synthese, 2008
Abstract

Cited by 5 (5 self)
Objective Bayesian probability is often defined over rather simple domains, e.g., finite event spaces or propositional languages. This paper investigates the extension of objective Bayesianism to first-order logical languages. It is argued that the objective Bayesian should choose a probability function, from all those that satisfy constraints imposed by background knowledge, that is closest to a particular frequency-induced probability function which generalises the λ = 0 function of Carnap’s continuum of inductive methods.
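For context (the formula below is the standard statement of Carnap's continuum, not quoted from the paper), the λ-continuum assigns probability (n_i + λ/k)/(n + λ) to the next observation falling in category i, given counts n_1, ..., n_k over k categories; λ = 0 is the straight frequency rule that the paper's frequency-induced function generalises.

```python
def carnap(counts, i, lam):
    """Carnap's continuum of inductive methods: probability that the next
    observation falls in category i, given the category counts so far.
    lam = 0 gives the 'straight rule' n_i / n; lam = k recovers Laplace's
    rule of succession."""
    k, n = len(counts), sum(counts)
    return (counts[i] + lam / k) / (n + lam)

counts = [3, 1, 0]           # 4 observations over 3 categories
print(carnap(counts, 0, 0))  # 0.75: straight rule, 3/4
print(carnap(counts, 0, 3))  # (3 + 1) / (4 + 3), Laplace-smoothed
```

Note that with λ = 0 the rule is undefined before any observations are made (n = 0), which is one motivation for the generalisation the abstract mentions.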
‘Plausibilities of plausibilities’: an approach through circumstances. Being part I of “From ‘plausibilities of plausibilities’ to state-assignment methods” (2006), eprint arXiv:quant-ph/0607111
Abstract

Cited by 2 (2 self)
Probability-like parameters appearing in some statistical models, and their prior distributions, are reinterpreted through the notion of ‘circumstance’, a term which stands for any piece of knowledge that is useful in assigning a probability and that satisfies some additional logical properties. The idea, which can be traced to Laplace and Jaynes, is that the usual inferential reasonings about the probability-like parameters of a statistical model can be conceived as reasonings about equivalence classes of ‘circumstances’ (viz., real or hypothetical pieces of knowledge, e.g. physical hypotheses, that are useful in assigning a probability and satisfy some additional logical properties) that are uniquely indexed by the probability distributions they lead to. PACS numbers: 02.50.Cw, 02.50.Tt, 01.70.+w. MSC numbers: 03B48, 62F15, 60A05.
Deceptive Updating and Minimal Information Methods
Abstract
The technique of minimizing information (infomin) has been commonly employed as a general method for both choosing and updating a subjective probability function. We argue that, in a wide class of cases, the use of infomin methods fails to cohere with our standard conception of rational degrees of belief. We introduce the notion of a deceptive updating method, and argue that nondeceptiveness is a necessary condition for rational coherence. Infomin has been criticized on the grounds that there are no higher order probabilities that ‘support’ it, but the appeal to higher order probabilities is a substantial assumption that some might reject. The elementary arguments from deceptiveness do not rely on this assumption. While deceptiveness implies lack of higher order support, the converse does not, in general, hold, which indicates that deceptiveness is a more objectionable property. We offer a new proof of the claim that infomin updating of any strictly positive prior with respect to conditional-probability constraints is deceptive. In the case of expected-value constraints, infomin updating of the uniform prior is deceptive for some random variables, but not for others. We establish both a necessary condition and a sufficient condition (which extends the scope of the phenomenon beyond cases previously considered) for deceptiveness in this setting. Along the way, we clarify the relation which obtains between the strong notion of higher order support, in which the higher order probability is defined over the full space of first order probabilities, and the apparently weaker notion, in which it is defined over some smaller parameter space. We show that under certain natural assumptions, the two are equivalent. Finally, we offer an interpretation of Jaynes, according to which his own appeal to infomin methods avoids the incoherencies discussed in this paper.
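A hedged sketch of the kind of update the abstract discusses (my own illustration, not the authors' code): infomin, i.e. minimum-relative-entropy, updating subject to a linear constraint E_q[f] = 0 has an exponential-tilt solution q(x) ∝ p(x) exp(λ f(x)). A conditional-probability constraint q(A | B) = c is the linear case f(x) = 1[A∩B](x) − c · 1[B](x).

```python
import math

def mre_update(p, f, target=0.0, tol=1e-12):
    """Minimum-relative-entropy (infomin) update of distribution p subject to
    the linear constraint E_q[f] = target.  The minimiser of KL(q || p) is the
    exponential tilt q(x) proportional to p(x) * exp(lam * f(x));
    lam is found by bisection, since E_q[f] is monotone in lam."""
    xs = list(p)
    def expect(lam):
        w = {x: p[x] * math.exp(lam * f(x)) for x in xs}
        z = sum(w.values())
        return sum(f(x) * w[x] for x in xs) / z
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if expect(mid) < target else (lo, mid)
    w = {x: p[x] * math.exp(lo * f(x)) for x in xs}
    z = sum(w.values())
    return {x: w[x] / z for x in xs}

# Uniform prior on {0, 1, 2, 3}; impose the conditional-probability
# constraint q(X = 0 | X in {0, 1}) = 0.9, encoded as E_q[f] = 0 with
# f(x) = 1[x = 0] - 0.9 * 1[x in {0, 1}].
p = {x: 0.25 for x in range(4)}
f = lambda x: (x == 0) - 0.9 * (x in (0, 1))
q = mre_update(p, f)
print(round(q[0] / (q[0] + q[1]), 6))  # 0.9: the constraint is satisfied
```

The prior here is strictly positive and the constraint is a conditional-probability constraint, so by the paper's result this update, though it satisfies the constraint exactly, is an instance of the deceptive behaviour the authors analyse.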