Results 1–10 of 20
An algebra for probabilistic databases
"... An algebra is presented for a simple probabilistic data model that may be regarded as an extension of the standard relational model. The probabilistic algebra is developed in such a way that (restricted to αacyclic database schemes) the relational algebra is a homomorphic image of it. Strictly prob ..."
Abstract

Cited by 133 (1 self)
An algebra is presented for a simple probabilistic data model that may be regarded as an extension of the standard relational model. The probabilistic algebra is developed in such a way that (restricted to α-acyclic database schemes) the relational algebra is a homomorphic image of it. Strictly probabilistic results are emphasized. Variations on the basic probabilistic data model are discussed. The algebra is used to explicate a commonly used statistical smoothing procedure and is shown to be potentially very useful for decision support with uncertain information.
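To give a rough feel for what a probabilistic extension of the relational model involves, here is a minimal sketch in Python. It is illustrative only, not the algebra defined in the paper: the relation encoding, the independence assumption in the join, and all names are invented for the example.

    # A probabilistic relation: each tuple carries a probability of
    # actually belonging to the relation.
    def select(rel, pred):
        """Keep tuples satisfying pred; probabilities are unchanged."""
        return {t: p for t, p in rel.items() if pred(t)}

    def join(r, s):
        """Natural join on the first attribute, multiplying probabilities
        (this assumes tuple events are independent, an assumption the
        paper's algebra does not necessarily make)."""
        out = {}
        for (a, b), p in r.items():
            for (a2, c), q in s.items():
                if a == a2:
                    out[(a, b, c)] = p * q
        return out

    employees = {("alice", "sales"): 0.9, ("bob", "ops"): 0.6}
    salaries = {("alice", 70000): 0.8, ("bob", 55000): 1.0}
    print(join(employees, salaries))  # ('alice', 'sales', 70000) gets 0.72

Setting every probability to 1 recovers ordinary relations, which is the flavor of the homomorphism claim in the abstract.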
Game Theory, Maximum Entropy, Minimum Discrepancy And Robust Bayesian Decision Theory
 ANNALS OF STATISTICS
, 2004
"... ..."
Updating Probabilities
, 2002
"... As examples such as the Monty Hall puzzle show, applying conditioning to update a probability distribution on a "naive space", which does not take into account the protocol used, can often lead to counterintuitive results. Here we examine why. A criterion known as CAR ("coarsening a ..."
Abstract

Cited by 59 (6 self)
As examples such as the Monty Hall puzzle show, applying conditioning to update a probability distribution on a "naive space", which does not take into account the protocol used, can often lead to counterintuitive results. Here we examine why. A criterion known as CAR ("coarsening at random") in the statistical literature characterizes when "naive" conditioning in a naive space works. We show that the CAR condition holds rather infrequently, and we provide a procedural characterization of it, by giving a randomized algorithm that generates all and only distributions for which CAR holds. This substantially extends previous characterizations of CAR. We also consider more generalized notions of update such as Jeffrey conditioning and minimizing relative entropy (MRE). We give a generalization of the CAR condition that characterizes when Jeffrey conditioning leads to appropriate answers, and show that there exist some very simple settings in which MRE essentially never gives the right results. This generalizes and interconnects previous results obtained in the literature on CAR and MRE.
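The protocol point is easy to check by simulation. The following sketch (ours, not the paper's) conditions on the event that the host opened door 2 under the standard Monty Hall protocol; naive conditioning on "the car is not behind door 2" would instead suggest a 50/50 split between the remaining doors.

    import random

    def trial():
        """One Monty Hall round; the contestant always picks door 0 and
        the host opens an unchosen door hiding a goat."""
        car = random.randrange(3)
        opened = random.choice([d for d in (1, 2) if d != car])
        return car, opened

    n = wins_stay = wins_switch = 0
    for _ in range(100_000):
        car, opened = trial()
        if opened != 2:            # condition on: host opened door 2
            continue
        n += 1
        wins_stay += (car == 0)    # staying with door 0 wins
        wins_switch += (car == 1)  # switching to door 1 wins
    print(wins_stay / n, wins_switch / n)  # roughly 0.33 and 0.67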
Set-Based Bayesianism
, 1992
"... . Problems for strict and convex Bayesianism are discussed. A setbased Bayesianism generalizing convex Bayesianism and intervalism is proposed. This approach abandons not only the strict Bayesian requirement of a unique realvalued probability function in any decisionmaking context but also the re ..."
Abstract

Cited by 28 (0 self)
Problems for strict and convex Bayesianism are discussed. A set-based Bayesianism generalizing convex Bayesianism and intervalism is proposed. This approach abandons not only the strict Bayesian requirement of a unique real-valued probability function in any decision-making context but also the requirement of convexity for a set-based representation of uncertainty. Levi's E-admissibility decision criterion is retained and is shown to be applicable in the non-convex case. Keywords: Uncertainty, decision-making, maximum entropy, Bayesian methods. 1. Introduction. The reigning philosophy of uncertainty representation is strict Bayesianism. One of its central principles is that an agent must adopt a single, real-valued probability function over the events recognized as relevant to a given problem. Prescriptions for defining such a function for a given agent in a given situation range from the extreme personalism of de Finetti (1964, 1974) and Savage (1972) to the objective Bayesianism of...
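To make the retained decision criterion concrete: an act is E-admissible if it maximizes expected utility under at least one probability function in the agent's set. A minimal sketch, with invented utilities and a deliberately non-convex two-point probability set:

    # Two states; acts given by utility vectors (u_state1, u_state2).
    acts = {"a1": (10.0, 0.0), "a2": (0.0, 10.0), "a3": (4.0, 4.0)}

    # Non-convex representation: only two probability functions for
    # state 1, with nothing in between.
    prob_set = [0.2, 0.8]

    def e_admissible(acts, prob_set):
        """Acts that maximize expected utility under some member."""
        winners = set()
        for p in prob_set:
            eu = {a: p * u1 + (1 - p) * u2 for a, (u1, u2) in acts.items()}
            best = max(eu.values())
            winners |= {a for a, v in eu.items() if v == best}
        return winners

    print(e_admissible(acts, prob_set))  # {'a1', 'a2'}; 'a3' is never optimal

Note that the criterion applies directly to the two-point set; no convex hull is needed, which matches the applicability claim in the abstract.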
Can the Maximum Entropy Principle Be Explained as a Consistency Requirement?
, 1997
"... The principle of maximumentropy is a general method to assign values to probability distributions on the basis of partial information. This principle, introduced by Jaynes in 1957, forms an extension of the classical principle of insufficient reason. It has been further generalized, both in mathe ..."
Abstract

Cited by 20 (1 self)
The principle of maximum entropy is a general method to assign values to probability distributions on the basis of partial information. This principle, introduced by Jaynes in 1957, forms an extension of the classical principle of insufficient reason. It has been further generalized, both in mathematical formulation and in intended scope, into the principle of maximum relative entropy or of minimum information. It has been claimed that these principles are singled out as unique methods of statistical inference that agree with certain compelling consistency requirements. This paper reviews these consistency arguments and the surrounding controversy. It is shown that the uniqueness proofs are flawed, or rest on unreasonably strong assumptions. A more general class of inference rules, maximizing the so-called Rényi entropies, is exhibited which also fulfills the reasonable part of the consistency assumptions. 1 Introduction In any application of probability theory to the pro...
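For orientation, the standard formulation at issue (textbook material, not this paper's contribution): among distributions satisfying an expectation constraint, the maximum entropy principle selects

    \max_{p}\; H(p) = -\sum_i p_i \log p_i
    \quad\text{subject to}\quad \sum_i p_i = 1, \qquad \sum_i f(x_i)\, p_i = F,

whose solution has the exponential form

    p_i^{*} = \frac{e^{-\lambda f(x_i)}}{Z(\lambda)}, \qquad
    Z(\lambda) = \sum_i e^{-\lambda f(x_i)},

with \lambda chosen so the constraint holds. The uniqueness arguments the paper examines ask whether consistency axioms force this choice rather than, say, a Rényi-entropy maximizer.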
Representation Dependence in Probabilistic Inference
 JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH
, 2004
"... Nondeductive reasoning systems are often representation dependent: representing the same situation in two different ways may cause such a system to return two different answers. Some have viewed ..."
Abstract

Cited by 20 (1 self)
Nondeductive reasoning systems are often representation dependent: representing the same situation in two different ways may cause such a system to return two different answers. Some have viewed ...
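A standard illustration of the phenomenon (a textbook example, not necessarily the one the paper analyzes): applying the principle of indifference to the coarse vocabulary {red, not-red} gives

    P(red) = 1/2,

while applying it to the finer vocabulary {red, blue, green} gives

    P(red) = 1/3,

so the same state of ignorance, represented in two ways, yields two different answers.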
A contrast between two decision rules for use with (convex) sets of probabilities: Γ-Maximin versus E-admissibility.
, 2002
"... ..."
Probability Update: Conditioning vs. Cross-Entropy
 In Proc. Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI)
, 1997
"... Conditioning is the generally agreedupon method for updating probability distributions when one learns that an event is certainly true. But it has been argued that we need other rules, in particular the rule of crossentropy minimization, to handle updates that involve uncertain information. In thi ..."
Abstract

Cited by 16 (2 self)
Conditioning is the generally agreed-upon method for updating probability distributions when one learns that an event is certainly true. But it has been argued that we need other rules, in particular the rule of cross-entropy minimization, to handle updates that involve uncertain information. In this paper we reexamine such a case: van Fraassen's Judy Benjamin problem [1987], which in essence asks how one might update given the value of a conditional probability. We argue that, contrary to the suggestions in the literature, it is possible to use simple conditionalization in this case, and thereby obtain answers that agree fully with intuition. This contrasts with proposals such as cross-entropy, which are easier to apply but can give unsatisfactory answers. Based on the lessons from this example, we speculate on some general philosophical issues concerning probability update. 1 INTRODUCTION How should one update one's beliefs, represented as a probability distribution Pr over some ...
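The kind of behavior at issue can be reproduced numerically. Below is a sketch (ours, not the paper's) of the standard Judy Benjamin setup: a prior of 1/2 on Blue and 1/4 on each Red cell, updated by minimizing relative entropy subject to P(Red-2nd | Red) = 3/4.

    from math import log
    from scipy.optimize import minimize_scalar

    prior = {"red_hq": 0.25, "red_2nd": 0.25, "blue": 0.5}

    def kl(r):
        """Relative entropy to the prior when P(Red) = r and, within
        Red, the constraint P(Red-2nd | Red) = 3/4 holds."""
        q = {"red_hq": r / 4, "red_2nd": 3 * r / 4, "blue": 1 - r}
        return sum(q[k] * log(q[k] / prior[k]) for k in q)

    r = minimize_scalar(kl, bounds=(1e-9, 1 - 1e-9), method="bounded").x
    print({"red_hq": r / 4, "red_2nd": 3 * r / 4, "blue": 1 - r})
    # P(blue) comes out near 0.53, above the prior 0.5, even though the
    # evidence concerns only the Red region: the counterintuitive
    # feature debated in the literature.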
Prior Information and Uncertainty in Inverse Problems
, 2001
"... Solving any inverse problem requires understanding the uncertainties in the data to know what it means to fit the data. We also need methods to incorporate dataindependent prior information to eliminate unreasonable models that fit the data. Both of these issues involve subtle choices that may ..."
Abstract

Cited by 14 (5 self)
Solving any inverse problem requires understanding the uncertainties in the data to know what it means to fit the data. We also need methods to incorporate data-independent prior information to eliminate unreasonable models that fit the data. Both of these issues involve subtle choices that may significantly influence the results of inverse calculations. The specification of prior information is especially controversial. How does one quantify information? What does it mean to know something about a parameter a priori? In this tutorial we discuss Bayesian and frequentist methodologies that can be used to incorporate information into inverse calculations. In particular we show that apparently conservative Bayesian choices, such as representing interval constraints by uniform probabilities (as is commonly done when using genetic algorithms, for example), may lead to artificially small uncertainties. We also describe tools from statistical decision theory that can be used to...
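The "artificially small uncertainties" point can be seen in a few lines (an illustrative simulation, not the paper's example). Suppose 50 parameters are each known only to lie in [0, 1] and we encode that as independent uniform priors; the induced prior on their mean is then far tighter than the interval constraints alone justify.

    import random

    N, TRIALS = 50, 20_000
    means = sorted(sum(random.random() for _ in range(N)) / N
                   for _ in range(TRIALS))
    lo, hi = means[int(0.025 * TRIALS)], means[int(0.975 * TRIALS)]
    print(f"95% of the prior mass on the mean lies in [{lo:.3f}, {hi:.3f}]")
    # Roughly [0.42, 0.58], although each parameter may sit anywhere in
    # [0, 1]: the prior has quietly added information the constraints lack.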
The Constraint Rule of the Maximum Entropy Principle
, 1995
"... The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability distri ..."
Abstract

Cited by 11 (0 self)
The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability distributions. In practical applications, however, the information consists of empirical data. A constraint rule is then employed to construct constraints on probability distributions out of these data. Usually one adopts the rule of equating the expectation values of certain functions with their empirical averages. There are, however, various other ways in which one can construct constraints from empirical data, which make the maximum entropy principle lead to very different probability assignments. This paper shows that an argument by Jaynes to justify the usual constraint rule is unsatisfactory and investigates several alternative choices. The choice of a constraint rule is also shown...
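A minimal sketch of the usual constraint rule in action, using the standard dice example with a hypothetical sample average of 4.5 (not code from the paper): equate the expected face value with the empirical average, then solve for the Lagrange multiplier of the resulting maximum entropy distribution.

    from math import exp
    from scipy.optimize import brentq

    faces = range(1, 7)
    empirical_mean = 4.5          # hypothetical sample average

    def mean_at(lam):
        """Mean face value under p_i proportional to exp(-lam * i)."""
        z = sum(exp(-lam * i) for i in faces)
        return sum(i * exp(-lam * i) for i in faces) / z

    lam = brentq(lambda l: mean_at(l) - empirical_mean, -5.0, 5.0)
    z = sum(exp(-lam * i) for i in faces)
    p = {i: exp(-lam * i) / z for i in faces}
    print(p)   # skewed toward high faces so that the mean is 4.5

A different constraint rule, say matching the average of the log of the face value instead, would feed the same machinery different constraints and yield a different assignment, which is the sensitivity the paper investigates.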