Results 1–10 of 14
Updating Probabilities
, 2002
Abstract

Cited by 53 (6 self)
As examples such as the Monty Hall puzzle show, applying conditioning to update a probability distribution on a "naive space", which does not take into account the protocol used, can often lead to counterintuitive results. Here we examine why. A criterion known as CAR ("coarsening at random") in the statistical literature characterizes when "naive" conditioning in a naive space works. We show that the CAR condition holds rather infrequently, and we provide a procedural characterization of it, by giving a randomized algorithm that generates all and only distributions for which CAR holds. This substantially extends previous characterizations of CAR. We also consider more generalized notions of update such as Jeffrey conditioning and minimizing relative entropy (MRE). We give a generalization of the CAR condition that characterizes when Jeffrey conditioning leads to appropriate answers, and show that there exist some very simple settings in which MRE essentially never gives the right results. This generalizes and interconnects previous results obtained in the literature on CAR and MRE.
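A minimal, self-contained sketch (ours, not the paper's) of the point this abstract makes: naive conditioning on the event "the car is not behind door 3" gives 1/2, while conditioning within the actual protocol (the host deliberately opens a non-chosen goat door, here with a uniform tie-break, an assumption of this sketch) gives 1/3 for staying.

```python
from fractions import Fraction

# Contestant picks door 1; the car is equally likely behind doors 1, 2, 3.
# Host protocol: open a non-chosen goat door, uniform tie-break when
# both doors 2 and 3 are available. Enumerate (car, opened) exactly.
outcomes = {}  # (car, opened) -> probability
for car in (1, 2, 3):
    p_car = Fraction(1, 3)
    openable = [d for d in (2, 3) if d != car]  # host never opens the car or door 1
    for opened in openable:
        outcomes[(car, opened)] = outcomes.get((car, opened), 0) + p_car / len(openable)

# Protocol-aware conditioning on the event "host opened door 3":
evidence = {k: p for k, p in outcomes.items() if k[1] == 3}
total = sum(evidence.values())
p_stay = sum(p for (car, _), p in evidence.items() if car == 1) / total

# Naive conditioning in the naive space, on "car is not behind door 3":
p_stay_naive = Fraction(1, 3) / (Fraction(1, 3) + Fraction(1, 3))

print(p_stay)        # 1/3: the protocol-aware answer for staying
print(p_stay_naive)  # 1/2: the naive answer, which ignores the protocol
```

The two answers differ precisely because the naive space forgets that the observation "door 3 was opened" carries information about the host's choices, not just about where the car is not.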
Secrecy in multiagent systems
Abstract

Cited by 41 (5 self)
We introduce a general framework for reasoning about secrecy requirements in multiagent systems. Because secrecy requirements are closely connected with the knowledge of individual agents of a system, our framework employs the modal logic of knowledge within the context of the well-studied runs-and-systems framework. Put simply, “secrets” are facts about a system that low-level agents are never allowed to know. The framework presented here allows us to formalize this intuition precisely, in a way that is much in the spirit of Sutherland’s notion of nondeducibility. Several well-known attempts to characterize the absence of information flow, including separability, generalized noninterference, and nondeducibility on strategies, turn out to be special cases of our definition of secrecy. However, our approach lets us go well beyond these definitions. It can handle probabilistic secrecy in a clean way, and it suggests generalizations of secrecy that may be useful for dealing with resource-bounded reasoning and with issues such as downgrading of information.
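As a toy illustration of the nondeducibility intuition (our own construction, not the paper's formalism): with a finite set of runs, each abstracted to a (high-level value, low-level observation) pair, the low-level agent learns nothing about the high-level value exactly when every low observation is compatible with every high value.

```python
from itertools import product

def nondeducible(runs):
    """Sutherland-style nondeducibility on a finite set of runs, each a
    (high_value, low_observation) pair: the low-level observer learns
    nothing iff every observed low value co-occurs with every high value."""
    highs = {h for h, _ in runs}
    lows = {l for _, l in runs}
    return all((h, l) in runs for h, l in product(highs, lows))

# Leaky system: each low observation pins down the high-level secret.
leaky = {("secret1", "obs_a"), ("secret2", "obs_b")}
# Secure system: every (secret, observation) combination occurs.
secure = {(h, l) for h in ("secret1", "secret2") for l in ("obs_a", "obs_b")}

print(nondeducible(leaky))   # False
print(nondeducible(secure))  # True
```

The paper's actual definitions are stated over the full runs-and-systems model with epistemic logic; this check captures only the possibilistic special case.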
Updating Beliefs with Incomplete Observations
Abstract

Cited by 32 (10 self)
Currently, there is renewed interest in the problem, raised by Shafer in 1985, of updating probabilities when observations are incomplete (or set-valued). This is a fundamental problem in general, and of particular interest for Bayesian networks. Recently, Grünwald and Halpern have shown that commonly used updating strategies fail in this case, except under very special assumptions. In this paper we propose a new method for updating probabilities with incomplete observations. Our approach is deliberately conservative: we make no assumptions about the so-called incompleteness mechanism that associates complete with incomplete observations. We model our ignorance about this mechanism by a vacuous lower prevision, a tool from the theory of imprecise probabilities, and we use only coherence arguments to turn prior into posterior (updated) probabilities. In general, this new approach to updating produces lower and upper posterior probabilities and previsions (expectations), as well as partially determinate decisions. This is a logical consequence of the existing ignorance about the incompleteness mechanism. As an example, we use the new updating method to properly address the apparent paradox in the `Monty Hall' puzzle. More importantly, we apply it to the problem of classification of new evidence in probabilistic expert systems, where it leads to a new, so-called conservative updating rule.
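A small sketch, under our own simplifying assumptions, of what being vacuous about the incompleteness mechanism looks like in the Monty Hall setting: the contestant picks door 1 and the host opens door 3; with q = P(open door 3 | car behind door 1) unknown, ranging q over [0, 1] turns the single posterior into lower and upper posteriors. The grid over q is purely illustrative; the bounds are attained at the endpoints.

```python
from fractions import Fraction

def posterior_stay(q):
    """P(car behind door 1 | host opened door 3) for host tie-break q,
    where q = P(open door 3 | car behind door 1)."""
    num = Fraction(1, 3) * q
    den = Fraction(1, 3) * q + Fraction(1, 3)  # car behind 2 forces door 3
    return num / den

# Vacuous beliefs about the mechanism: consider every q in [0, 1].
qs = [Fraction(k, 100) for k in range(101)]
lower = min(posterior_stay(q) for q in qs)
upper = max(posterior_stay(q) for q in qs)
print(lower, upper)  # 0 and 1/2: imprecise posterior for "staying wins"
```

Equivalently, the probability that switching wins lies in [1/2, 1], so switching is never worse than staying no matter how the host breaks ties; this is the flavour of partially determinate decision the abstract refers to.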
Conservative inference rule for uncertain reasoning under incompleteness
 Journal of Artificial Intelligence Research
Abstract

Cited by 10 (6 self)
In this paper we formulate the problem of inference under incomplete information in very general terms. This includes modelling the process responsible for the incompleteness, which we call the incompleteness process. We allow the process' behaviour to be partly unknown. Then we use Walley's theory of coherent lower previsions, a generalisation of the Bayesian theory to imprecision, to derive the rule for updating beliefs under incompleteness that logically follows from our assumptions, which we call the conservative inference rule. This rule has some remarkable properties: it is an abstract rule for updating beliefs that can be applied in any situation or domain; it lets us be neither too optimistic nor too pessimistic about the incompleteness process, which is necessary in order to draw conclusions that are reliable yet strong enough; and it is a coherent rule, in the sense that it cannot lead to inconsistencies. We give examples to show how the new rule can be applied in expert systems, in parametric statistical inference, and in pattern classification, and discuss more generally the view of incompleteness processes defended here, as well as some of its consequences.
Updating With Incomplete Observations
 Uncertainty in Artificial Intelligence: Proceedings of the Nineteenth Conference (UAI-2003)
, 2003
Abstract

Cited by 4 (2 self)
Currently, there is renewed interest in the problem, raised by Shafer in 1985, of updating probabilities when observations are incomplete (or set-valued). This is a fundamental problem, and of particular interest for Bayesian networks. Recently, Grünwald and Halpern have shown that commonly used updating strategies fail here, except under very special assumptions. We propose a new rule for updating probabilities with incomplete observations. Our approach is deliberately conservative: we make no or weak assumptions about the so-called incompleteness mechanism that produces incomplete observations. We model our ignorance about this mechanism by a vacuous lower prevision, a tool from the theory of imprecise probabilities, and we derive a new updating rule using coherence arguments. In general, our rule produces lower posterior probabilities, as well as partially determinate decisions.
Utility, informativity and protocols
 Proceedings of LOFT 5: Logic and the Foundations of the Theory of Games and Decisions
, 2001
Abstract

Cited by 4 (3 self)
this paper is to extend this investigation in several ways. The second contribution is to measure the relevance/utility of non-partitional questions, and show how different proposals (using either protocols or likelihood functions) come down to the same thing
Semi-Parametric Regression With Coarsely Observed Regressors
Abstract

Cited by 3 (2 self)
Semiparametric regression models with coarsely observed regressors are considered. Assuming coarsening at random, √n-consistent estimators are given when the coarsening mechanism is either known or specified by a parametric model. Efficiency is discussed.
Building knowledge-based systems by credal networks: a tutorial
 ADVANCES IN MATHEMATICS RESEARCH
, 2010
Abstract

Cited by 3 (3 self)
Knowledge-based systems are computer programs achieving expert-level competence in solving problems for specific task areas. This chapter is a tutorial on the implementation of this kind of system in the framework of credal networks. Credal networks are a generalization of Bayesian networks where credal sets, i.e., closed convex sets of probability measures, are used instead of precise probabilities. This allows for a more flexible model of the knowledge, which can represent ambiguity, contrast and contradiction in a natural and realistic way. The discussion guides the reader through the different steps involved in the specification of a system, from the evocation and elicitation of the knowledge to the interaction with the system by adequate inference algorithms. Our approach is characterized by a sharp distinction between the domain knowledge and the process linking this knowledge to the perceived evidence, which we call the observational process. This distinction leads to a very flexible representation of both domain knowledge and knowledge about the way the information is collected, together with a technique to aggregate information coming from different sources. The overall procedure is illustrated throughout the chapter by a simple knowledge-based system for the prediction of the result of a football match.
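To give a feel for credal-set inference (a toy sketch of our own, with made-up numbers; real credal-network algorithms are considerably more sophisticated): in a two-node network A → B where the marginal of A is only known to an interval, bounds on P(B) can be obtained by evaluating the query at the extreme points of the credal set.

```python
# Credal set for P(A=true): the interval [0.2, 0.3], whose extreme
# points are its two endpoints. P(B|A) is precise in this toy example.
p_a_extremes = [0.2, 0.3]
p_b_given_a = {True: 0.9, False: 0.1}

def p_b(p_a):
    """P(B) by total probability for a fixed marginal P(A=true) = p_a."""
    return p_a * p_b_given_a[True] + (1 - p_a) * p_b_given_a[False]

# Inference: optimize the query over the extreme points of the credal set.
values = [p_b(pa) for pa in p_a_extremes]
lower, upper = min(values), max(values)
print(round(lower, 2), round(upper, 2))  # 0.26 0.34
```

The answer is itself an interval rather than a single probability, which is how a credal network represents the ambiguity the abstract describes.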
Relative Coarsening at Random.
, 1997
Abstract

Cited by 2 (1 self)
Many types of data are often incompletely observed. How incompletely is typically randomly determined. Heitjan and Rubin (Annals of Statistics (1991)) proposed a condition, "coarsened at random" or CAR, ensuring ignorability of this randomness in discrete sample spaces. In general sample spaces CAR comes in two flavors according to whether it is defined in terms of probabilities or densities. In this paper, CAR defined in terms of densities, called relative CAR, is discussed as a condition for ignorability in a statistical model allowing for partial observation of random elements determining the degree of incompleteness in the observation of the data. Keywords: incomplete observations, coarsening, coarsened at random, ignorability, censoring.
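For reference, in the discrete setting the probability-flavoured CAR condition this abstract contrasts with its density-based variant can be written as follows (a standard statement, not a quote from the paper): the coarse observation $Y$ is a set containing the true value $X$, and coarsening is "at random" when

```latex
% Discrete CAR (in the style of Heitjan & Rubin, 1991): Y is a random
% set with X \in Y almost surely; CAR holds iff, for every observable y,
\Pr(Y = y \mid X = x) = \Pr(Y = y \mid X = x')
\quad \text{for all } x, x' \in y \text{ with } \Pr(X = x), \Pr(X = x') > 0.
% Under CAR, naive conditioning on the event \{X \in y\} coincides with
% conditioning on the actual observation event \{Y = y\}.
```

Relative CAR, as described above, replaces the probabilities in this display with densities so that the condition also makes sense in general sample spaces.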
On the testability of the CAR assumption
 The Annals of Statistics
, 2004
Abstract

Cited by 2 (0 self)
In recent years, a popular nonparametric model for coarsened data has been an assumption on the coarsening mechanism called coarsening at random (CAR). It has been conjectured in several papers that this assumption cannot be tested by the data, that is, that the assumption does not restrict the possible distributions of the data. In this paper we show that this conjecture is not always true; current status data provide a counterexample. We also give conditions under which the conjecture is true, and in doing so we introduce a generalized version of the CAR assumption. As an illustration, we retrieve the well-known result that the CAR assumption cannot be tested in the case of right-censored data.