Results 1 - 10 of 43
On the Hardness of Approximate Reasoning
1996
"... Many AI problems, when formalized, reduce to evaluating the probability that a propositional expression is true. In this paper we show that this problem is computationally intractable even in surprisingly restricted cases and even if we settle for an approximation to this probability. We consider va ..."
Abstract

Cited by 225 (13 self)
Many AI problems, when formalized, reduce to evaluating the probability that a propositional expression is true. In this paper we show that this problem is computationally intractable even in surprisingly restricted cases and even if we settle for an approximation to this probability. We consider various methods used in approximate reasoning such as computing degree of belief and Bayesian belief networks, as well as reasoning techniques such as constraint satisfaction and knowledge compilation, that use approximation to avoid computational difficulties, and reduce them to model-counting problems over a propositional domain. We prove that counting satisfying assignments of propositional languages is intractable even for Horn and monotone formulae, and even when the size of clauses and number of occurrences of the variables are extremely limited. This should be contrasted with the case of deductive reasoning, where Horn theories and theories with binary clauses are distinguished by the e...
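To make the model-counting problem concrete, here is a minimal brute-force #SAT sketch (the clause encoding and function name are illustrative, not from the paper); it runs in time exponential in the number of variables, and the abstract's point is that nothing substantially better, exact or approximate, should be expected even for Horn or monotone inputs.

```python
from itertools import product

def count_models(clauses, n_vars):
    """Brute-force #SAT: count the satisfying assignments of a CNF formula.

    clauses: list of clauses; each clause is a list of non-zero ints, where
    literal k denotes variable |k| (1-indexed), negated when k < 0.
    """
    count = 0
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

# Horn formula (x1) AND (~x1 OR x2) over variables x1..x3:
# models must set x1 = x2 = True, x3 is free, so there are 2 of them.
print(count_models([[1], [-1, 2]], n_vars=3))  # -> 2
```
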
Probabilistic Reasoning in Terminological Logics
1994
"... In this paper a probabilistic extensions for terminological knowledge representation languages is defined. Two kinds of probabilistic statements are introduced: statements about conditional probabilities between concepts and statements expressing uncertain knowledge about a specific object. The usua ..."
Abstract

Cited by 75 (5 self)
In this paper a probabilistic extension of terminological knowledge representation languages is defined. Two kinds of probabilistic statements are introduced: statements about conditional probabilities between concepts and statements expressing uncertain knowledge about a specific object. The usual model-theoretic semantics for terminological logics are extended to define interpretations for the resulting probabilistic language. It is our main objective to find an adequate modelling of the way the two kinds of probabilistic knowledge are combined in commonsense inferences of probabilistic statements. Cross entropy minimization is a technique that turns out to be very well suited for achieving this end. 1 INTRODUCTION Terminological knowledge representation languages (concept languages, terminological logics) are used to describe hierarchies of concepts. While the expressive power of the various languages that have been defined (e.g. KL-ONE [BS85], ALC [SSS91]) varies greatly in that ...
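As a rough sketch of the cross-entropy machinery the abstract appeals to (not the paper's inference procedure for terminological logics; the function name, the single-constraint setting, and the bisection bounds are our assumptions), the following minimizes the KL divergence to a prior over finitely many worlds subject to one expectation constraint, using the standard exponential-tilting form of the solution.

```python
import math

def min_cross_entropy(prior, f, target, lo=-50.0, hi=50.0, iters=100):
    """Find q minimizing KL(q || prior) subject to E_q[f] = target.

    The minimizer has the form q_i proportional to prior_i * exp(lam * f_i);
    since E_q[f] is increasing in lam, lam is found by bisection.
    """
    def q(lam):
        w = [p * math.exp(lam * fi) for p, fi in zip(prior, f)]
        z = sum(w)
        return [wi / z for wi in w]

    def expectation(lam):
        return sum(qi * fi for qi, fi in zip(q(lam), f))

    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if expectation(mid) < target:
            lo = mid
        else:
            hi = mid
    return q((lo + hi) / 2.0)

# Uniform prior over four worlds; constrain the first two (say, the worlds in
# which a Bird also satisfies Flies) to carry total probability 0.9.
print(min_cross_entropy([0.25] * 4, [1, 1, 0, 0], 0.9))
# -> approximately [0.45, 0.45, 0.05, 0.05]
```
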
Learning to reason
Journal of the ACM, 1994
"... Abstract. We introduce a new framework for the study of reasoning. The Learning (in order) to Reason approach developed here views learning as an integral part of the inference process, and suggests that learning and reasoning should be studied together. The Learning to Reason framework combines the ..."
Abstract

Cited by 57 (24 self)
We introduce a new framework for the study of reasoning. The Learning (in order) to Reason approach developed here views learning as an integral part of the inference process, and suggests that learning and reasoning should be studied together. The Learning to Reason framework combines the interfaces to the world used by known learning models with the reasoning task and a performance criterion suitable for it. In this framework, the intelligent agent is given access to its favorite learning interface, and is also given a grace period in which it can interact with this interface and construct a representation KB of the world W. The reasoning performance is measured only after this period, when the agent is presented with queries α from some query language, relevant to the world, and has to answer whether W implies α. The approach is meant to overcome the main computational difficulties in the traditional treatment of reasoning which stem from its separation from the “world”. Since the agent interacts with the world when constructing its knowledge representation it can choose a representation that is useful for the task at hand. Moreover, we can now make explicit the dependence of the reasoning performance on the environment the agent interacts with. We show how previous results from learning theory and reasoning fit into this framework and
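The protocol the abstract describes, a grace period of interaction followed by entailment queries answered from the learned KB rather than from W itself, can be sketched for a toy world: a hidden monotone conjunction learned from random labelled examples. The world, the learning interface, and the query language below are stand-ins chosen for brevity, not the paper's general setting.

```python
import random

def learning_to_reason_demo(n=10, examples=200, seed=0):
    rng = random.Random(seed)
    target = {1, 3, 7}                     # hidden world W: a monotone conjunction

    def world(x):                          # x is a tuple of n bits
        return all(x[i] for i in target)

    # Grace period: draw labelled examples and build the representation KB.
    kb = set(range(n))                     # start with the most specific conjunction
    for _ in range(examples):
        x = tuple(rng.randint(0, 1) for _ in range(n))
        if world(x):                       # keep only variables true in every positive example
            kb &= {i for i, bit in enumerate(x) if bit}

    # Reasoning phase: answer "does W imply alpha?" using KB in place of W.
    def entails(alpha):                    # alpha is a conjunction, given as a set of variables
        return alpha <= kb

    return kb, entails

kb, entails = learning_to_reason_demo()
print(kb, entails({1, 3}), entails({2}))   # typically {1, 3, 7} True False
```
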
Statistical Foundations for Default Reasoning
1993
"... We describe a new approach to default reasoning, based on a principle of indifference among possible worlds. We interpret default rules as extreme statistical statements, thus obtaining a knowledge base KB comprised of statistical and firstorder statements. We then assign equal probability to all w ..."
Abstract

Cited by 45 (8 self)
We describe a new approach to default reasoning, based on a principle of indifference among possible worlds. We interpret default rules as extreme statistical statements, thus obtaining a knowledge base KB comprised of statistical and first-order statements. We then assign equal probability to all worlds consistent with KB in order to assign a degree of belief to a statement φ. The degree of belief can be used to decide whether to defeasibly conclude φ. Various natural patterns of reasoning, such as a preference for more specific defaults, indifference to irrelevant information, and the ability to combine independent pieces of evidence, turn out to follow naturally from this technique. Furthermore, our approach is not restricted to default reasoning; it supports a spectrum of reasoning, from quantitative to qualitative. It is also related to other systems for default reasoning. In particular, we show that the work of [Goldszmidt et al., 1990], which applies maximum entropy ideas t...
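A minimal propositional rendering of the indifference principle described here (the variables and knowledge base below are a made-up toy; the paper itself works with first-order and statistical knowledge bases): the degree of belief in φ is simply the fraction of worlds consistent with KB in which φ holds.

```python
from itertools import product

def degree_of_belief(n_vars, kb, phi):
    """Fraction of the worlds (truth assignments) consistent with kb that satisfy phi."""
    worlds = [w for w in product([False, True], repeat=n_vars) if kb(w)]
    if not worlds:
        return None                        # inconsistent KB: no degree of belief
    return sum(1 for w in worlds if phi(w)) / len(worlds)

# Worlds over (bird, penguin, flies); KB: the individual is a bird, and
# penguins do not fly.  Counting consistent worlds gives belief 1/3 in flies;
# the statistical (default) statements the paper adds are what move such
# numbers toward the intended default conclusions.
kb = lambda w: w[0] and (not w[1] or not w[2])
print(degree_of_belief(3, kb, phi=lambda w: w[2]))  # -> 0.333...
```
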
Attacks on privacy and de Finetti’s theorem
In SIGMOD, 2009
"... In this paper we present a method for reasoning about privacy using the concepts of exchangeability and deFinetti’s theorem. We illustrate the usefulness of this technique by using it to attack a popular data sanitization scheme known as Anatomy. We stress that Anatomy is not the only sanitization s ..."
Abstract

Cited by 39 (6 self)
In this paper we present a method for reasoning about privacy using the concepts of exchangeability and de Finetti’s theorem. We illustrate the usefulness of this technique by using it to attack a popular data sanitization scheme known as Anatomy. We stress that Anatomy is not the only sanitization scheme that is vulnerable to this attack. In fact, any scheme that uses the random worlds model, i.i.d. model, or tuple-independent model needs to be reevaluated. The difference between the attack presented here and others that have been proposed in the past is that we do not need extensive background knowledge. An attacker only needs to know the non-sensitive attributes of one individual in the data, and can carry out this attack just by building a machine learning model over the sanitized data. The reason this attack is successful is that it exploits a subtle flaw in the way prior work computed the probability of disclosure of a sensitive attribute. We demonstrate this theoretically, empirically, and with intuitive examples. We also discuss how this generalizes to many other privacy schemes.
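A toy illustration of the gap being exploited (the data layout, attribute names, and the one-step pooled estimate below are our own simplifications; the paper's attack fits a genuine machine-learning model over the sanitized tables): under the random-worlds / i.i.d. view, a record's disclosure probability is just the frequency of each sensitive value within its group, but the sanitized release itself supports sharper cross-group estimates.

```python
from collections import defaultdict

def anatomy_estimates(qi_table, sens_table):
    """qi_table: list of (record_id, nonsensitive_attr, group_id);
    sens_table: dict group_id -> list of sensitive values published for that group.

    Returns (within, pooled): the uniform within-group probabilities assumed by
    prior work, and a pooled estimate of P(sensitive | nonsensitive attribute)
    built from fractional counts across all groups.
    """
    within = {}
    for rid, attr, g in qi_table:
        vals = sens_table[g]
        within[rid] = {s: vals.count(s) / len(vals) for s in set(vals)}

    counts = defaultdict(lambda: defaultdict(float))
    totals = defaultdict(float)
    for rid, attr, g in qi_table:
        for s, p in within[rid].items():
            counts[attr][s] += p
            totals[attr] += p
    pooled = {rid: {s: counts[attr][s] / totals[attr] for s in counts[attr]}
              for rid, attr, g in qi_table}
    return within, pooled

qi = [(1, "smoker", "g1"), (2, "nonsmoker", "g1"),
      (3, "nonsmoker", "g2"), (4, "nonsmoker", "g2")]
sens = {"g1": ["cancer", "flu"], "g2": ["flu", "flu"]}
within, pooled = anatomy_estimates(qi, sens)
print(within[2], pooled[2])
# Within-group reasoning gives record 2 a 0.5 chance of cancer; pooling the
# nonsmoker evidence across groups drops it to about 0.17, which in turn
# concentrates the group's single cancer value on the smoker, record 1.
```
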
Learning Default Concepts
In Proceedings of the Tenth Canadian Conference on Artificial Intelligence (CSCSI94), 1994
"... Classical concepts, based on necessary and sufficient defining conditions, cannot classify logically insufficient object descriptions. Many reasoning systems avoid this limitation by using "default concepts" to classify incompletely described objects. This paper addresses the task of learn ..."
Abstract

Cited by 20 (7 self)
Classical concepts, based on necessary and sufficient defining conditions, cannot classify logically insufficient object descriptions. Many reasoning systems avoid this limitation by using "default concepts" to classify incompletely described objects. This paper addresses the task of learning such default concepts from observational data. We first model the underlying performance task, classifying incomplete examples, as a probabilistic process that passes random test examples through a "blocker" that can hide object attributes from the classifier. We then address the task of learning accurate default concepts from random training examples. After surveying the learning techniques that have been proposed for this task in the machine learning and knowledge representation literatures, and investigating their relative merits, we present a more data-efficient learning technique, developed from well-known statistical principles. Finally, we extend Valiant's PAC learning framework to ...
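The blocker in this performance model can be sketched directly (the attribute encoding, hiding probability, and names below are placeholders; the learning algorithms the paper compares are not reproduced): a fully specified example passes through a filter that independently hides each attribute, and the default concept must classify what remains.

```python
import random

def blocker(example, p_hide, rng):
    """Independently hide each attribute with probability p_hide; None marks a hidden value."""
    return tuple(None if rng.random() < p_hide else v for v in example)

def default_classify(blocked, required=(0, 2)):
    """A toy default concept: predict positive unless some *observed* required
    attribute is 0 (hidden attributes get the benefit of the doubt)."""
    return all(blocked[i] != 0 for i in required)

rng = random.Random(0)
incomplete = blocker((1, 0, 1, 1), p_hide=0.5, rng=rng)
print(incomplete, default_classify(incomplete))   # e.g. (1, None, None, 1) True
```
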
Interval-Valued Probabilities
1998
"... 0 =h 0 in the diagram. The sawtooth line reflects the fact that even when the principle of indifference can be applied, there may be arguments whose strength can be bounded no more precisely than by an adjacent pair of indifference arguments. Note that a=h in the diagram is bounded numerically on ..."
Abstract

Cited by 20 (1 self)
a'/h' in the diagram. The sawtooth line reflects the fact that even when the principle of indifference can be applied, there may be arguments whose strength can be bounded no more precisely than by an adjacent pair of indifference arguments. Note that a/h in the diagram is bounded numerically only by 0.0 and the strength of a''/h''. Keynes' ideas were taken up by B. O. Koopman [14, 15, 16], who provided an axiomatization for Keynes' probability values. The axioms are qualitative, and reflect what Keynes said about probability judgment. (It should be remembered that for Keynes probability judgment was intended to be objective in the sense that logic is objective. Although different people may accept different premises, whether or not a conclusion follows logically from a given set of premises is objective. Though Ramsey [26] attacked this aspect of Keynes' theory, it can be argued
Belief change as propositional update
Cognitive Science, 1997
"... Publication details, including instructions for authors and subscription information: ..."
Abstract

Cited by 18 (0 self)
Using First-Order Probability Logic for the Construction of Bayesian Networks
1993
"... We present a mechanism for constructing graphical models, specifically Bayesian networks, from a knowledge base of general probabilistic information. The unique feature of our approach is that it uses a powerful firstorder probabilistic logic for expressing the general knowledge base. This logic al ..."
Abstract

Cited by 18 (0 self)
We present a mechanism for constructing graphical models, specifically Bayesian networks, from a knowledge base of general probabilistic information. The unique feature of our approach is that it uses a powerful first-order probabilistic logic for expressing the general knowledge base. This logic allows for the representation of a wide range of logical and probabilistic information. The model construction procedure we propose uses notions from direct inference to identify pieces of local statistical information from the knowledge base that are most appropriate to the particular event we want to reason about. These pieces are composed to generate a joint probability distribution specified as a Bayesian network. Although there are fundamental difficulties in dealing with fully general knowledge, our procedure is practical for quite rich knowledge bases and it supports the construction of a far wider range of networks than allowed for by current template technology.
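The final composition step described here, assembling selected local conditional tables into a joint distribution via the Bayesian-network chain rule, is easy to make concrete. The two-node network, variable names, and table format below are our own; the knowledge-base-driven selection of the local pieces is the paper's actual contribution and is not modelled.

```python
def joint_probability(assignment, parents, cpt):
    """Chain rule: multiply each variable's conditional probability given its parents.

    assignment: dict variable -> value
    parents:    dict variable -> tuple of parent variables
    cpt:        dict variable -> {parent-value tuple: {value: probability}}
    """
    p = 1.0
    for var, val in assignment.items():
        key = tuple(assignment[u] for u in parents[var])
        p *= cpt[var][key][val]
    return p

parents = {"Smoker": (), "Cancer": ("Smoker",)}
cpt = {
    "Smoker": {(): {True: 0.3, False: 0.7}},
    "Cancer": {(True,): {True: 0.1, False: 0.9},
               (False,): {True: 0.01, False: 0.99}},
}
print(joint_probability({"Smoker": True, "Cancer": False}, parents, cpt))  # 0.3 * 0.9 = 0.27
```
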
Generating New Beliefs From Old
1994
"... In previous work [BGHK92, BGHK93], we have studied the randomworlds approacha particular (and quite powerful) method for generating degrees of belief (i.e., subjective probabilities) from a knowledge base consisting of objective (firstorder, statistical, and default) information. But allow ..."
Abstract

Cited by 13 (1 self)
In previous work [BGHK92, BGHK93], we have studied the random-worlds approach, a particular (and quite powerful) method for generating degrees of belief (i.e., subjective probabilities) from a knowledge base consisting of objective (first-order, statistical, and default) information. But allowing a knowledge base to contain only objective information is sometimes limiting. We occasionally wish to include information about degrees of belief in the knowledge base as well, because there are contexts in which old beliefs represent important information that should influence new beliefs. In this paper, we describe three quite general techniques for extending a method that generates degrees of belief from objective information to one that can make use of degrees of belief as well. All of our techniques are based on well-known approaches, such as cross-entropy. We discuss general connections between the techniques and in particular show that, although conceptually and techn...
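For the simplest case of combining old beliefs with a new constraint, the cross-entropy solution reduces to Jeffrey's rule: rescale the old distribution inside and outside the constrained event. A minimal sketch (the worlds and numbers are invented; the paper's three techniques handle far more general combinations of old beliefs with new objective information):

```python
def jeffrey_update(old, event, q):
    """Move the total probability of `event` to q while disturbing the old
    degrees of belief as little as possible in the cross-entropy sense."""
    p_event = sum(p for w, p in old.items() if w in event)
    assert 0.0 < p_event < 1.0, "event needs a nontrivial prior probability"
    return {w: p * (q / p_event if w in event else (1.0 - q) / (1.0 - p_event))
            for w, p in old.items()}

old = {"sunny": 0.5, "cloudy": 0.3, "rainy": 0.2}
print(jeffrey_update(old, {"rainy"}, q=0.5))
# -> {'sunny': 0.3125, 'cloudy': 0.1875, 'rainy': 0.5}
```
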