On the Hardness of Approximate Reasoning
, 1996
Abstract

Cited by 289 (13 self)
Many AI problems, when formalized, reduce to evaluating the probability that a propositional expression is true. In this paper we show that this problem is computationally intractable even in surprisingly restricted cases and even if we settle for an approximation to this probability. We consider various methods used in approximate reasoning, such as computing degree of belief and Bayesian belief networks, as well as reasoning techniques such as constraint satisfaction and knowledge compilation, that use approximation to avoid computational difficulties, and reduce them to model-counting problems over a propositional domain. We prove that counting satisfying assignments of propositional languages is intractable even for Horn and monotone formulae, and even when the size of clauses and number of occurrences of the variables are extremely limited. This should be contrasted with the case of deductive reasoning, where Horn theories and theories with binary clauses are distinguished by the e...
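The core object of this hardness result is propositional model counting (#SAT): the number of satisfying assignments of a formula. A brute-force counter makes the problem concrete (a minimal sketch; the integer-literal clause encoding is my own, not the paper's):

```python
from itertools import product

def count_models(clauses, n_vars):
    """Count the assignments over n_vars variables satisfying every clause.

    A clause is a tuple of integer literals: positive i means variable i,
    negative i means its negation (variables are numbered from 1).
    Brute force is exponential in n_vars, in line with the paper's claim
    that no efficient (even approximate) method exists in general.
    """
    count = 0
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

# A monotone 2-CNF (only positive literals, clause size 2), one of the
# restricted classes the paper proves hard: (x1 or x2) and (x2 or x3).
print(count_models([(1, 2), (2, 3)], 3))
```

Dividing the count by 2^n gives exactly the probability that a random assignment satisfies the expression, which is the quantity the abstract says is intractable to approximate.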
The Power of Sampling in Knowledge Discovery
, 1993
Abstract

Cited by 62 (2 self)
We consider the problem of approximately verifying the truth of sentences of tuple relational calculus in a given relation M by considering only a random sample of M. We define two different measures for the error of a universal sentence in a relation. For a set of n universal sentences each with at most k universal quantifiers, we give upper and lower bounds for the sample sizes required for having a high probability that all the sentences with error at least ε can be detected as false by considering the sample. The sample sizes are O((ln n)/ε) or O((|M|^{1-1/k} ln n)/ε), depending on the error measure used. We also consider universal-existential sentences. Computing Reviews Categories and Subject Descriptors: H.3.3 [Information Systems]: Information Storage and Retrieval - Information Search and Retrieval; F.2.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity - Nonnumerical Algorithms and Problems; G.3 [Mathematics of Computing]: Probability and Sta...
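The shape of the O((ln n)/ε) bound can be sketched as follows. This is an illustrative union-bound calculation under my own assumptions (a single-quantifier sentence checked row by row, failure probability delta = 0.05), not the paper's exact constants or error measures:

```python
import math
import random

def sample_size(n_sentences, epsilon, delta=0.05):
    """Illustrative O((ln n)/epsilon) bound: enough rows so that any one of
    n_sentences sentences with error at least epsilon is detected as false
    with probability >= 1 - delta, simultaneously for all sentences
    (a standard union-bound argument; constants are not the paper's)."""
    return math.ceil(math.log(n_sentences / delta) / epsilon)

def holds_on_sample(relation, sentence, m, rng=random):
    """Check a one-quantifier universal sentence on m randomly drawn rows.

    relation: list of tuples; sentence: predicate over a single row.
    Returns True if the sentence holds on every sampled row."""
    return all(sentence(rng.choice(relation)) for _ in range(m))
```

The point of the bound is that the sample size grows only logarithmically in the number of sentences n, so large families of candidate integrity constraints can be screened against a small sample.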
Statistical Foundations for Default Reasoning
, 1993
Abstract

Cited by 49 (7 self)
We describe a new approach to default reasoning, based on a principle of indifference among possible worlds. We interpret default rules as extreme statistical statements, thus obtaining a knowledge base KB comprised of statistical and first-order statements. We then assign equal probability to all worlds consistent with KB in order to assign a degree of belief to a statement φ. The degree of belief can be used to decide whether to defeasibly conclude φ. Various natural patterns of reasoning, such as a preference for more specific defaults, indifference to irrelevant information, and the ability to combine independent pieces of evidence, turn out to follow naturally from this technique. Furthermore, our approach is not restricted to default reasoning; it supports a spectrum of reasoning, from quantitative to qualitative. It is also related to other systems for default reasoning. In particular, we show that the work of [Goldszmidt et al., 1990], which applies maximum entropy ideas t...
A Logic for Default Reasoning About Probabilities
, 1998
Abstract

Cited by 12 (4 self)
A logic is defined that allows one to express information about statistical probabilities and about degrees of belief in specific propositions. By interpreting the two types of probabilities in one common probability space, the semantics given are well suited to model the influence of statistical information on the formation of subjective beliefs. Cross-entropy minimization is a key element in these semantics, the use of which is justified by showing that the resulting logic exhibits some very reasonable properties.
Zero-One Laws for Modal Logic
 Annals of Pure and Applied Logic 69
, 1994
Abstract

Cited by 11 (1 self)
We show that a 0-1 law holds for propositional modal logic, both for structure validity and frame validity. In the case of structure validity, the result follows easily from the well-known 0-1 law for first-order logic. However, our proof gives considerably more information. It leads to an elegant axiomatization for almost-sure structure validity and to sharper complexity bounds. Since frame validity can be reduced to a Π^1_1 formula, the 0-1 law for frame validity helps delineate when 0-1 laws exist for second-order logics. A preliminary version of this paper appears in Proceedings of the Seventh Annual IEEE Symposium on Logic in Computer Science, 1992. This version is almost identical to one that appears in a special issue of Annals of Pure and Applied Logic (vol. 69, 1994, pp. 157-193) devoted to the papers of this conference. Part of the work of the first author was performed while he was on sabbatical at the University of Toronto. The work of the second author was com...
Asymptotic Conditional Probabilities: The Nonunary Case
 J. SYMBOLIC LOGIC
, 1993
Abstract

Cited by 10 (3 self)
Motivated by problems that arise in computing degrees of belief, we consider the problem of computing asymptotic conditional probabilities for first-order sentences. Given first-order sentences φ and θ, we consider the structures with domain {1, ..., N} that satisfy θ, and compute the fraction of them in which φ is true. We then consider what happens to this fraction as N gets large. This extends the work on 0-1 laws that considers the limiting probability of first-order sentences, by considering asymptotic conditional probabilities. As shown by Liogon'kii [Lio69], if there is a non-unary predicate symbol in the vocabulary, asymptotic conditional probabilities do not always exist. We extend this result to show that asymptotic conditional probabilities do not always exist for any reasonable notion of limit. Liogon'kii also showed that the problem of deciding whether the limit exists is undecidable. We analyze the complexity of three problems with respect to this limit: deciding whether it is well-defined, whether it exists, and whether it lies in some nontrivial interval. Matching upper and lower bounds are given for all three problems, showing them to be highly undecidable.
Asymptotic Conditional Probabilities: The Unary Case
, 1993
Abstract

Cited by 10 (3 self)
Motivated by problems that arise in computing degrees of belief, we consider the problem of computing asymptotic conditional probabilities for first-order sentences. Given first-order sentences φ and θ, we consider the structures with domain {1, ..., N} that satisfy θ, and compute the fraction of them in which φ is true. We then consider what happens to this fraction as N gets large. This extends the work on 0-1 laws that considers the limiting probability of first-order sentences, by considering asymptotic conditional probabilities. As shown by Liogon'kii [31] and Grove, Halpern, and Koller [22], in the general case, asymptotic conditional probabilities do not always exist, and most questions relating to this issue are highly undecidable. These results, however, all depend on the assumption that θ can use a non-unary predicate symbol. Liogon'kii [31] shows that if we condition on formulas θ involving unary predicate symbols only (but no equality or constant symbols), then the asymptotic conditional probability does exist and can be effectively computed. This is the case even if we place no corresponding restrictions on φ. We extend this result here to the case where θ involves equality and constants. We show that the complexity of computing the limit depends on various factors, such as the depth of quantifier nesting, or whether the vocabulary is finite or infinite. We completely characterize the complexity of the problem in the different cases, and show related results for the associated approximation problem.
A logic for inductive probabilistic reasoning
 Synthese
, 2005
Abstract

Cited by 2 (0 self)
Inductive probabilistic reasoning is understood as the application of inference patterns that use statistical background information to assign (subjective) probabilities to single events. The simplest such inference pattern is direct inference: from "70% of As are Bs" and "a is an A" infer that a is a B with probability 0.7. Direct inference is generalized by Jeffrey's rule and the principle of cross-entropy minimization. To adequately formalize inductive probabilistic reasoning is an interesting topic for artificial intelligence, as an autonomous system acting in a complex environment may have to base its actions on a probabilistic model of its environment, and the probabilities needed to form this model can often be obtained by combining statistical background information with particular observations made, i.e. by inductive probabilistic reasoning. In this paper a formal framework for inductive probabilistic reasoning is developed: syntactically it consists of an extension of the language of first-order predicate logic that allows one to express statements about both statistical and subjective probabilities. Semantics for this representation language are developed that give rise to two distinct entailment relations: a relation |= that models strict, probabilistically valid inferences, and a relation |≈ that models inductive probabilistic inferences. The inductive entailment relation is obtained by implementing cross-entropy minimization in a preferred model semantics. A main objective of our approach is to ensure that for both entailment relations complete proof systems exist. This is achieved by allowing probability distributions in our semantic models that use nonstandard probability values. A number of results are presented that show that in several important aspects the resulting logic behaves just like a logic based on real-valued probabilities alone.
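Jeffrey's rule, named above as the first generalization of direct inference, can be sketched over a finite outcome space; direct inference then falls out as the degenerate case where one partition cell gets probability 1. The world encoding and the particular prior below are my own toy example, not the paper's formalism:

```python
def jeffrey_update(prior, partition, new_marginals):
    """Jeffrey's rule over a finite outcome space.

    prior: dict mapping worlds to probabilities.
    partition: list of sets of worlds (mutually exclusive, exhaustive).
    new_marginals: updated probability q_i for each partition cell E_i.
    Returns the posterior P'(w) = q_i * prior(w) / prior(E_i) for w in E_i,
    i.e. the distribution with the new marginals that preserves the
    conditional probabilities within each cell.
    """
    posterior = {}
    for cell, q in zip(partition, new_marginals):
        mass = sum(prior[w] for w in cell)
        for w in cell:
            posterior[w] = q * prior[w] / mass
    return posterior

# Direct inference as the degenerate case: learn "a is an A" with certainty.
# Worlds are (A-status, B-status) pairs; "70% of As are Bs" is encoded in
# the prior (P(B | A) = 0.35 / 0.50 = 0.7).
prior = {("A", "B"): 0.35, ("A", "notB"): 0.15,
         ("notA", "B"): 0.10, ("notA", "notB"): 0.40}
cells = [{("A", "B"), ("A", "notB")}, {("notA", "B"), ("notA", "notB")}]
post = jeffrey_update(prior, cells, [1.0, 0.0])
print(round(post[("A", "B")], 2))  # degree of belief that a is a B
```

Conditionalizing on "a is an A" transfers all mass to the A-cell while preserving the within-cell ratio, so the resulting degree of belief in "a is a B" is exactly the statistical fraction 0.7, as in the direct-inference pattern.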
Generating Degrees of Belief from Statistical Information: An Overview
, 1993
Abstract

Cited by 2 (2 self)
Consider an agent (or expert system) with a knowledge base KB that includes statistical information (such as "90% of patients with jaundice have hepatitis"), first-order information ("all patients with hepatitis have jaundice"), and default information ("patients with jaundice typically have a fever"). A doctor with such a KB may want to assign a degree of belief to an assertion φ such as "Eric has hepatitis". Since the actions the doctor takes may depend crucially on this degree of belief, we would like to specify a mechanism by which she can use her knowledge base to assign a degree of belief to φ in a principled manner. We have been investigating a number of techniques for doing so; in this paper we give an overview of one of them. The method, which we call the random worlds method, is a natural one: for any given domain size N, we consider the fraction of models satisfying φ among models of size N satisfying KB. If we do not know the domain size N, but know that it is large, we can approximate the degree of belief in φ given KB by taking the limit of this fraction as N goes to infinity. As we show, this approach has many desirable features. In particular, in many cases that arise in practice, the answers we get using this method provably match heuristic assumptions made in many standard AI systems.
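The random worlds recipe (fraction of size-N models of KB that satisfy φ, for growing N) can be made concrete by brute-force enumeration over a tiny unary vocabulary. This is my own toy instance with only the first-order part of the KB (no statistical statements), so the fraction is computable exactly:

```python
from itertools import product

def degree_of_belief(kb, phi, n):
    """Random-worlds sketch: the fraction of size-n models of kb
    that also satisfy phi.

    A model assigns each of the n domain elements a pair of booleans
    (has_jaundice, has_hepatitis); kb and phi are predicates over such
    a model. Brute force over all 4**n models, so keep n small.
    """
    models = [m for m in product(product([False, True], repeat=2), repeat=n)
              if kb(m)]
    return sum(phi(m) for m in models) / len(models)

# KB: every hepatitis patient has jaundice, and patient 0 has jaundice.
kb = lambda m: all(jau or not hep for jau, hep in m) and m[0][0]
phi = lambda m: m[0][1]  # "patient 0 has hepatitis"
for n in (1, 2, 3):
    print(n, degree_of_belief(kb, phi, n))
```

Here the fraction is 0.5 for every n: given only that patient 0 has jaundice, indifference over the consistent worlds splits the belief evenly between hepatitis and not. The interesting cases in the paper add statistical statements to KB, which skew this fraction toward the stated proportions in the limit.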