Results 1–10 of 348
An Analysis of First-Order Logics of Probability
Artificial Intelligence, 1990
Abstract

Cited by 316 (18 self)
: We consider two approaches to giving semantics to first-order logics of probability. The first approach puts a probability on the domain, and is appropriate for giving semantics to formulas involving statistical information such as "The probability that a randomly chosen bird flies is greater than .9." The second approach puts a probability on possible worlds, and is appropriate for giving semantics to formulas describing degrees of belief, such as "The probability that Tweety (a particular bird) flies is greater than .9." We show that the two approaches can be easily combined, allowing us to reason in a straightforward way about statistical information and degrees of belief. We then consider axiomatizing these logics. In general, it can be shown that no complete axiomatization is possible. We provide axiom systems that are sound and complete in cases where a complete axiomatization is possible, showing that they do allow us to capture a great deal of interesting reasoning about prob...
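In the notation usually associated with this line of work (a reconstruction for illustration, not a quotation from the paper), the two readings of the bird example can be written side by side:

```latex
% Type 1 (statistical): probability taken over a randomly chosen
% domain element x
w_x\bigl(\mathit{Flies}(x) \mid \mathit{Bird}(x)\bigr) > 0.9

% Type 2 (degree of belief): probability taken over possible worlds
w\bigl(\mathit{Flies}(\mathit{Tweety})\bigr) > 0.9
```

The first formula binds the variable x under the probability operator; the second applies the operator to a closed formula about a named individual, which is what allows the two kinds of statement to coexist in one combined logic.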
A concern for evidence and a phylogenetic hypothesis of relationships among Epicrates (Boidae)
1989
Abstract

Cited by 177 (5 self)
Abstract.—Character congruence, the principle of using all the relevant data, and character independence are important concepts in phylogenetic inference, because they relate directly to the evidence on which hypotheses are based. Taxonomic congruence, which is agreement among patterns of taxonomic relationships, is less important, because its connection to the underlying character evidence is indirect and often imperfect. Also, taxonomic congruence is difficult to justify, because of the arbitrariness involved in choosing a consensus method and index with which to estimate agreement. High levels of character congruence were observed among 89 biochemical and morphological synapomorphies scored on 10 species of Epicrates. Such agreement is consistent with the phylogenetic interpretation attached to the resulting hypothesis, which is a consensus of two equally parsimonious cladograms: (cenchria (angulifer (striatus ((chrysogaster, exsul) (inornatus, subflavus) (gracilis (fordii, monensis)))))). Relatively little (11.4%) of the character incongruence was due to the disparity between the biochemical and morphological data sets. Each of the clades in the consensus cladogram was confirmed by two or more unique and unreversed novelties, and six of the eight clades were corroborated by biochemical and morphological evidence. Such com
Relevance theory
Handbook of Pragmatics, 2004
Abstract

Cited by 168 (3 self)
This paper outlines the main assumptions of relevance theory (Sperber & Wilson 1985, 1995, 1998, 2002, Wilson & Sperber 2002), an inferential approach to pragmatics. Relevance theory is based on a definition of relevance and two principles of relevance: a Cognitive Principle (that human cognition is geared to the maximisation of relevance), and a Communicative Principle (that utterances create expectations of optimal relevance). We explain the motivation for these principles and illustrate their application to a variety of pragmatic problems. We end by considering the implications of this relevance-theoretic approach for the architecture of the mind.
Probabilistic Logic Programming
1992
Abstract

Cited by 159 (9 self)
Of all scientific investigations into reasoning with uncertainty and chance, probability theory is perhaps the best understood paradigm. Nevertheless, all studies conducted thus far into the semantics of quantitative logic programming (cf. van Emden [51], Fitting [18, 19, 20], Blair and Subrahmanian [5, 6, 49, 50], Kifer et al. [29, 30, 31]) have restricted themselves to non-probabilistic semantical characterizations. In this paper, we take a few steps towards rectifying this situation. We define a logic programming language that is syntactically similar to the annotated logics of [5, 6], but in which the truth values are interpreted probabilistically. A probabilistic model theory and fixpoint theory is developed for such programs. This probabilistic model theory satisfies the requirements proposed by Fenstad [16] for a function to be called probabilistic. The logical treatment of probabilities is complicated by two facts: first, that the connectives cannot be interpreted truth function...
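Because the connectives are not truth-functional, the probability of a conjunction or disjunction is only bounded, not determined, by the probabilities of its conjuncts. A minimal sketch of the interval combinations commonly used in such annotated frameworks (the Fréchet bounds, assuming nothing about dependence between events; illustrative, not the paper's exact calculus):

```python
def conj(a, b):
    """Probability interval for A AND B, given intervals for A and B,
    with no independence assumption (Frechet bounds)."""
    (la, ua), (lb, ub) = a, b
    return (max(0.0, la + lb - 1.0), min(ua, ub))


def disj(a, b):
    """Probability interval for A OR B under the same assumptions."""
    (la, ua), (lb, ub) = a, b
    return (max(la, lb), min(1.0, ua + ub))


# Two events whose probabilities are only known to lie in intervals:
print(conj((0.75, 0.9), (0.5, 0.8)))  # -> (0.25, 0.8)
print(disj((0.75, 0.9), (0.5, 0.8)))  # -> (0.75, 1.0)
```

Note that `conj` applied to two point probabilities does not return a point probability, which is exactly the sense in which the connectives fail to be truth-functional.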
Rationality and intelligence
Artificial Intelligence, 1997
Abstract

Cited by 106 (1 self)
The long-term goal of our field is the creation and understanding of intelligence. Productive research in AI, both practical and theoretical, benefits from a notion of intelligence that is precise enough to allow the cumulative development of robust systems and general results. This paper outlines a gradual evolution in our formal conception of intelligence that brings it closer to our informal conception and simultaneously reduces the gap between theory and practice. AI is a field in which the ultimate goal has often been somewhat ill-defined and subject to dispute. Some researchers aim to emulate human cognition, others aim at the creation of
Probabilistic Reasoning in Terminological Logics
1994
Abstract

Cited by 86 (5 self)
In this paper a probabilistic extension for terminological knowledge representation languages is defined. Two kinds of probabilistic statements are introduced: statements about conditional probabilities between concepts and statements expressing uncertain knowledge about a specific object. The usual model-theoretic semantics for terminological logics are extended to define interpretations for the resulting probabilistic language. It is our main objective to find an adequate modelling of the way the two kinds of probabilistic knowledge are combined in commonsense inferences of probabilistic statements. Cross entropy minimization is a technique that turns out to be very well suited for achieving this end. Terminological knowledge representation languages (concept languages, terminological logics) are used to describe hierarchies of concepts. While the expressive power of the various languages that have been defined (e.g. KL-ONE [BS85], ALC [SSS91]) varies greatly in that ...
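A toy illustration of the cross-entropy technique (a sketch under simplifying assumptions, not the paper's terminological machinery): given a prior over finitely many worlds and a single constraint fixing the probability of a concept's extension, the distribution minimizing cross entropy to the prior simply rescales the prior mass inside and outside the extension, which is Jeffrey conditionalization.

```python
def min_cross_entropy(prior, in_concept, target):
    """Distribution Q minimizing KL(Q || prior) subject to the
    constraint Q(concept) = target, where in_concept[i] says whether
    world i satisfies the concept.  The minimizer rescales the prior
    proportionally inside and outside the concept's extension."""
    mass_in = sum(p for p, c in zip(prior, in_concept) if c)
    mass_out = 1.0 - mass_in
    return [p * (target / mass_in if c else (1.0 - target) / mass_out)
            for p, c in zip(prior, in_concept)]


# Uniform prior over 4 worlds; worlds 0 and 1 satisfy the concept;
# a probabilistic statement fixes the concept's probability at 0.75:
q = min_cross_entropy([0.25] * 4, [True, True, False, False], 0.75)
print(q)  # -> [0.375, 0.375, 0.125, 0.125]
```

With several interacting constraints the minimizer no longer has this closed form and must be found iteratively, which is where the general cross-entropy machinery earns its keep.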
Minimum Description Length Induction, Bayesianism, and Kolmogorov Complexity
IEEE Transactions on Information Theory, 1998
Abstract

Cited by 82 (8 self)
The relationship between the Bayesian approach and the minimum description length approach is established. We sharpen and clarify the general modeling principles MDL and MML, abstracted as the ideal MDL principle and defined from Bayes's rule by means of Kolmogorov complexity. The basic condition under which the ideal principle should be applied is encapsulated as the Fundamental Inequality, which in broad terms states that the principle is valid when the data are random relative to every contemplated hypothesis, and these hypotheses are in turn random relative to the (universal) prior. Basically, the ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized. If we restrict the model class to the finite sets then application of the ideal principle turns into Kolmogorov's mi...
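The two-part code described in the abstract can be made concrete with a toy model class (a hedged sketch: the grid of Bernoulli parameters and the uniform prior over it are illustrative choices, not the paper's):

```python
import math


def mdl_select(data, hypotheses):
    """Pick the hypothesis minimizing the total two-part code length
    L(H) + L(D|H) = -log2 P(H) - log2 P(D|H), with a uniform prior
    over the finite list of candidate Bernoulli parameters."""
    model_cost = math.log2(len(hypotheses))  # -log2 P(H), same for all H
    best = None
    for theta in hypotheses:
        # -log2 P(D|H) for i.i.d. binary data under parameter theta
        data_cost = -sum(math.log2(theta if x else 1.0 - theta)
                         for x in data)
        total = model_cost + data_cost
        if best is None or total < best[0]:
            best = (total, theta)
    return best[1]


# 8 successes and 2 failures; candidate Bernoulli parameters:
print(mdl_select([1] * 8 + [0] * 2, [0.1, 0.3, 0.5, 0.7, 0.9]))  # -> 0.7
```

With a uniform prior the model cost is constant, so selection reduces to maximum likelihood over the grid; a non-uniform prior (e.g. penalizing finer grids) is what makes the trade-off between model cost and data cost visible.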
Probabilistic Deductive Databases
1994
Abstract

Cited by 72 (2 self)
Knowledge-base (KB) systems must typically deal with imperfection in knowledge, e.g. in the form of incompleteness, inconsistency, and uncertainty, to name a few. Currently KB system development is mainly based on expert system technology. Expert systems, through their support for rule-based programming, uncertainty, etc., offer a convenient framework for KB system development. But they require the user to be well versed in the low-level details of system implementation. The manner in which uncertainty is handled has little mathematical basis. There is no decent notion of query optimization, forcing the user to take responsibility for an efficient implementation of the KB system. We contend KB system development can and should take advantage of deductive database technology, which overcomes most of the above limitations. An important problem here is to extend deductive databases into providing a systematic basis for rule-based programming with imperfect knowledge. In this paper, we are interested in an extension handling probabilistic knowledge.
A preliminary report on a general theory of inductive inference
1960
Abstract

Cited by 64 (9 self)
Some preliminary work is presented on a very general new theory of inductive inference. The extrapolation of an ordered sequence of symbols is implemented by computing the a priori probabilities of various sequences of symbols. The a priori probability of a sequence is obtained by considering a universal Turing machine whose output is the sequence in question. An approximation to the a priori probability is given by the shortest input to the machine that will give the desired output. A more exact formulation is given, and it is made somewhat plausible that extrapolation probabilities obtained will be largely independent of just which universal Turing machine was used, providing that the sequence to be extrapolated has an adequate amount of information in it. Some examples are worked out to show the application of the method to specific problems. Applications of the method to curve fitting and other continuous problems are discussed to some extent. Some alternative
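In later notation (a modern rendering, not the 1960 report's own), the a priori probability of a sequence x and its shortest-program approximation read:

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-\ell(p)}
\qquad\text{and}\qquad
M(x) \;\approx\; 2^{-K(x)},
```

where U is a universal Turing machine, \ell(p) is the length of program p, and K(x) is the length of the shortest program that outputs x. The sum weights every program producing x by the probability of guessing its bits at random; the approximation keeps only the dominant term, the shortest such program, which is the quantity the abstract describes.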