On the Emergence of Reasons in Inductive Logic
 Journal of the IGPL
, 2001
Abstract

Cited by 2 (2 self)
We apply methods of abduction derived from propositional probabilistic reasoning to predicate probabilistic reasoning, in particular inductive logic, by treating finite predicate knowledge bases as potentially infinite propositional knowledge bases. It is shown that for a range of predicate knowledge bases (such as those typically associated with inductive reasoning) and several key propositional inference processes (in particular the Maximum Entropy Inference Process) this procedure is well defined, and furthermore yields an explanation for the validity of the induction in terms of 'reasons'. Keywords: Inductive Logic, Probabilistic Reasoning, Abduction, Maximum Entropy, Uncertain Reasoning. 1 Motivation Consider the following situation. I am sitting by a bend in a road and I start to wonder how likely it is that the next car which passes will skid on this bend. I have some knowledge which seems relevant; for example, I know that if there is ice on the road then there is a good chance of a skid, and similarly if the bend is unsigned, the camber adverse, etc. I possibly also have some knowledge of how likely it is that there is ice on the road, how likely it is that the bend is unsigned (possibly conditioned on the iciness of the road), etc. Notice that this is generic knowledge which applies equally to any potential passing car.
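The Maximum Entropy Inference Process named in this abstract can be illustrated on the skid example. The sketch below is a toy assumption, not the paper's own calculation: the numbers P(ice) = 0.3 and P(skid | ice) = 0.8 are invented for illustration. Among all distributions over the four propositional atoms consistent with these constraints, ME selects the one of greatest entropy, which spreads the unconstrained probability mass evenly.

```python
import math

# Hypothetical toy knowledge base for the skid example (the numbers 0.3
# and 0.8 are illustrative assumptions, not taken from the paper):
#   P(ice) = 0.3,  P(skid | ice) = 0.8
# Atoms over (ice, skid): p1=P(I,S), p2=P(I,~S), p3=P(~I,S), p4=P(~I,~S).
# The two constraints pin down p1 and p2 exactly:
p1 = 0.3 * 0.8          # P(ice & skid)     = 0.24
p2 = 0.3 - p1           # P(ice & ~skid)    = 0.06

def entropy(ps):
    return -sum(p * math.log(p) for p in ps if p > 0)

# Maximum Entropy distributes the remaining mass 0.7 over p3 and p4.
# A coarse one-dimensional search over p3 is enough to see the maximiser:
best_p3 = max((i / 10000 * 0.7 for i in range(1, 10000)),
              key=lambda p3: entropy([p1, p2, p3, 0.7 - p3]))
print(round(best_p3, 3))       # 0.35: ME splits the unconstrained mass evenly
print(round(p1 + best_p3, 3))  # 0.59: the ME probability of a skid
```

The even split of the unconstrained mass is exactly the kind of behaviour the paper explains in terms of 'reasons': absent any reason to favour skid over no-skid when there is no ice, ME treats the two symmetrically.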
Generating Degrees of Belief from Statistical Information: An Overview
, 1993
Abstract

Cited by 2 (2 self)
Consider an agent (or expert system) with a knowledge base KB that includes statistical information (such as "90% of patients with jaundice have hepatitis"), first-order information ("all patients with hepatitis have jaundice"), and default information ("patients with jaundice typically have a fever"). A doctor with such a KB may want to assign a degree of belief to an assertion φ such as "Eric has hepatitis". Since the actions the doctor takes may depend crucially on this degree of belief, we would like to specify a mechanism by which she can use her knowledge base to assign a degree of belief to φ in a principled manner. We have been investigating a number of techniques for doing so; in this paper we give an overview of one of them. The method, which we call the random worlds method, is a natural one: For any given domain size N, we consider the fraction of models satisfying φ among models of size N satisfying KB. If we do not know the domain size N, but know that it is large, we can approximate the degree of belief in φ given KB by taking the limit of this fraction as N goes to infinity. As we show, this approach has many desirable features. In particular, in many cases that arise in practice, the answers we get using this method provably match heuristic assumptions made in many standard AI systems.
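The core random-worlds computation can be sketched for a toy case. The setup below is an assumption for illustration only (two unary predicates and the hard rule; the statistical and default statements from the abstract are omitted): it brute-forces, for small N, the fraction of size-N models of KB that satisfy φ.

```python
from itertools import product

# Toy random-worlds sketch (assumed setup, not the paper's own example):
# domain = {0, ..., N-1}, unary predicates J (jaundice) and H (hepatitis),
# KB = "every H is a J", query phi = H(eric), with eric = element 0.
def degree_of_belief(N):
    states = list(product([False, True], repeat=2))  # (J(x), H(x)) per element
    kb_models = phi_models = 0
    for world in product(states, repeat=N):          # one state per element
        if all(j or not h for (j, h) in world):      # KB: H(x) -> J(x)
            kb_models += 1
            if world[0][1]:                          # phi: H(eric)
                phi_models += 1
    return phi_models / kb_models

# Each element independently has 3 legal states, one of which makes eric
# an H, so the fraction is 1/3 for every N and hence also in the limit:
print([round(degree_of_belief(n), 4) for n in (1, 2, 3, 4)])
# prints [0.3333, 0.3333, 0.3333, 0.3333]
```

With only hard first-order constraints the limit is just uniform counting over legal states; the interesting cases in the paper are those where statistical statements in KB skew this count.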
‘Plausibilities of plausibilities’: an approach through circumstances. Being part I of “From ‘plausibilities of plausibilities’ to state-assignment methods” (2006), eprint arXiv:quant-ph/0607111
Abstract

Cited by 2 (2 self)
Probability-like parameters appearing in some statistical models, and their prior distributions, are reinterpreted through the notion of ‘circumstance’, a term which stands for any piece of knowledge that is useful in assigning a probability and that satisfies some additional logical properties. The idea, which can be traced to Laplace and Jaynes, is that the usual inferential reasonings about the probability-like parameters of a statistical model can be conceived as reasonings about equivalence classes of ‘circumstances’ — viz., real or hypothetical pieces of knowledge, like e.g. physical hypotheses, that are useful in assigning a probability and satisfy some additional logical properties — that are uniquely indexed by the probability distributions they lead to. PACS numbers: 02.50.Cw, 02.50.Tt, 01.70.+w MSC numbers: 03B48, 62F15, 60A05 If you can’t join ’em, join ’em together.
The Laplace-Jaynes approach to induction. Being part II of “From ‘plausibilities of plausibilities’ to state-assignment methods”
, 2007
Abstract

Cited by 1 (1 self)
An approach to induction is presented, based on the idea of analysing the context of a given problem into ‘circumstances’. This approach, fully Bayesian in form and meaning, provides a complement or in some cases an alternative to that based on de Finetti’s representation theorem and on the notion of infinite exchangeability. In particular, it gives an alternative interpretation of those formulae that apparently involve ‘unknown probabilities’ or ‘propensities’. Various advantages and applications of the presented approach are discussed, especially in comparison to that based on exchangeability. Generalisations are also discussed. PACS numbers: 02.50.Cw, 02.50.Tt, 01.70.+w MSC numbers: 03B48, 60G09, 60A05 Note, to head off a common misconception, that this is in no way to introduce a “probability of a probability”. It is simply convenient to index our hypotheses by parameters [...] chosen to be numerically equal to the probabilities assigned by those hypotheses; this avoids a doubling of our notation. We could easily restate everything so that the misconception could not arise; it would only be rather clumsy notationally and tedious verbally.
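The indexing of hypotheses by the probabilities they assign, as in the quoted remark, can be made concrete with a small discrete sketch. Everything here is an invented illustration (three 'circumstances' and a uniform prior are assumptions, not from the paper): the index q of each circumstance is just a label that happens to equal the probability that circumstance assigns, and induction is ordinary Bayesian updating over circumstances.

```python
from fractions import Fraction as F

# Three hypothetical circumstances, each indexed by the probability it
# assigns to a car skidding. The index is a label, not a "probability of
# a probability". All numbers are illustrative assumptions.
qs = [F(1, 10), F(1, 2), F(9, 10)]   # probability each circumstance assigns
prior = [F(1, 3)] * 3                # uniform prior over circumstances

def update(belief, skidded):
    # Bayes: weight each circumstance by the likelihood it gives the outcome.
    like = [q if skidded else 1 - q for q in qs]
    post = [b * l for b, l in zip(belief, like)]
    z = sum(post)
    return [p / z for p in post]

post = prior
for skidded in [True, True, False]:  # observe: skid, skid, no skid
    post = update(post, skidded)

# Predictive probability of a skid next time: mixture over circumstances.
pred = sum(p * q for p, q in zip(post, qs))
print(pred)                          # prints 1363/2150 (about 0.634)
```

Two observed skids out of three shift belief toward the middle and high circumstances, and the predictive probability rises above the prior mean of 1/2; this is the Laplace-Jaynes reading of the usual "updating an unknown probability".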
A Characterization of the Language Invariant Families satisfying Spectrum Exchangeability in Polyadic Inductive Logic
, 2008
Abstract

Cited by 1 (1 self)
A necessary and sufficient condition, in terms of a de Finetti style representation, is given for a probability function in Polyadic Inductive Logic to be a member of a Language Invariant family satisfying Spectrum Exchangeability. This theorem is then considered in relation to the unary Carnap and Nix-Paris Continua.
Some Limit Theorems for ME, MD and ...
Abstract
We apply methods of abduction derived from propositional probabilistic reasoning to predicate probabilistic reasoning, in particular inductive logic, by treating finite predicate knowledge bases as potentially infinite propositional knowledge bases. Full and detailed proofs are given to show that for a range of predicate knowledge bases (such as those typically associated with inductive reasoning) and several key propositional inference processes (in particular the Maximum Entropy Inference Process) this procedure is well defined, and furthermore yields an explanation for the validity of the induction in terms of 'reasons'. Motivation Consider the following situation. I am sitting by a bend in a road and I start to wonder how likely it is that the next car which passes will skid on this bend. I have some knowledge which seems relevant; for example, I know that if there is ice on the road then there is a good chance of a skid, and similarly if the bend is unsigned, the camber adverse, etc. I possibly also have some knowledge of how likely it is that there is ice on the road, how likely it is that the bend is unsigned (possibly conditioned on the iciness of the road), etc. Notice that this is generic knowledge which applies equally to any potential passing car. [Supported by an EPSRC Research Associateship.] [Supported by an Egyptian Government Scholarship, File No. 7083.] Armed with this knowledge base I may now form some opinion as to the likely outcome when the next car passes. Subsequently several cars pass by. I note the results and in consequence possibly revise my opinion as to the likelihood of the next car through skidding. Clearly we are all capable of forming opinions, or beliefs, in this way, but is it possible to formalize this inductive process, this pro...
Dirichlet Mixtures for Query Estimation in Information Retrieval
, 2005
Abstract
Treated as small samples of text, user queries require smoothing to better estimate the probabilities of their true model. Traditional techniques to perform this smoothing include automatic query expansion and local feedback. This paper applies the bioinformatics smoothing technique, Dirichlet mixtures, to the task of query estimation. We discuss Dirichlet mixtures' relation to relevance models, probabilistic latent semantic indexing, and other information retrieval techniques. We describe how Dirichlet mixtures give insight into the value of retaining the original query in query expansion. On the task of ad hoc retrieval, query estimation by Dirichlet mixtures generally does not perform well, but aspects of its behavior show promise. Experiments where the original query is mixed with the models estimated by relevance models and Dirichlet mixtures confirm that query estimation methods should not fully discount the prior information held in a query.
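The mechanics of Dirichlet-mixture smoothing can be sketched in miniature. Everything below is an invented toy (a four-word vocabulary and hand-written two-component pseudo-count vectors, not the paper's trained mixtures): each component is a Dirichlet over term probabilities, the query's counts determine each component's posterior responsibility via the Dirichlet-multinomial likelihood, and the smoothed query model mixes the per-component posterior-mean estimates.

```python
import math
from collections import Counter

# Toy two-component Dirichlet mixture (made-up alphas, for illustration only).
VOCAB = ["jaguar", "car", "cat", "speed"]
components = [  # (mixture weight, Dirichlet pseudo-count vector)
    (0.5, {"jaguar": 2.0, "car": 4.0, "cat": 0.2, "speed": 3.0}),  # "autos"
    (0.5, {"jaguar": 2.0, "car": 0.2, "cat": 4.0, "speed": 0.5}),  # "animals"
]

def log_dirichlet_multinomial(counts, alpha):
    # log P(counts | alpha) under the Dirichlet-compound-multinomial.
    n, a0 = sum(counts.values()), sum(alpha.values())
    out = math.lgamma(a0) - math.lgamma(a0 + n)
    for w, c in counts.items():
        out += math.lgamma(alpha[w] + c) - math.lgamma(alpha[w])
    return out

def smoothed_query_model(query):
    counts = Counter(query)
    # Posterior responsibility of each component for this query.
    logs = [math.log(w) + log_dirichlet_multinomial(counts, a)
            for w, a in components]
    m = max(logs)
    resp = [math.exp(l - m) for l in logs]
    z = sum(resp)
    resp = [r / z for r in resp]
    # Mix the per-component posterior-mean term estimates.
    n = sum(counts.values())
    return {w: sum(r * (counts[w] + a[w]) / (n + sum(a.values()))
                   for r, (_, a) in zip(resp, components))
            for w in VOCAB}

model = smoothed_query_model(["jaguar", "speed"])
# The query looks like the "autos" component, so the smoothed model
# should now prefer "car" to "cat" even though neither was typed:
print(model["car"] > model["cat"])   # prints True
```

This is the sense in which the mixture "expands" a query: mass flows to untyped terms that co-occur with the query terms under the responsible component, while the original query counts are retained in the posterior mean.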