Results 1–10 of 13
Learning Stochastic Logic Programs
, 2000
"... Stochastic Logic Programs (SLPs) have been shown to be a generalisation of Hidden Markov Models (HMMs), stochastic contextfree grammars, and directed Bayes' nets. A stochastic logic program consists of a set of labelled clauses p:C where p is in the interval [0,1] and C is a firstorder r ..."
Abstract

Cited by 1163 (77 self)
Stochastic Logic Programs (SLPs) have been shown to be a generalisation of Hidden Markov Models (HMMs), stochastic context-free grammars, and directed Bayes' nets. A stochastic logic program consists of a set of labelled clauses p:C where p is in the interval [0,1] and C is a first-order range-restricted definite clause. This paper summarises the syntax, distributional semantics and proof techniques for SLPs and then discusses how a standard Inductive Logic Programming (ILP) system, Progol, has been modified to support learning of SLPs. The resulting system 1) finds an SLP with uniform probability labels on each definition and near-maximal Bayes posterior probability and then 2) alters the probability labels to further increase the posterior probability. Stage 1) is implemented within CProgol4.5, which differs from previous versions of Progol by allowing user-defined evaluation functions written in Prolog. It is shown that maximising the Bayesian posterior function involves finding SLPs with short derivations of the examples. Search pruning with the Bayesian evaluation function is carried out in the same way as in previous versions of CProgol. The system is demonstrated with worked examples involving the learning of probability distributions over sequences as well as the learning of simple forms of uncertain knowledge.
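The labelled-clause representation p:C described in this abstract can be illustrated with a toy sampler. The sketch below is not the paper's CProgol4.5 implementation; the predicate name, clause bodies, and probability labels are all invented for illustration. As in the SLP definition, the labels on each predicate's clauses sum to 1, and a derivation is sampled by repeatedly choosing a clause with probability p:

```python
import random

# Hypothetical mini-SLP over lists of symbols (labels invented for illustration).
# Each predicate maps to labelled clauses (p, body); the labels sum to 1.
SLP = {
    "s": [
        (0.4, []),           # 0.4 : s([]).
        (0.3, ["a", "s"]),   # 0.3 : s([a|T]) :- s(T).
        (0.3, ["b", "s"]),   # 0.3 : s([b|T]) :- s(T).
    ],
}

def sample(pred, rng):
    """Sample one derivation of `pred`, returning the emitted symbols."""
    r, acc = rng.random(), 0.0
    for p, body in SLP[pred]:
        acc += p
        if r < acc:
            out = []
            for item in body:
                # Predicates recurse; anything else is a terminal symbol.
                out.extend(sample(item, rng) if item in SLP else [item])
            return out
    return []  # guard against floating-point rounding at the boundary

print(sample("s", random.Random(7)))  # a finite string of 'a'/'b' symbols
```

Repeated sampling draws strings from the distribution the program induces, which is the sense in which an SLP generalises a stochastic context-free grammar.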
The plurality of Bayesian measures of confirmation and the problem of measure sensitivity
 Philosophy of Science 66 (Proceedings), S362–S378
, 1999
"... Contemporary Bayesian confirmation theorists measure degree of (incremental) confirmation using a variety of nonequivalent relevance measures. As a result, a great many of the arguments surrounding quantitative Bayesian confirmation theory are implicitly sensitive to choice of measure of confirmati ..."
Abstract

Cited by 40 (12 self)
Contemporary Bayesian confirmation theorists measure degree of (incremental) confirmation using a variety of non-equivalent relevance measures. As a result, a great many of the arguments surrounding quantitative Bayesian confirmation theory are implicitly sensitive to the choice of measure of confirmation. Such arguments are enthymematic, since they tacitly presuppose that certain relevance measures should be used (for various purposes) rather than other relevance measures that have been proposed and defended in the philosophical literature. I present a survey of this pervasive class of Bayesian confirmation-theoretic enthymemes, and a brief analysis of some recent attempts to resolve the problem of measure sensitivity.

1 Preliminaries. 1.1 Terminology, Notation, and Basic Assumptions. The present paper is concerned with the degree of incremental confirmation provided by evidential propositions E for hypotheses under test H, given background knowledge K, according to relevance measures of degree of confirmation c. We say that c is a relevance measure of degree of confirmation if and only if c satisfies the following constraints, in cases where E confirms, disconfirms, or is confirmationally irrelevant to H, given background knowledge K.
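The measure-sensitivity problem this abstract describes can be made concrete with two of the standard relevance measures: the difference measure d(H, E) = P(H|E) − P(H) and the log-ratio measure r(H, E) = log[P(H|E) / P(H)]. The probability values below are invented for illustration; the point is that the two measures rank the same pair of cases in opposite orders:

```python
import math

def d(p_h, p_h_given_e):
    """Difference measure: P(H|E) - P(H)."""
    return p_h_given_e - p_h

def r(p_h, p_h_given_e):
    """Log-ratio measure: log[P(H|E) / P(H)]."""
    return math.log(p_h_given_e / p_h)

# Two hypothetical cases (prior, posterior):
case1 = (0.5, 0.9)   # modest prior, large absolute boost
case2 = (0.1, 0.3)   # small prior, large relative boost

print(d(*case1) > d(*case2))  # True:  d ranks case1 above case2
print(r(*case1) > r(*case2))  # False: r ranks case2 above case1
```

An argument that tacitly assumes one ordering is therefore enthymematic in exactly the sense the paper describes: it presupposes d rather than r (or vice versa).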
Distinguishing Exceptions from Noise in Non-Monotonic Learning

, 1996
"... It is important for a learning program to have a reliable method of deciding whether to treat errors as noise or to include them as exceptions within a growing firstorder theory. We explore the use of an informationtheoretic measure to decide this problem within the nonmonotonic learning frame ..."
Abstract

Cited by 27 (2 self)
It is important for a learning program to have a reliable method of deciding whether to treat errors as noise or to include them as exceptions within a growing first-order theory. We explore the use of an information-theoretic measure to decide this problem within the non-monotonic learning framework defined by Closed-World Specialisation. The approach adopted uses a model that consists of a reference Turing machine which accepts an encoding of a theory and proofs on its input tape and generates the observed data on the output tape. Within this model, the theory is said to "compress" the data if the length of the input tape is shorter than that of the output tape. Data found to be incompressible are deemed to be "noise".
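The input-tape/output-tape comparison can be imitated crudely with an off-the-shelf compressor standing in for the reference Turing machine. This substitution is an assumption of the sketch, not the paper's encoding scheme; it only illustrates the decision rule that incompressible data is classed as noise:

```python
import os
import zlib

def compresses(data: bytes) -> bool:
    """Crude stand-in for the reference-Turing-machine test: the data is
    "compressed" if its zlib encoding (the input tape) is shorter than
    the raw data itself (the output tape)."""
    return len(zlib.compress(data, 9)) < len(data)

patterned = b"ab" * 200    # regular data: a short "theory" regenerates it
noisy = os.urandom(400)    # random data: incompressible, hence "noise"

print(compresses(patterned))  # True
print(compresses(noisy))      # False (with overwhelming probability)
```

Under this rule, regular observations are worth encoding as part of the theory (exceptions), while random ones are discarded as noise.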
On Bayesian Measures of Evidential Support: Theoretical and Empirical Issues*
, 2006
"... Epistemologists and philosophers of science have often attempted to express formally the impact of a piece of evidence on the credibility of a hypothesis. In this paper we will focus on the Bayesian approach to evidential support. We will propose a new formal treatment of the notion of degree of con ..."
Abstract

Cited by 21 (3 self)
Epistemologists and philosophers of science have often attempted to express formally the impact of a piece of evidence on the credibility of a hypothesis. In this paper we will focus on the Bayesian approach to evidential support. We will propose a new formal treatment of the notion of degree of confirmation and we will argue that it overcomes some limitations of the currently available approaches on two grounds: (i) a theoretical analysis of the confirmation relation seen as an extension of logical deduction and (ii) an empirical comparison of competing measures in an experimental inquiry concerning inductive reasoning in a probabilistic setting. 1. Rival Bayesian Measures of Confirmation. Judgments concerning the support that a piece of information brings to a hypothesis are commonly required in scientific research as well as in other domains (medicine, law), and a major aim of a theory of inductive reasoning is to provide a proper foundation for such judgments. Within the Bayesian approach to inductive reasoning, an attempt to measure degrees of confirmation, or evidential support, should reflect, and extend, a basic qualitative view of confirmation, labeled the “clas...
The Justification of Logical Theories based on Data Compression
 MACHINE INTELLIGENCE 13
, 1994
"... Nondemonstrative or inductive reasoning is a crucial component in the skills of a learner. A leading candidate for this form of reasoning involves the automatic formation of hypotheses. Initial successes in the construction of propositional theories have now been followed by algorithms that attempt ..."
Abstract

Cited by 7 (0 self)
Non-demonstrative or inductive reasoning is a crucial component in the skills of a learner. A leading candidate for this form of reasoning involves the automatic formation of hypotheses. Initial successes in the construction of propositional theories have now been followed by algorithms that attempt to generalise sentences in the predicate calculus. An important defect in these new-generation systems is the lack of a clear model for theory justification. In this paper we describe a method of evaluating the significance of a hypothesis based on the degree to which it allows compression of the observed data with respect to prior knowledge. This can be measured by comparing the lengths of the input and output tapes of a reference Turing machine which will generate the examples from the hypothesis and a set of derivational proofs. The model extends an earlier approach of Muggleton by allowing for noise. The truth values of noisy instances are switched by making use of correction codes. The utility of compression as a significance measure is evaluated empirically in three independent domains. In particular, the results show that the existence of compression distinguishes a larger number of significant clauses than other significance tests. The method also appears to distinguish noise as incompressible data.
On argument strength
 To appear in F. Zenker (ed.), Bayesian Argumentation. Berlin, Heidelberg: Synthese Library (Springer).
"... Everyday life reasoning and argumentation is defeasible and uncertain. I present a probability logic framework to rationally reconstruct everyday life reasoning and argumentation. Coherence in the sense of De Finetti is used as the basic rationality norm. I discuss two basic classes of approaches to ..."
Abstract

Cited by 1 (0 self)
Everyday-life reasoning and argumentation are defeasible and uncertain. I present a probability logic framework to rationally reconstruct everyday-life reasoning and argumentation. Coherence in the sense of de Finetti is used as the basic rationality norm. I discuss two basic classes of approaches to constructing measures of argument strength. The first class imposes a probabilistic relation between the premises and the conclusion. The second class imposes a deductive relation. I argue for the second class, as the first class is problematic if the arguments involve conditionals. I present a measure of argument strength that allows for dealing explicitly with uncertain conditionals in the premise set.
The Babelism about Induction and Abduction
 Proc. ECAI'96 Workshop on Abductive and Inductive Reasoning
"... . The concepts of inductive and abductive reasoning lead to many open discussions. Some of them arise from confusions in the use of the terms `induction' and `abduction' that are due to the existence of numerous conflicting definitions. While both induction and abduction aim to derive hypo ..."
Abstract

Cited by 1 (0 self)
The concepts of inductive and abductive reasoning lead to many open discussions. Some of them arise from confusions in the use of the terms 'induction' and 'abduction' that are due to the existence of numerous conflicting definitions. While both induction and abduction aim to derive hypotheses, such hypotheses are different in the general case. In this position paper, we show how difficult it is to reach a consensus on the relations between induction and abduction. Our contribution consists in improving the characterization of induction and abduction within a particular class.

1 INTRODUCTION. The characterization of inductive and abductive reasoning has led, and continues to lead, to numerous debates. Many works are based on these forms of reasoning, for example in machine learning (e.g. [7, 11]), inductive logic programming (see [14] for a survey), abductive logic programming (e.g. [16]), or the resolution of diagnosis problems (see [4, 17, 3] for various approaches to abduction). However many of these works just...
Branden Fitelson, Studies in Bayesian Confirmation Theory: Some Bayesian Background I
"... • Orthodox Bayesianism (i.e., Bayesian epistemology) assumes that the degrees of belief (or credence) of rational agents are (Kolmogorov [29]) probabilities. • Pra(H  K) denotes an (rational) agent a’s degree of credence in H, given the corpus K of background knowledge/evidence (called a’s “prior ” ..."
Abstract
• Orthodox Bayesianism (i.e., Bayesian epistemology) assumes that the degrees of belief (or credences) of rational agents are (Kolmogorov [29]) probabilities.
• Pra(H | K) denotes a (rational) agent a's degree of credence in H, given the corpus K of background knowledge/evidence (called a's “prior” for H).
• Pra(H | E & K) denotes a's degree of credence in H (relative to K) given that (or on a's supposition that) E. This is also the agent's degree of belief in H (relative to K) upon learning E (called a's “posterior” for H, on E, given K).
∗ Credences are Kolmogorov [21], [27], [12] probabilities [50], on crisp sets [54].
∗ Agents learn (with certainty [26]) via conditionalization [32], [33].
∗ “Priors” (∴ Bayesianism itself) are subjective [49], [47], [34], [30], [6].
• I will bracket all of these issues. The problem I'm discussing only gets worse if Bayesianism is made more sophisticated along any of these dimensions!
• For simplicity, I will assume there is a single rational Bayesian probability function Pr (and I'll drop the subscript “a” and the background corpus “| K”).
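The move from a prior Pr(H) to a posterior Pr(H | E) via conditionalization, as summarised in the bullet points above, is an application of Bayes' theorem with Pr(E) expanded by the law of total probability. A minimal sketch, with probability values invented for illustration:

```python
def conditionalize(prior_h, lik_e_given_h, lik_e_given_not_h):
    """Posterior credence Pr(H | E) obtained by conditionalizing on E:
    Pr(H | E) = Pr(E | H) Pr(H) / Pr(E),
    where Pr(E) = Pr(E | H) Pr(H) + Pr(E | not-H) Pr(not-H)."""
    p_e = lik_e_given_h * prior_h + lik_e_given_not_h * (1 - prior_h)
    return lik_e_given_h * prior_h / p_e

# Illustrative numbers: prior 0.3, E twice as likely under H as under not-H.
posterior = conditionalize(0.3, 0.8, 0.4)
print(round(posterior, 4))  # 0.4615
```

Note the limiting case: if E is equally likely under H and not-H, the posterior equals the prior, i.e. E is confirmationally irrelevant to H.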
Does a piece of evidence E support a hypothesis H equally well as H supports E?
"... Does a piece of evidence E support a hypothesis H equally well as E undermines, or countersupports, the negation of H (¬H)? ..."
Abstract
Does a piece of evidence E support a hypothesis H equally well as E undermines, or countersupports, the negation of H (¬H)?