Results 1–10 of 11
Learning Stochastic Logic Programs
, 2000
Abstract

Cited by 1057 (71 self)
Stochastic Logic Programs (SLPs) have been shown to be a generalisation of Hidden Markov Models (HMMs), stochastic context-free grammars, and directed Bayes' nets. A stochastic logic program consists of a set of labelled clauses p:C where p is in the interval [0,1] and C is a first-order range-restricted definite clause. This paper summarises the syntax, distributional semantics and proof techniques for SLPs and then discusses how a standard Inductive Logic Programming (ILP) system, Progol, has been modified to support learning of SLPs. The resulting system 1) finds an SLP with uniform probability labels on each definition and near-maximal Bayes posterior probability and then 2) alters the probability labels to further increase the posterior probability. Stage 1) is implemented within CProgol4.5, which differs from previous versions of Progol by allowing user-defined evaluation functions written in Prolog. It is shown that maximising the Bayesian posterior function involves finding SLPs with short derivations of the examples. Search pruning with the Bayesian evaluation function is carried out in the same way as in previous versions of CProgol. The system is demonstrated with worked examples involving the learning of probability distributions over sequences as well as the learning of simple forms of uncertain knowledge.
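The labelled-clause representation described in this abstract lends itself to a small illustration. The following is a minimal sketch of sampling from the distribution an SLP defines, assuming a toy program over coin sequences; the clause labels and the dictionary encoding are invented for illustration and are not Progol's actual representation.

```python
import random

# A toy stochastic logic program for sequences of coin outcomes.
# Each predicate maps to (label, clause) pairs whose labels sum to 1:
#   0.30: s([]).    0.35: s([h|T]) :- s(T).    0.35: s([t|T]) :- s(T).
SLP = {
    "s": [
        (0.30, ("stop", [])),      # s([]).
        (0.35, ("h", ["s"])),      # s([h|T]) :- s(T).
        (0.35, ("t", ["s"])),      # s([t|T]) :- s(T).
    ]
}

def sample(pred, rng=random):
    """Sample one sequence by repeatedly choosing a clause for the
    current predicate in proportion to its probability label."""
    out = []
    while True:
        probs, clauses = zip(*SLP[pred])
        symbol, body = rng.choices(clauses, weights=probs, k=1)[0]
        if symbol == "stop":
            return out
        out.append(symbol)
        pred = body[0]

print(sample("s"))  # e.g. a short list of 'h'/'t' outcomes
```

Each complete derivation corresponds to one sampled sequence, which is the sense in which an SLP defines a distribution over sequences.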
The plurality of Bayesian measures of confirmation and the problem of measure sensitivity
 Philosophy of Science 66 (Proceedings), S362–S378
, 1999
Abstract

Cited by 32 (11 self)
Contemporary Bayesian confirmation theorists measure degree of (incremental) confirmation using a variety of non-equivalent relevance measures. As a result, a great many of the arguments surrounding quantitative Bayesian confirmation theory are implicitly sensitive to the choice of measure of confirmation. Such arguments are enthymematic, since they tacitly presuppose that certain relevance measures should be used (for various purposes) rather than other relevance measures that have been proposed and defended in the philosophical literature. I present a survey of this pervasive class of Bayesian confirmation-theoretic enthymemes, and a brief analysis of some recent attempts to resolve the problem of measure sensitivity. The present paper is concerned with the degree of incremental confirmation provided by evidential propositions E for hypotheses under test H, given background knowledge K, according to relevance measures of degree of confirmation c. We say that c is a relevance measure of degree of confirmation if and only if c satisfies the following constraints, in cases where E confirms, disconfirms, or is confirmationally irrelevant to H, given background knowledge K.
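The measure-sensitivity point can be made concrete. Below is a small sketch using two relevance measures commonly discussed in this literature, the difference measure and the log-ratio measure; the probability numbers are invented purely to exhibit a ranking reversal between the two.

```python
from math import log

def d(p_h, p_h_given_e):
    """Difference measure: P(H|E) - P(H)."""
    return p_h_given_e - p_h

def r(p_h, p_h_given_e):
    """Log-ratio measure: log[P(H|E) / P(H)]."""
    return log(p_h_given_e / p_h)

# Two hypothetical evidence-hypothesis pairs (numbers invented):
case1 = (0.5, 0.9)    # P(H1) = 0.5,  P(H1|E) = 0.9
case2 = (0.01, 0.1)   # P(H2) = 0.01, P(H2|E) = 0.1

print(d(*case1) > d(*case2))   # True:  d ranks case 1 as better confirmed
print(r(*case1) > r(*case2))   # False: r ranks case 2 as better confirmed
```

Any argument that depends on which case counts as "better confirmed" is therefore sensitive to the choice between d and r, which is exactly the enthymematic presupposition the abstract describes.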
Distinguishing Exceptions from Noise in Non-Monotonic Learning

, 1996
Abstract

Cited by 24 (4 self)
It is important for a learning program to have a reliable method of deciding whether to treat errors as noise or to include them as exceptions within a growing first-order theory. We explore the use of an information-theoretic measure to decide this problem within the non-monotonic learning framework defined by Closed-World Specialisation. The approach adopted uses a model that consists of a reference Turing machine which accepts an encoding of a theory and proofs on its input tape and generates the observed data on the output tape. Within this model, the theory is said to "compress" the data if the length of the input tape is shorter than that of the output tape. Data found to be incompressible are deemed to be "noise".
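The compression test sketched in this abstract can be illustrated with a rough MDL-style calculation: data are "compressed" by a theory if encoding the theory plus the per-example derivation choices is shorter than encoding the raw examples. The bit-cost functions below are illustrative stand-ins, not the paper's actual reference-Turing-machine encoding.

```python
from math import log2

def raw_cost(examples, alphabet_size):
    """Bits to write the examples directly on the output tape."""
    return sum(len(e) * log2(alphabet_size) for e in examples)

def theory_cost(theory_bits, proof_choices, branching):
    """Bits for the theory plus one derivation per example, where each
    derivation makes proof_choices[i] choices among `branching` clauses."""
    return theory_bits + sum(n * log2(branching) for n in proof_choices)

examples = ["abab", "ababab", "ab"]
raw = raw_cost(examples, alphabet_size=4)
# Suppose (hypothetically) the theory "strings are repetitions of ab"
# costs 10 bits and each proof needs one binary choice per repetition:
compressed = theory_cost(10, proof_choices=[2, 3, 1], branching=2)
print(compressed < raw)   # True: the theory compresses, so keep it
```

Examples whose derivations fail to shorten the input tape would, on this criterion, be left out of the theory as incompressible noise.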
On Bayesian Measures of Evidential Support: Theoretical and Empirical Issues*
, 2006
Abstract

Cited by 9 (2 self)
Epistemologists and philosophers of science have often attempted to express formally the impact of a piece of evidence on the credibility of a hypothesis. In this paper we will focus on the Bayesian approach to evidential support. We will propose a new formal treatment of the notion of degree of confirmation and we will argue that it overcomes some limitations of the currently available approaches on two grounds: (i) a theoretical analysis of the confirmation relation seen as an extension of logical deduction and (ii) an empirical comparison of competing measures in an experimental inquiry concerning inductive reasoning in a probabilistic setting. 1. Rival Bayesian Measures of Confirmation. Judgments concerning the support that a piece of information brings to a hypothesis are commonly required in scientific research as well as in other domains (medicine, law), and a major aim of a theory of inductive reasoning is to provide a proper foundation for such judgments. Within the Bayesian approach to inductive reasoning, an attempt to measure degrees of confirmation, or evidential support, should reflect, and extend, a basic qualitative view of confirmation, labeled the "clas...
The Justification of Logical Theories based on Data Compression
 MACHINE INTELLIGENCE 13
, 1994
Abstract

Cited by 6 (0 self)
Non-demonstrative or inductive reasoning is a crucial component in the skills of a learner. A leading candidate for this form of reasoning involves the automatic formation of hypotheses. Initial successes in the construction of propositional theories have now been followed by algorithms that attempt to generalise sentences in the predicate calculus. An important defect in these new-generation systems is the lack of a clear model for theory justification. In this paper we describe a method of evaluating the significance of a hypothesis based on the degree to which it allows compression of the observed data with respect to prior knowledge. This can be measured by comparing the lengths of the input and output tapes of a reference Turing machine which will generate the examples from the hypothesis and a set of derivational proofs. The model extends an earlier approach of Muggleton by allowing for noise. The truth values of noisy instances are switched by making use of correction codes. The utility of compression as a significance measure is evaluated empirically in three independent domains. In particular, the results show that the existence of compression distinguishes a larger number of significant clauses than other significance tests. The method also appears to distinguish noise as incompressible data.
The Babelism about Induction and Abduction
 Proc. ECAI'96 Workshop on Abductive and Inductive Reasoning
Abstract

Cited by 1 (0 self)
The concepts of inductive and abductive reasoning lead to many open discussions. Some of these arise from confusion in the use of the terms `induction' and `abduction', owing to the existence of numerous conflicting definitions. While both induction and abduction aim to derive hypotheses, such hypotheses differ in the general case. In this position paper, we show how difficult it is to reach a consensus on the relations between induction and abduction. Our contribution consists in improving the characterization of induction and abduction for a particular class of problems. 1 INTRODUCTION The characterization of inductive and abductive reasoning has led, and still leads, to numerous debates. Many works are based on these forms of reasoning, for example in machine learning (e.g. [7, 11]), inductive logic programming (see [14] for a survey), abductive logic programming (e.g. [16]), or the resolution of diagnosis problems (see [4, 17, 3] for various approaches to abduction). However, many of these works just...
Deductive Support I: The Classical Definition
Abstract
• According to Classical Deductive Logical Theory (for simplicity, I'll stick to propositional or sentential logic in these lectures, which is hard enough!): E deductively supports H iff it is impossible that both: ∗ E is true, and ∗ H is false. • I will write "E ⊨ H" for "E deductively supports (or entails) H." The theory of (sentential) deductive support presupposes a Boolean algebra B of (atomic) propositions (a, b, c, ...), closed under &, ¬, ∨, → (with constants ⊤, ⊥ ∈ B). • So, E ⊨ H is shorthand for E ⊨B H, which means that, in B, E & ¬H has the same truth table as a contradiction ⊥ = p & ¬p (for some atomic p ∈ B). • This relativity of ⊨ to B is important. Consider E = "John is a bachelor" and H = "John is unmarried". If, in B, we can express E as `m & u' and H as `u', then E ⊨B H. But if B is more "coarse-grained" (E = e, H = h), then E ⊭B H.
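The truth-table definition of deductive support above is directly mechanisable: E ⊨ H iff no valuation of the atoms makes E true and H false. A minimal sketch, encoding propositions as Boolean functions over a valuation (the bachelor example follows the lecture's own atoms m, u):

```python
from itertools import product

def entails(e, h, atoms):
    """E deductively supports H iff no valuation over `atoms` makes
    E true and H false (i.e. E & ~H has a contradictory truth table)."""
    return not any(
        e(v) and not h(v)
        for vals in product([True, False], repeat=len(atoms))
        for v in [dict(zip(atoms, vals))]
    )

# Fine-grained algebra: E = m & u ("married-able John is unmarried"... the
# bachelor reading), H = u.  Here E entails H.
print(entails(lambda v: v["m"] and v["u"], lambda v: v["u"], ["m", "u"]))  # True

# Coarse-grained algebra: E and H are unrelated atoms e, h.  Entailment fails.
print(entails(lambda v: v["e"], lambda v: v["h"], ["e", "h"]))  # False
```

The two calls exhibit the relativity to B: the same sentences entail or fail to entail depending on how finely the algebra individuates them.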
Logicism and Meaning: The Case Against
, 1995
Abstract
This paper argues, contrary to the claims of other workers, that formal semantics in the sense of model theory cannot provide an adequate basis for the ascription of meanings in AI programs. Note: the original version of this paper was written in March and April 1991. It therefore predates Meanings and Messages and Programs that Model Themselves. 1 Introduction AI programs are obviously about things, more explicitly so than conventional ones. A payroll program is certainly about salaries, employees and tax law, but the representations it employs, together with the ways in which those representations are manipulated, are more obviously encoded in the program code than are the representations of most AI programs; one has to say `most' because there are AI programs that represent their knowledge as procedures in some programming language (often as LISP procedures or Prolog clauses). The observable behaviour of a payroll program also suggests that it is different in kind from...
On argument strength
, 2013
Abstract
Everyday-life reasoning and argumentation is defeasible and uncertain. I present a probability logic framework to rationally reconstruct everyday-life reasoning and argumentation. Coherence in the sense of de Finetti is used as the basic rationality norm. I discuss two basic classes of approaches to constructing measures of argument strength. The first class imposes a probabilistic relation between the premises and the conclusion. The second class imposes a deductive relation. I argue for the second class, as the first class is problematic if the arguments involve conditionals. I present a measure of argument strength that allows for dealing explicitly with uncertain conditionals in the premise set. Probabilistic approaches to argumentation have become popular in various fields including argumentation theory (e.g., Hahn and Oaksford 2006), formal epistemology (e.g., Pfeifer 2007, 2008), the psychology of reasoning (e.g., Hahn and Oaksford 2007), and computer science (e.g., Haenni 2009). Probabilistic approaches allow for dealing with the uncertainty and defeasibility of everyday-life arguments. This chapter presents a procedure to formalize everyday-life arguments in probability-logical terms and to measure their strength. "Argument" denotes an ordered triple consisting of (i) a (possibly empty) premise set, (ii) a conclusion indicator (usually denoted by "therefore" or "hence"), and (iii) a conclusion. As an example, consider the following argument A: (1) If Tweety is a bird, then Tweety can fly. (2) Tweety is a bird. (3) Therefore, Tweety can fly.
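The uncertain modus ponens in argument A admits a quick numerical reading. In the coherence-based probability logic this abstract describes, if the conditional premise has probability x = P(fly | bird) and the categorical premise has probability y = P(bird), the coherent values for the conclusion form the interval [xy, xy + 1 − y], the standard probabilistic modus ponens bounds. A sketch with invented premise probabilities:

```python
def mp_interval(x, y):
    """Coherent probability interval for the conclusion of modus ponens,
    given P(C|B) = x for the conditional premise and P(B) = y."""
    lo = x * y           # lower bound: both premises 'fire' together
    hi = x * y + 1 - y   # upper bound: add the probability that B fails
    return lo, hi

# Hypothetical premise probabilities for argument A, for illustration only:
lo, hi = mp_interval(x=0.9, y=0.8)
print((round(lo, 2), round(hi, 2)))   # (0.72, 0.92)
```

A narrow interval located near 1 is one natural reading of a strong argument; note the interval widens as P(bird) drops, reflecting the defeasibility of the inference.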
Does a piece of evidence E support a hypothesis H equally well as H supports E?
Abstract
Does a piece of evidence E support a hypothesis H equally well as E undermines, or countersupports, the negation of H (¬H)?