Results 11–20 of 40
Using Bayesian Classifiers to Combine Rules. In Working Notes of MRDM-04, 2004.
Cited by 8 (4 self)
Abstract. One of the most popular techniques for multirelational data mining is Inductive Logic Programming (ILP). Given a set of positive and negative examples, an ILP system ideally finds a logical description of the underlying data model that discriminates the positive examples from the negative examples. However, in multirelational data mining, one often has to deal with erroneous and missing information. ILP systems can still be useful by generating rules that capture the main relationships in the system. An important question is how to combine these rules to form an accurate classifier. An interesting approach to this problem is to use Bayes-net-based classifiers. We compare Naïve Bayes, Tree Augmented Naïve Bayes (TAN) and the Sparse Candidate algorithm to a voting classifier. We also show that a full classifier can be implemented as a CLP(BN) program [14], giving some insight into how to pursue further improvements.
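The combination scheme this abstract describes, treating each learned ILP rule as a boolean feature and letting a Bayesian classifier weigh the rule firings, can be sketched roughly as follows. This is a minimal pure-Python Bernoulli Naïve Bayes with Laplace smoothing; the toy rules and examples are invented for illustration and are not from the paper:

```python
import math

def train_naive_bayes(X, y, alpha=1.0):
    """Fit a Bernoulli Naive Bayes model.

    X: list of binary feature vectors (1 = the ILP rule fired on the example),
    y: list of class labels (1 = positive, 0 = negative).
    Returns, per class, the log-prior and the smoothed firing probabilities.
    """
    classes = sorted(set(y))
    n, d = len(X), len(X[0])
    model = {}
    for c in classes:
        rows = [x for x, label in zip(X, y) if label == c]
        prior = math.log(len(rows) / n)
        # Laplace-smoothed P(rule_j fires | class c)
        probs = [(sum(r[j] for r in rows) + alpha) / (len(rows) + 2 * alpha)
                 for j in range(d)]
        model[c] = (prior, probs)
    return model

def predict(model, x):
    """Return the class with the highest posterior log-probability."""
    def score(c):
        prior, probs = model[c]
        return prior + sum(
            math.log(p if xi else 1.0 - p) for xi, p in zip(x, probs))
    return max(model, key=score)

# Toy data: three rules evaluated on six examples.
X = [[1, 1, 0], [1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 0], [0, 0, 0]]
y = [1, 1, 1, 0, 0, 0]
model = train_naive_bayes(X, y)
print(predict(model, [1, 1, 0]))  # rules 1 and 2 fire -> predicted positive
```

A TAN or Sparse Candidate classifier, as compared in the paper, would additionally learn dependencies between the rules instead of assuming them conditionally independent.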
ALLPAD: Approximate learning of logic programs with annotated disjunctions (Tech. Rep.), 2006.
Cited by 8 (8 self)
Abstract. Logic Programs with Annotated Disjunctions (LPADs) provide a simple and elegant framework for representing probabilistic knowledge in logic programming. In this paper I consider the problem of learning ground LPADs starting from a set of interpretations annotated with their probability. I present the system ALLPAD for solving this problem. ALLPAD modifies the previous system LLPAD in order to tackle real-world learning problems more effectively. This is achieved by looking for an approximate solution rather than a perfect one. ALLPAD has been tested on the problem of classifying proteins according to their tertiary structures, and the results compare favorably with most other approaches.
A Survey of First-Order Probabilistic Models, 2008.
Cited by 7 (0 self)
There has been a long-standing division in Artificial Intelligence between logical and probabilistic reasoning approaches. While probabilistic models can deal well with inherent uncertainty in many real-world domains, they operate on a mostly propositional level. Logic systems, on the other hand, can deal with much richer representations, especially first-order ones, but treat uncertainty only in limited ways. Therefore, an integration of these types of inference is highly desirable, and many approaches have been proposed, especially from the 1990s on. These solutions come from many different subfields and vary greatly in language, features and (when available at all) inference algorithms. Their relation to each other, as well as their semantics, is therefore not always clear. In this survey, we present the main aspects of the solutions proposed and group them according to language, semantics and inference algorithm. In doing so, we draw relations between them and discuss particularly important choices and tradeoffs.
ILP turns 20: Biography and future challenges. Machine Learning, 2011.
Cited by 7 (6 self)
Inductive Logic Programming (ILP) is an area of Machine Learning which has now reached its twentieth year. Using the analogy of a human biography, this paper recalls the development of the subject from its infancy through childhood and teenage years. We show how in each phase ILP has been characterised by an attempt to extend theory and implementations in tandem with the development of novel and challenging real-world applications. Lastly, by projection we suggest directions for research which will help the subject come of age.
Generative Modeling by PRISM
Cited by 5 (0 self)
Abstract. PRISM is a probabilistic extension of Prolog. It is a high-level language for probabilistic modeling capable of learning statistical parameters from observed data. After reviewing it from various viewpoints, we examine some technical details related to logic programming, including semantics, search and program synthesis.
Sampling First Order Logical Particles
Cited by 4 (2 self)
Approximate inference in dynamic systems is the problem of estimating the state of the system given a sequence of actions and partial observations. High-precision estimation is fundamental in many applications such as diagnosis, natural language processing, tracking, planning, and robotics. In this paper we present an algorithm that samples possible deterministic executions of a probabilistic sequence. The algorithm takes advantage of a compact representation (using first-order logic) for actions and world states to improve the precision of its estimation. Theoretical and empirical results show that the algorithm's expected error is smaller than that of propositional sampling and Sequential Monte Carlo (SMC) sampling techniques.
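The propositional SMC baseline the paper compares against can be sketched as a bootstrap particle filter over explicit states. The sketch below is a generic textbook version on a toy one-dimensional Gaussian random walk, not the paper's first-order algorithm; the transition and observation models are invented for illustration:

```python
import math
import random

def particle_filter(observations, n_particles=1000, seed=0):
    """Bootstrap particle filter for a 1-D Gaussian random walk.

    Transition: x_t = x_{t-1} + N(0, 1); observation: y_t = x_t + N(0, 1).
    Returns the posterior-mean state estimate after each observation.
    """
    rng = random.Random(seed)
    particles = [0.0] * n_particles
    estimates = []
    for y in observations:
        # Propagate each particle through the transition model.
        particles = [x + rng.gauss(0, 1) for x in particles]
        # Weight by the (unnormalised) Gaussian observation likelihood.
        weights = [math.exp(-0.5 * (y - x) ** 2) for x in particles]
        total = sum(weights)
        estimates.append(sum(w * x for w, x in zip(weights, particles)) / total)
        # Resample particles in proportion to their weights.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

print(particle_filter([1.0, 2.0, 3.0]))  # estimates drift toward the observations
```

The first-order approach in the paper improves on this by grouping states that a logical representation does not distinguish, so each sample covers many propositional trajectories at once.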
Probabilistic Inference over Image Networks
Cited by 4 (4 self)
Abstract. Digital Libraries contain collections of multimedia objects and provide services for their management, sharing and retrieval. The objects involved have two levels of complexity: the former refers to the inner complexity of an object, while the latter takes into account the implicit/explicit relationships among objects. Traditional machine learning classifiers do not consider the relationships among objects, assuming them independent and identically distributed. Recently, link-based classification methods have been proposed that try to classify objects by exploiting their relationships (links). In this paper, we deal with objects corresponding to digital images, even if the proposed approach can be naturally applied to different kinds of multimedia objects. Relationships can be expressed among the features of the same image or among features belonging to different images. The aim of this work is to verify whether a link-based classifier based on a Statistical Relational Learning (SRL) language can improve the accuracy of a classical k-nearest-neighbour approach. Experiments show that modelling the relationships in a real-world dataset using an SRL model reduces the classification error.
Probabilistic logical models for Mendel's experiments: An exercise. In Inductive Logic Programming (ILP 2004), Work in Progress Track, 2004.
Cited by 2 (0 self)
Abstract. Several probabilistic logical modelling languages are compared on the task of describing or learning the inheritance mechanism discovered by Mendel. This small exercise reveals differences with respect to how easily certain kinds of domain knowledge (which may improve the learnability of the model) can be expressed by them.
CHRiSM: CHance Rules induce Statistical Models. In Proceedings of the Sixth International Workshop on Constraint Handling Rules, 2009.
Cited by 2 (1 self)
Abstract. A new probabilistic-logic formalism, called CHRiSM, is introduced. CHRiSM is based on a combination of CHR and PRISM. It can be used for high-level rapid prototyping of complex statistical models by means of chance rules. The underlying PRISM system can then be used for several probabilistic inference tasks, including parameter learning. We describe a source-to-source transformation from CHRiSM rules to PRISM, via CHR(PRISM). Finally we discuss the relation between CHRiSM and probabilistic logic programming, in particular CP-logic.
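The "chance rule" idea, a CHR-style rewrite that fires only with an annotated probability, can be illustrated outside CHRiSM with a tiny hand-rolled interpreter. The rule encoding, the store representation, and the coin-toss example below are all invented for illustration and simplify the real semantics (single-headed rules only, and a matched-but-unfired rule is simply not retried):

```python
import random

def run_chance_rules(store, rules, seed=0):
    """Exhaustively apply single-headed chance rules to a constraint store.

    Each rule is (probability, head, body): when a constraint equal to `head`
    is found, the rule fires with the annotated probability, replacing it by
    the constraints in `body`; if the coin fails, the constraint stays and
    that rule instance is not retried.
    """
    rng = random.Random(seed)
    agenda = list(store)
    result = []
    while agenda:
        c = agenda.pop()
        for prob, head, body in rules:
            if c == head:
                if rng.random() < prob:
                    agenda.extend(body)  # body constraints may fire further rules
                else:
                    result.append(c)  # coin failed: keep the constraint as-is
                break
        else:
            result.append(c)  # no rule matches this constraint
    return result

# A chance rule in the spirit of "toss <=> 0.5 ?? heads":
rules = [(0.5, 'toss', ['heads'])]
result = run_chance_rules(['toss'] * 1000, rules)
print(result.count('heads'))  # roughly half of the 1000 tosses
```

In actual CHRiSM the probabilistic choice is delegated to PRISM's switches, which is what makes parameter learning over the rule probabilities possible.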
Reasoning with Recursive Loops under the PLP Framework
Cited by 2 (0 self)
Recursive loops in a logic program present a challenging problem to the PLP (Probabilistic Logic Programming) framework. On the one hand, they loop forever so that the PLP backward-chaining inferences would never stop. On the other hand, they may generate cyclic influences, which are disallowed in Bayesian networks. Therefore, in existing PLP approaches logic programs with recursive loops are considered problematic and thus are excluded. In this paper, we propose a novel solution to this problem by making use of recursive loops to build a stationary dynamic Bayesian network. We introduce a new PLP formalism, called a Bayesian knowledge base. It allows recursive loops and contains logic clauses of the form A ← A1, ..., Al, true, Context, Types, which naturally formulate the knowledge that the Ai have direct influences on A in the context Context under the type constraints Types. We use the well-founded model of a logic program to define the direct influence relation and apply SLG-resolution to compute the space of random variables together with their parental connections. This establishes a clear declarative semantics for a Bayesian knowledge base. We view a logic program with recursive loops as a special temporal model, where backward-chaining cycles of the form A ← ... A ← ... are interpreted as feedbacks. This extends existing PLP approaches, which mainly aim at (non-temporal) relational models.
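The core move in this abstract, keeping recursive clauses but reading each influence edge that lies on a cycle as a feedback edge between time slices, can be sketched on ground clauses. The clause encoding (a pair of head and body atoms) and the two-person contact example are invented for illustration, not taken from the paper:

```python
def build_influence_graph(clauses):
    """Collect direct-influence edges B -> A from clauses A <- A1, ..., Al.

    Edges lying on a cycle are reinterpreted temporally, as in a stationary
    DBN construction: a cyclic edge B -> A becomes an inter-slice edge
    B@(t-1) -> A@t, while acyclic edges stay within one time slice.
    """
    edges = {(b, a) for a, body in clauses for b in body}

    def reachable(src, dst):
        """Depth-first search over the influence edges."""
        seen, stack = set(), [src]
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(a for b, a in edges if b == node)
        return False

    # An edge B -> A is cyclic iff A can influence B back.
    intra = {(b, a) for b, a in edges if not reachable(a, b)}
    inter = edges - intra  # cyclic influences become feedback across slices
    return sorted(intra), sorted(inter)

# Toy ground program: each person's infection depends on the other's.
clauses = [('aids_a', ['aids_b', 'contact_ab']),
           ('aids_b', ['aids_a', 'contact_ba'])]
print(build_influence_graph(clauses))
```

In the paper the variable space and parental connections are computed by SLG-resolution over the well-founded model rather than by this naive reachability check, but the separation into intra-slice and feedback edges is the same idea.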