Results 1–10 of 68
Clausal Discovery
1997
Cited by 197 (34 self)
The clausal discovery engine Claudien is presented. Claudien is an inductive logic programming engine that fits in the descriptive data mining paradigm. Claudien addresses characteristic induction from interpretations, a task which is related to existing formalisations of induction in logic. In characteristic induction from interpretations, the regularities are represented by clausal theories, and the data using Herbrand interpretations. Because Claudien uses clausal logic to represent hypotheses, the regularities induced typically involve multiple relations or predicates. Claudien also employs a novel declarative bias mechanism to define the set of clauses that may appear in a hypothesis.
CLP(BN): Constraint logic programming for probabilistic knowledge
In Proceedings of the 19th Conference on Uncertainty in Artificial Intelligence (UAI-03), 2003
Cited by 64 (7 self)
Abstract. In Datalog, missing values are represented by Skolem constants. More generally, in logic programming missing values, or existentially quantified variables, are represented by terms built from Skolem functors. The CLP(BN) language represents the joint probability distribution over missing values in a database or logic program by using constraints to represent Skolem functions. Algorithms from inductive logic programming (ILP) can be used with only minor modification to learn CLP(BN) programs. An implementation of CLP(BN) is publicly available as part of YAP Prolog at
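The abstract's central idea — a distribution over a Skolem term instead of an unknown constant — can be illustrated with a small sketch. This is plain Python, not CLP(BN) itself (which is a Prolog-based language), and all names and numbers are invented:

```python
# A registration record with a missing grade: instead of NULL, the grade
# field holds a Skolem term (hypothetical propositional illustration).
registration = {"student": "ann", "course": "cs101",
                "grade": "sk_grade(ann,cs101)"}

# The key move: a probability distribution over the Skolem term's
# possible values, rather than a single unknown constant.
dist = {"sk_grade(ann,cs101)": {"a": 0.3, "b": 0.5, "c": 0.2}}

def prob(term, value):
    """P(term = value): a Skolem term uses its attached distribution;
    a known constant is itself with probability 1."""
    if term in dist:
        return dist[term].get(value, 0.0)
    return 1.0 if term == value else 0.0

p_b = prob(registration["grade"], "b")  # probability the missing grade is "b"
```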
Logical hidden Markov models
Journal of Artificial Intelligence Research, 2006
Cited by 52 (13 self)
Logical hidden Markov models (LOHMMs) upgrade traditional hidden Markov models to deal with sequences of structured symbols in the form of logical atoms, rather than flat characters. This note formally introduces LOHMMs and presents solutions to the three central inference problems for LOHMMs: evaluation, most likely hidden state sequence, and parameter estimation. The resulting representation and algorithms are experimentally evaluated on problems from the domain of bioinformatics.
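The first of the three inference problems, evaluation, reduces in the flat (propositional) case to the standard HMM forward algorithm. The following sketch computes P(observations) that way; the two-state weather model and all numbers are invented for illustration:

```python
# Forward algorithm: evaluation in an ordinary HMM, the propositional
# special case of a LOHMM (toy model, all probabilities invented).
def forward(obs, states, start_p, trans_p, emit_p):
    """Return P(obs) by summing over all hidden state sequences."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: sum(alpha[r] * trans_p[r][s] for r in states) * emit_p[s][o]
                 for s in states}
    return sum(alpha.values())

states = ("rainy", "sunny")
start_p = {"rainy": 0.6, "sunny": 0.4}
trans_p = {"rainy": {"rainy": 0.7, "sunny": 0.3},
           "sunny": {"rainy": 0.4, "sunny": 0.6}}
emit_p = {"rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

p = forward(("walk", "shop"), states, start_p, trans_p, emit_p)
```

The LOHMM generalisation replaces the flat symbols with logical atoms and the per-state tables with abstract transitions over those atoms.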
TildeCRF: Conditional random fields for logical sequences
In Proceedings of the 15th European Conference on Machine Learning (ECML-06), 2006
Cited by 39 (18 self)
Abstract. Conditional Random Fields (CRFs) provide a powerful instrument for labeling sequences. So far, however, CRFs have only been considered for labeling sequences over flat alphabets. In this paper, we describe TildeCRF, the first method for training CRFs on logical sequences, i.e., sequences over an alphabet of logical atoms. TildeCRF’s key idea is to use relational regression trees in Dietterich et al.’s gradient tree boosting approach. Thus, the CRF potential functions are represented as weighted sums of relational regression trees. Experiments show a significant improvement over established results achieved with hidden Markov models and Fisher kernels for logical sequences.
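The gradient tree boosting loop the abstract refers to can be illustrated in its simplest propositional form: instead of relational regression trees fitted to the CRF gradient, the sketch below fits one-split regression stumps to squared-error residuals on toy 1-D data (plain Python; all data and names invented):

```python
# Minimal functional-gradient boosting with regression stumps: each round
# fits a stump to the current residuals and adds it to the ensemble.
def fit_stump(xs, ys):
    """Best single-threshold stump (threshold, left mean, right mean) by SSE."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((y - lm) ** 2 for y in left)
               + sum((y - rm) ** 2 for y in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda x, t=t, lm=lm, rm=rm: lm if x <= t else rm

def boost(xs, ys, rounds=20, lr=0.5):
    base = sum(ys) / len(ys)
    stumps = []
    def predict(x):
        return base + sum(lr * s(x) for s in stumps)
    for _ in range(rounds):
        residuals = [y - predict(x) for x, y in zip(xs, ys)]
        stumps.append(fit_stump(xs, residuals))
    return predict

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]  # a step function to recover
model = boost(xs, ys)
```

In TildeCRF the same loop runs over relational regression trees and the residuals come from the gradient of the CRF log-likelihood, so the learned potential functions are weighted sums of trees.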
Computational Logic and Human Thinking: How to be Artificially Intelligent
, 2011
"... The mere possibility of Artificial Intelligence (AI) – of machines that can think and act as intelligently as humans – can generate strong emotions. While some enthusiasts are excited by the thought that one day machines may become more intelligent than people, many of its critics view such a prosp ..."
Abstract

Cited by 37 (10 self)
 Add to MetaCart
The mere possibility of Artificial Intelligence (AI) – of machines that can think and act as intelligently as humans – can generate strong emotions. While some enthusiasts are excited by the thought that one day machines may become more intelligent than people, many of its critics view such a prospect with horror. Partly because these controversies attract so much attention, one of the most important accomplishments of AI has gone largely unnoticed: the fact that many of its advances can also be used directly by people, to improve their own human intelligence. Chief among these advances is Computational Logic. Computational Logic builds upon traditional logic, which was originally developed to help people think more effectively. It employs the techniques of symbolic logic, which has been used to build the foundations of mathematics and computing. However, compared with traditional logic, Computational Logic is much more powerful; and compared with symbolic logic, it is much simpler and more practical. Although the applications of Computational Logic in AI require the use of mathematical notation, its human applications do not. As a consequence, I have written the main part of this book informally, to reach as wide an audience as possible. Because human thinking is also the subject of study in many other fields, I have drawn upon related studies in Cognitive Psychology, Linguistics, Philosophy, Law, Management Science and English
Logical Bayesian Networks and their relation to other probabilistic logical models
In Proceedings of the 15th International Conference on Inductive Logic Programming (ILP-05), volume 3625 of Lecture Notes in Artificial Intelligence, 2005
Cited by 31 (10 self)
We review Logical Bayesian Networks, a language for probabilistic logical modelling, and discuss its relation to Probabilistic Relational Models and Bayesian Logic Programs. Probabilistic logical models are models combining aspects of probability theory with aspects of Logic Programming, first-order logic or relational languages. Recently a variety of languages to describe such models has been introduced. For some languages techniques exist to learn such models from data. Two examples are Probabilistic Relational Models (PRMs) [4] and Bayesian Logic Programs (BLPs) [5]. These two languages are probably the most popular and well-known in the Relational Data Mining community. We introduce a new language, Logical Bayesian Networks (LBNs) [2], that is strongly related to PRMs and BLPs yet solves some of their problems with respect to knowledge representation (related to expressiveness and intuitiveness). PRMs, BLPs and LBNs all follow the principle of Knowledge Based Model Construction: they offer a language that can be used to specify general probabilistic logical knowledge and they provide a methodology to construct a propositional model based on this knowledge when given a specific
Logical and Relational Learning
Cited by 29 (5 self)
Abstract. Statistical relational learning (SRL) addresses one of the central open questions of AI: the combination of relational or first-order logic with principled probabilistic and statistical approaches to inference and learning. This thesis approaches SRL from an inductive logic programming (ILP) perspective and starts with developing a general framework for SRL: probabilistic ILP. Based on this foundation, the thesis shows how to incorporate the logical concepts of objects and relations among these objects into Bayesian networks. As time and actions are not just other relations, it afterwards develops approaches to probabilistic ILP over time and for making complex decisions in relational domains. Finally, it is shown that SRL approaches naturally yield kernels for structured data. The resulting approaches are illustrated using examples from genetics, bioinformatics, and planning domains.
nFOIL: Integrating Naïve Bayes and FOIL
2005
Cited by 28 (5 self)
We present the system nFOIL. It tightly integrates the naïve Bayes learning scheme with the inductive logic programming rule-learner FOIL. In contrast to previous combinations, which have employed naïve Bayes only for post-processing the rule sets, nFOIL employs the naïve Bayes criterion to directly guide its search. Experimental evidence shows that nFOIL performs better than both its baseline algorithm FOIL and the post-processing approach, and is at the same time competitive with more sophisticated approaches.
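The scoring idea described above — using a naïve Bayes criterion rather than coverage alone to rank candidate clauses — can be sketched propositionally. Here clauses are stubbed as Python predicates and a candidate is scored by the conditional log-likelihood of a naïve Bayes model that uses clause coverage as a boolean feature; all data and names are invented:

```python
import math

def nb_cll(clauses, examples):
    """Conditional log-likelihood of a naive Bayes model whose boolean
    features are the given clauses (Laplace smoothing throughout)."""
    labels = (True, False)
    n = {c: 1.0 for c in labels}                      # smoothed class counts
    cnt = {(i, c): 1.0 for i in range(len(clauses)) for c in labels}
    for x, y in examples:
        n[y] += 1
        for i, clause in enumerate(clauses):
            if clause(x):
                cnt[(i, y)] += 1
    total = sum(n.values())
    cll = 0.0
    for x, y in examples:
        score = {}
        for c in labels:
            s = math.log(n[c] / total)
            for i, clause in enumerate(clauses):
                p = cnt[(i, c)] / (n[c] + 1.0)        # P(feature true | class)
                s += math.log(p if clause(x) else 1.0 - p)
            score[c] = s
        m = max(score.values())                       # normalise in log space
        z = sum(math.exp(v - m) for v in score.values())
        cll += score[y] - m - math.log(z)
    return cll

data = [("ann", True), ("bob", True), ("carl", False), ("dora", False)]
rule_good = lambda x: x in ("ann", "bob")   # covers exactly the positives
rule_bad = lambda x: True                   # covers everything
best = max([rule_good, rule_bad], key=lambda r: nb_cll([r], data))
```

A discriminative rule scores higher than an uninformative one under this criterion, which is what lets the search be guided by the probabilistic model directly.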
Protocols from perceptual observations
Artificial Intelligence, 2005
Cited by 23 (8 self)
This paper presents a cognitive vision system capable of autonomously learning protocols from perceptual observations of dynamic scenes. The work is motivated by the aim of creating a synthetic agent that can observe a scene containing interactions between unknown objects and agents, and learn models of these sufficient to act in accordance with the implicit protocols present in the scene. Discrete concepts (utterances and object properties), and temporal protocols involving these concepts, are learned in an unsupervised manner from continuous sensor input alone. Crucial to this learning process are methods for spatio-temporal attention applied to the audio and visual sensor data. These identify subsets of the sensor data relating to discrete concepts. Clustering within continuous feature spaces is used to learn object property and utterance models from processed sensor data, forming a symbolic description. The Progol Inductive Logic Programming system is subsequently used to learn symbolic models of the temporal protocols presented in the presence of noise and over-representation in the symbolic data input to it. The models learned are used to drive a synthetic agent that can interact with the world in a semi-natural way. The system has been evaluated in the domain of table-top game playing and has been shown to be successful at learning protocol behaviours in such real-world audio-visual environments. Keywords: cognitive vision, autonomous learning, unsupervised clustering, symbol grounding, inductive logic programming, spatio-temporal reasoning
Parameter learning in probabilistic databases: A least squares approach
2008
Cited by 22 (7 self)
Abstract. We introduce the problem of learning the parameters of the probabilistic database ProbLog. Given the observed success probabilities of a set of queries, we compute the probabilities attached to facts that have a low approximation error on the training examples as well as on unseen examples. Assuming Gaussian error terms on the observed success probabilities, this naturally leads to a least squares optimization problem. Our approach, called LeProbLog, is able to learn both from queries and from proofs and even from both simultaneously. This makes it flexible and allows faster training in domains where the proofs are available. Experiments on real-world data show the usefulness and effectiveness of this least squares calibration of probabilistic databases.
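The least-squares setting the abstract describes can be sketched in miniature: if each training query succeeds through one known proof, its predicted success probability is the product of the probabilities of the facts used in that proof, and the fact probabilities can be fitted by gradient descent on the squared error. This is a drastically simplified stand-in for LeProbLog (which handles multiple proofs via BDDs); all facts, proofs, and observed probabilities below are invented:

```python
import math

# Parametrise each fact probability as sigmoid(w) so it stays in (0, 1).
def sigmoid(w):
    return 1.0 / (1.0 + math.exp(-w))

facts = ["edge(a,b)", "edge(b,c)"]
# (facts used in the query's single proof, observed success probability)
data = [
    (["edge(a,b)"], 0.8),
    (["edge(b,c)"], 0.5),
    (["edge(a,b)", "edge(b,c)"], 0.4),  # e.g. a path query; consistent: 0.8*0.5
]

w = {f: 0.0 for f in facts}
for _ in range(3000):
    grad = {f: 0.0 for f in facts}
    for proof, target in data:
        pred = math.prod(sigmoid(w[f]) for f in proof)
        err = pred - target
        for f in proof:
            # d pred / d w_f = pred * (1 - sigmoid(w_f))
            grad[f] += 2.0 * err * pred * (1.0 - sigmoid(w[f]))
    for f in facts:
        w[f] -= 0.5 * grad[f]

learned = {f: sigmoid(wf) for f, wf in w.items()}
```

With consistent targets the squared error goes to zero and the fact probabilities are recovered; with noisy observations the same loss yields the least-squares calibration the paper refers to.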