Results 1–10 of 20
F.: Experimentation of an expectation maximization algorithm for probabilistic logic programs. Intelligenza Artificiale, 2012.
Cited by 11 (5 self)
Statistical Relational Learning and Probabilistic Inductive Logic Programming are two emerging fields that use representation languages able to combine logic and probability. In the field of Logic Programming, the distribution semantics is one of the prominent approaches for representing uncertainty and underlies many languages such as ICL, PRISM, ProbLog and LPADs. Learning the parameters for such languages requires an Expectation Maximization algorithm since their equivalent Bayesian networks contain hidden variables. EMBLEM (EM over BDDs for probabilistic Logic programs Efficient Mining) is an EM algorithm for languages following the distribution semantics that computes expectations directly on the Binary Decision Diagrams that are built for inference. In this paper we present experiments comparing EMBLEM with LeProbLog, Alchemy, CEM, RIB and LFIProbLog on six real world datasets. The results show that EMBLEM is able to solve problems on which the other systems fail and it often achieves significantly higher areas under the Precision Recall and the ROC curves in a similar time.
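The abstract above rests on the distribution semantics, under which each probabilistic fact is an independent Boolean choice and the probability of a query is the sum of the probabilities of the worlds (total choices) in which the query holds. A minimal sketch of that definition, with made-up facts and rules (not taken from any of the cited systems):

```python
from itertools import product

# Hypothetical probabilistic facts: name -> probability of being true.
facts = {"burglary": 0.1, "earthquake": 0.2}

def query_holds(world):
    # Deterministic rules: alarm :- burglary.  alarm :- earthquake.
    return world["burglary"] or world["earthquake"]

def query_probability():
    # Enumerate every world (total choice over the probabilistic facts)
    # and sum the probabilities of the worlds where the query succeeds.
    names = list(facts)
    total = 0.0
    for choice in product([True, False], repeat=len(names)):
        world = dict(zip(names, choice))
        p = 1.0
        for name in names:
            p *= facts[name] if world[name] else 1.0 - facts[name]
        if query_holds(world):
            total += p
    return total

print(round(query_probability(), 10))  # 1 - 0.9*0.8 = 0.28
```

Enumerating worlds is exponential in the number of facts, which is why systems like EMBLEM and PITA work on Binary Decision Diagrams instead; the sketch only illustrates the semantics itself.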
F.: Learning the structure of probabilistic logic programs. ILP 2011, LNCS, 2012.
Cited by 9 (5 self)
Abstract. There is a growing interest in the field of Probabilistic Inductive Logic Programming, which uses languages that integrate logic programming and probability. Many of these languages are based on the distribution semantics and recently various authors have proposed systems for learning the parameters (PRISM, LeProbLog, LFIProbLog and EMBLEM) or both the structure and the parameters (SEM-CP-logic) of these languages. EMBLEM for example uses an Expectation Maximization approach in which the expectations are computed on Binary Decision Diagrams. In this paper we present the algorithm SLIPCASE for “Structure LearnIng of ProbabilistiC logic progrAmS with Em over bdds”. It performs a beam search in the space of the language of Logic Programs with Annotated Disjunctions (LPAD), using the log likelihood of the data as the guiding heuristic. To estimate the log likelihood of theory refinements it performs a limited number of Expectation Maximization iterations of EMBLEM. SLIPCASE has been tested on three real-world datasets and compared with SEM-CP-logic and Learning using Structural Motifs, an algorithm for Markov Logic Networks. The results show that SLIPCASE achieves higher areas under the precision-recall and ROC curves and is more scalable.
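The search strategy the abstract describes, stripped to its skeleton: keep the best few theory refinements at each step, scored by an (approximate) log-likelihood. The `refine` and `score` callables below are hypothetical stand-ins for SLIPCASE's real refinement operator and its limited-iteration EM scoring; the toy usage at the end is purely illustrative.

```python
def beam_search(initial_theory, refine, score, beam_size=3, steps=5):
    """Generic beam search: expand every theory in the beam, keep the
    top `beam_size` candidates by score, and remember the best overall."""
    beam = [(score(initial_theory), initial_theory)]
    best = beam[0]
    for _ in range(steps):
        candidates = []
        for _, theory in beam:
            for refined in refine(theory):
                candidates.append((score(refined), refined))
        if not candidates:
            break
        candidates.sort(key=lambda pair: pair[0], reverse=True)
        beam = candidates[:beam_size]
        if beam[0][0] > best[0]:
            best = beam[0]
    return best

# Toy usage: "theories" are integers, refinement adds or removes 1,
# and the "log-likelihood" peaks at 7.
ll, theory = beam_search(0, lambda t: [t - 1, t + 1], lambda t: -abs(t - 7))
print(theory)  # 5 (the best theory reachable in 5 steps from 0)
```

The beam width trades search breadth against the cost of scoring each refinement, which is why SLIPCASE caps the number of EM iterations used for scoring.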
R.: Parameter learning for probabilistic ontologies. In: International Conference on Web Reasoning and Rule Systems, LNCS, 2013.
Cited by 8 (6 self)
Abstract. Recently, the problem of representing uncertainty in Description Logics (DLs) has received increasing attention. In probabilistic DLs, axioms contain numeric parameters that are often difficult for a human to specify or tune. In this paper we present an approach for learning and tuning the parameters of probabilistic ontologies from data. The resulting algorithm, called EDGE, is targeted to DLs following the DISPONTE approach, which applies the distribution semantics to DLs.
Structure learning of probabilistic logic programs by searching the clause space. CoRR, arXiv:1309.2080, 2013.
Probabilistic Ontologies in Datalog+/–
Cited by 2 (2 self)
Abstract. In logic programming the distribution semantics is one of the most popular approaches for dealing with uncertain information. In this paper we apply the distribution semantics to the Datalog+/– language, which is grounded in logic programming and allows tractable ontology querying. In the resulting semantics, called DISPONTE, formulas of a probabilistic ontology can be annotated with an epistemic or a statistical probability. The epistemic probability represents a degree of confidence in the formula, while the statistical probability considers the populations to which the formula is applied. The probability of a query is defined in terms of a finite set of finite explanations for the query. We also compare the DISPONTE approach for Datalog+/– ontologies with that of Probabilistic Datalog+/–, where an ontology is composed of a Datalog+/– theory whose formulas are associated with an assignment of values for the random variables of a companion Markov Logic Network.
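The abstract above defines the probability of a query in terms of a finite set of explanations. When the annotated formulas are independent, one way to combine explanations that may share axioms is inclusion-exclusion over subsets of explanations; real systems use compact representations such as BDDs or pinpointing formulas instead, and the axiom names below are made up for illustration:

```python
from itertools import combinations

# Hypothetical independently-true probabilistic axioms.
axiom_prob = {"a1": 0.5, "a2": 0.4, "a3": 0.7}
# Two explanations for a query, sharing the axiom a2.
explanations = [{"a1", "a2"}, {"a2", "a3"}]

def prob_of_conjunction(axioms):
    """Probability that all axioms in the set hold (independence)."""
    p = 1.0
    for a in axioms:
        p *= axiom_prob[a]
    return p

def query_probability(explanations):
    """P(at least one explanation holds) by inclusion-exclusion."""
    total = 0.0
    for k in range(1, len(explanations) + 1):
        for subset in combinations(explanations, k):
            joint = set().union(*subset)  # shared axioms count once
            total += (-1) ** (k + 1) * prob_of_conjunction(joint)
    return total

print(round(query_probability(explanations), 4))  # 0.2 + 0.28 - 0.14 = 0.34
```

Inclusion-exclusion is exponential in the number of explanations, which motivates the symbolic encodings used in practice.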
A Description Logics Tableau Reasoner in Prolog
Cited by 2 (1 self)
Abstract. Description Logics (DLs) are gaining widespread adoption as the popularity of the Semantic Web increases. Traditionally, reasoning algorithms for DLs have been implemented in procedural languages such as Java or C++. In this paper, we present the system TRILL for “Tableau Reasoner for descrIption Logics in proLog”. TRILL answers queries to SHOIN(D) knowledge bases using a tableau algorithm. Prolog nondeterminism is used to easily handle the nondeterministic expansion rules that produce more than one tableau. Moreover, given a query, TRILL is able to return instantiated explanations for the query, i.e., instantiated minimal sets of axioms that allow the entailment of the query. TRILL exploits the Thea2 library for parsing ontologies and for the internal Prolog representation of DL axioms.
Speeding Up Inference for Probabilistic Logic Programs
Cited by 2 (1 self)
Probabilistic Logic Programming (PLP) allows the representation of domains containing many entities connected by uncertain relations and has many applications, in particular in Machine Learning. PITA is a PLP algorithm for computing the probability of queries that exploits tabling, answer subsumption and Binary Decision Diagrams (BDDs). PITA does not impose any restriction on the programs. Other algorithms, such as PRISM, reduce computation time by imposing restrictions on the program, namely that subgoals are independent and that clause bodies are mutually exclusive. Another assumption that simplifies inference is that clause bodies are independent. In this paper we present the algorithms PITA(IND,IND) and PITA(OPT). PITA(IND,IND) assumes that subgoals and clause bodies are independent. PITA(OPT) instead first checks whether these assumptions hold for subprograms and subgoals: if they do, PITA(OPT) uses a simplified calculation, otherwise it resorts to BDDs. Experiments on a number of benchmark datasets show that PITA(IND,IND) is the fastest on datasets respecting the assumptions, while PITA(OPT) is a good option when nothing is known about a dataset.
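The simplified calculation the independence assumptions enable can be sketched concretely: when subgoals and clause bodies are independent, conjunctions combine by plain products and disjunctions by noisy-OR, with no BDD needed. The probabilities below are illustrative, not from any benchmark:

```python
def and_independent(probs):
    """P(a, b, ...) under subgoal independence: product of probabilities."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_independent(probs):
    """P(a; b; ...) under body independence (noisy-OR): 1 - prod(1 - p)."""
    result = 1.0
    for p in probs:
        result *= 1.0 - p
    return 1.0 - result

# A goal proved by two independent clause bodies, each a conjunction
# of two independent subgoals:
body1 = and_independent([0.4, 0.5])  # 0.20
body2 = and_independent([0.3, 0.6])  # 0.18
print(round(or_independent([body1, body2]), 4))  # 1 - 0.8*0.82 = 0.344
```

When the assumptions fail (shared subgoals, overlapping bodies), these formulas over- or under-count shared random choices, which is exactly why PITA(OPT) falls back to BDDs in that case.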
Learning the Parameters of Probabilistic Description Logics
Cited by 2 (1 self)
Abstract. Uncertain information is ubiquitous in the Semantic Web, due to the methods used for collecting data and to the inherently distributed nature of the data sources. It is thus very important to develop probabilistic Description Logics (DLs) so that uncertainty is directly represented and managed at the language level. The DISPONTE semantics for probabilistic DLs applies the distribution semantics of probabilistic logic programming to DLs. In DISPONTE, axioms are labeled with numeric parameters representing their probability. These are often difficult for a human to specify or tune. On the other hand, data is usually available that can be leveraged for setting the parameters. In this paper, we present EDGE, which learns the parameters of DLs following the DISPONTE semantics. EDGE is an EM algorithm in which the required expectations are computed directly on the binary decision diagrams that are built for inference. Experiments on two datasets show that EDGE achieves higher areas under the Precision-Recall and ROC curves than an association rule learner in a comparable or smaller time.
Learning probabilistic description logics. In: URSW III, LNCS, 2014.
Cited by 1 (1 self)
Abstract. We consider the problem of learning both the structure and the parameters of Probabilistic Description Logics under the DISPONTE semantics. DISPONTE is based on the distribution semantics for Probabilistic Logic Programming and assigns a probability to assertional and terminological axioms. The system EDGE, given a DISPONTE knowledge base (KB) and sets of positive and negative examples in the form of concept assertions, returns the values of the probabilities associated with axioms. We present the system LEAP, which learns both the structure and the parameters of DISPONTE KBs by exploiting EDGE. LEAP is based on the system CELOE for ontology engineering and exploits its search strategy in the space of possible axioms. LEAP uses the axioms returned by CELOE to build a KB so that the likelihood of the examples is maximized. We present experiments showing the potential of EDGE and LEAP.
Tableau Reasoners for Probabilistic Ontologies Exploiting Logic Programming Techniques
Abstract. The adoption of Description Logics for modeling real-world domains within the Semantic Web has increased exponentially in the last years, also due to the availability of a large number of reasoning algorithms. Most of them exploit the tableau algorithm, which has to manage nondeterminism, a feature that is not easy to handle in procedural languages such as Java or C++. Reasoning on real-world domains also requires the capability of managing probabilistic and uncertain information. We thus present TRILL, for "Tableau Reasoner for descrIption Logics in proLog", and TRILL^P, for "TRILL powered by Pinpointing formulas", which implement the tableau algorithm and return the probability of queries. TRILL^P, instead of computing the set of explanations for a query, computes a Boolean formula representing them, speeding up the computation.