Results 1–10 of 18
Lifted Aggregation in Directed First-Order Probabilistic Models
Abstract

Cited by 13 (0 self)
As exact inference for first-order probabilistic graphical models at the propositional level can be formidably expensive, there is an ongoing effort to design efficient lifted inference algorithms for such models. This paper discusses directed first-order models that require an aggregation operator when a parent random variable is parameterized by logical variables that are not present in a child random variable. We introduce a new data structure, aggregation parfactors, to describe aggregation in directed first-order models. We show how to extend Milch et al.’s C-FOVE algorithm to perform lifted inference in the presence of aggregation parfactors. We also show that there are cases where the polynomial time complexity (in the domain size of logical variables) of the C-FOVE algorithm can be reduced to logarithmic time complexity using aggregation parfactors.
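The complexity claim in this abstract — polynomial reduced to logarithmic in the domain size — is easy to illustrate for a simple OR-aggregation over n independent, identically distributed parents. The sketch below illustrates that arithmetic only; it is not the paper's aggregation-parfactor data structure:

```python
def or_aggregate_naive(p, n):
    # P(c = true) when c = p1 OR ... OR pn, parents i.i.d. with P(pi = true) = p.
    # One multiplication per parent: O(n) in the domain size.
    q = 1.0
    for _ in range(n):
        q *= (1.0 - p)
    return 1.0 - q

def or_aggregate_fast(p, n):
    # Same quantity via exponentiation by squaring: O(log n) multiplications.
    base, q = 1.0 - p, 1.0
    while n > 0:
        if n & 1:
            q *= base
        base *= base
        n >>= 1
    return 1.0 - q
```

Both functions return P(c = true) for c = p1 ∨ … ∨ pn; the second needs only O(log n) multiplications, which is the flavor of speedup the abstract refers to.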
Log-Linear Description Logics
 PROCEEDINGS OF THE TWENTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE
, 2011
Abstract

Cited by 8 (6 self)
Log-linear description logics are a family of probabilistic logics integrating various concepts and methods from the areas of knowledge representation and reasoning and statistical relational AI. We define the syntax and semantics of log-linear description logics, describe a convenient representation as sets of first-order formulas, and discuss computational and algorithmic aspects of probabilistic queries in the language. The paper concludes with an experimental evaluation of an implementation of a log-linear DL reasoner.
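As a rough illustration of the log-linear semantics this abstract refers to (a world's weight is proportional to the exponentiated sum of the weights of the formulas it satisfies), here is a brute-force query evaluator. Representing formulas as Python predicates is our own simplification, not the paper's description-logic syntax:

```python
import math
from itertools import product

def world_weight(weighted_formulas, world):
    # Unnormalized weight: exp(sum of weights of formulas satisfied in world).
    return math.exp(sum(w for w, f in weighted_formulas if f(world)))

def query(weighted_formulas, atoms, q):
    # P(q) by brute-force enumeration of all truth assignments to `atoms`.
    num = den = 0.0
    for values in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, values))
        wgt = world_weight(weighted_formulas, world)
        den += wgt
        if q(world):
            num += wgt
    return num / den
```

For example, a single formula `(2.0, lambda w: not w["Bird"] or w["Flies"])` (a soft "birds fly" rule) pushes `query(..., ["Bird", "Flies"], lambda w: w["Flies"])` above 1/2. Real reasoners avoid this exponential enumeration, which is part of what the paper's algorithmic discussion is about.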
ILP turns 20: Biography and future challenges
 MACH LEARN
, 2011
Abstract

Cited by 7 (6 self)
Inductive Logic Programming (ILP) is an area of Machine Learning which has now reached its twentieth year. Using the analogy of a human biography, this paper recalls the development of the subject from its infancy through childhood and teenage years. We show how in each phase ILP has been characterised by an attempt to extend theory and implementations in tandem with the development of novel and challenging real-world applications. Lastly, by projection we suggest directions for research which will help the subject come of age.
Query Answering in Belief Logic Programming
Abstract

Cited by 2 (2 self)
In this paper we introduce a fixpoint semantics for quantitative logic programming, which is able to both combine and correlate evidence from different sources of information. Based on this semantics, we develop efficient algorithms that can answer queries for non-ground programs with the help of an SLD-like procedure. We also analyze the computational complexity of the algorithms and illustrate their uses.
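To make the notion of a fixpoint semantics concrete, here is a toy quantitative consequence operator iterated to a fixpoint. The particular combination (min over a rule body, max across rules deriving the same atom) is an assumption for illustration only, not BLP's belief combination functions:

```python
def fixpoint(rules, facts, tol=1e-9):
    """Iterate a toy quantitative consequence operator to a fixpoint.

    rules: list of (head, body_atoms, factor); a rule contributes
    factor * min(certainty of its body atoms), and contributions to the
    same head are combined with max (an illustrative choice, not BLP's).
    facts: dict mapping base atoms to initial certainties."""
    cert = dict(facts)
    while True:
        new = dict(cert)
        for head, body, factor in rules:
            if all(a in cert for a in body):
                v = factor * min(cert[a] for a in body) if body else factor
                new[head] = max(new.get(head, 0.0), v)
        if all(abs(new[k] - cert.get(k, 0.0)) < tol for k in new):
            return new  # no certainty changed: a fixpoint is reached
        cert = new
```

With rules `b :- a` (factor 0.8) and `c :- b` (factor 0.5) and fact `a` at certainty 1.0, the iteration stabilizes at certainty 0.8 for `b` and 0.4 for `c`.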
Belief logic programming: Uncertainty reasoning with correlation of evidence
 In Intl. Conf. on Logic Programming and Nonmonotonic Reasoning (LPNMR)
, 2009
Abstract

Cited by 2 (2 self)
Belief Logic Programming (BLP) is a novel form of quantitative logic programming in the presence of uncertain and inconsistent information, designed to combine and correlate evidence obtained from non-independent information sources. BLP has non-monotonic semantics based on the concept of belief combination functions and is inspired by the Dempster-Shafer theory of evidence. Most importantly, unlike previous efforts to integrate uncertainty and logic programming, BLP can correlate structural information contained in rules and provides more accurate certainty estimates. The results are illustrated via simple, yet realistic examples of rule-based Web service integration.
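Since BLP is inspired by the Dempster-Shafer theory of evidence, a small sketch of Dempster's classical rule of combination may help fix intuitions. This is the standard textbook rule, which assumes independent sources; it is precisely not BLP's own correlation-aware combination:

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions whose focal elements are
    frozensets of hypotheses. Mass falling on empty intersections
    (conflict) is discarded and the rest renormalized."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}
```

For instance, combining masses {r: 0.6, {r, s}: 0.4} and {r: 0.5, s: 0.5} yields mass 5/7 on r after renormalizing away 0.3 of conflicting mass.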
Programs from Interpretations
, 2010
Abstract
ProbLog is a recently introduced probabilistic extension of the logic programming language Prolog, in which facts can be annotated with the probability that they hold. The advantage of this probabilistic language is that it naturally expresses a generative process over interpretations using a declarative model. Interpretations are relational descriptions, or possible worlds. In this paper, a novel parameter estimation algorithm, CoPrEM, for learning ProbLog programs from partial interpretations is introduced. The algorithm is essentially a Soft-EM algorithm that computes binary decision diagrams for each interpretation, allowing a dynamic programming approach to be implemented. The CoPrEM algorithm has been experimentally evaluated on a number of data sets, which validate the approach and show its effectiveness.
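The generative process over possible worlds that ProbLog's probabilistic facts define can be illustrated by brute-force enumeration. The `prove` callback standing in for logical inference is our simplification; real ProbLog avoids this exponential enumeration (the abstract mentions binary decision diagrams for exactly this reason):

```python
from itertools import product

def problog_query(prob_facts, prove, query_atom):
    """Success probability of `query_atom`: enumerate each truth
    assignment to the probabilistic facts (a possible world), weight it
    by the product of fact probabilities, and sum the weights of worlds
    in which `prove(world)` derives the query."""
    names = list(prob_facts)
    total = 0.0
    for values in product([False, True], repeat=len(names)):
        world = dict(zip(names, values))
        weight = 1.0
        for n in names:
            weight *= prob_facts[n] if world[n] else 1.0 - prob_facts[n]
        if query_atom in prove(world):
            total += weight
    return total
```

For a program like `0.1::burglary. 0.2::earthquake. alarm :- burglary. alarm :- earthquake.` this computes P(alarm) = 1 - 0.9 * 0.8 = 0.28.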
AILog User Manual Version 2.3
, 2008
Abstract
This manual describes AILog (formerly CILog), a simple representation and reasoning system based on the books Artificial Intelligence: Foundations of Computational Agents [Poole and Mackworth, 2009] and Computational Intelligence: A Logical Approach [Poole, Mackworth, and Goebel, 1998], and the Independent Choice Logic [Poole, 2008] for the probabilistic reasoning. AILog provides:
• a definite clause representation and reasoning system
• a simple tell-ask user interface, where the user can tell the system facts and ask questions of the system
• explanation facilities to explain how a goal was proved, why an answer couldn’t be found, why a question was asked, why an error-producing goal was called, and why the depth bound was reached
• knowledge-level debugging tools that let the user debug incorrect answers, missing answers, why the system asks a question, system errors, and possible infinite loops
• depth-bounded search, which can be used to investigate potential infinite loops and to build an iterative-deepening search procedure
Brain and Cognitive Sciences
Abstract
We describe a general method of transforming arbitrary programming languages into probabilistic programming languages with straightforward MCMC inference engines. Random choices in the program are “named” with information about their position in an execution trace; these names are used in conjunction with a database holding values of random variables to implement MCMC inference in the space of execution traces. We encode naming information using lightweight source-to-source compilers. Our method enables us to reuse existing infrastructure (compilers, profilers, etc.) with minimal additional code, implying fast models with low development overhead. We illustrate the technique on two languages, one functional and one imperative: Bher, a compiled version of the Church language which eliminates the interpretive overhead of the original MIT-Church implementation, and Stochastic Matlab, a new open-source language.
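The core trick described here — naming each random choice by its position in the execution trace and replaying choices from a database — can be sketched in a few lines. The naming scheme and the `flip` primitive below are illustrative assumptions, not the paper's source-to-source transformation:

```python
import random

class TraceDB:
    """Database of random choices keyed by structural names; rerunning a
    program against the same database reproduces the same trace."""
    def __init__(self):
        self.db = {}

    def flip(self, name, p):
        # Draw a fresh Bernoulli(p) value only the first time `name` is seen;
        # later lookups under the same name reuse the stored value.
        if name not in self.db:
            self.db[name] = random.random() < p
        return self.db[name]

def model(trace):
    # The names encode each choice's position in the execution trace.
    a = trace.flip("model/1/a", 0.5)
    b = trace.flip("model/2/b", 0.5) if a else False
    return a, b

trace = TraceDB()
first = model(trace)
assert model(trace) == first  # replay from the database is deterministic
```

An MCMC step would then resample one named entry in `db`, rerun the program, and accept or reject the new trace; keying by trace position is what lets the old and new executions share the random choices they have in common.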
Probabilistic programs, computability, and de Finetti measures
Abstract
The complexity of probabilistic models, especially those involving recursion, has far exceeded the representational capacity of graphical models. Functional programming languages with probabilistic choice operators have recently been proposed as universal representations for statistical modeling (e.g., IBAL [Pfe01], λ◦ [PPT08], Church [GMR+08]). The conditional independence structure of a probabilistic program is not, in general, representable by a graphical model. Rather, it is dynamic and is given by the random control and data flow of the program. These functional probabilistic languages are allied with imperative probabilistic languages (e.g., Infer.NET) and a similar tradition of augmenting logical representations with probabilistic quantifiers (e.g., BLOG [MMR+05], …
Belief Logic Programming and its Extensions
, 2009
Abstract
Belief Logic Programming (BLP) is a novel form of quantitative logic programming in the presence of uncertain and inconsistent information, designed to combine and correlate evidence obtained from non-independent information sources. BLP has non-monotonic semantics based on the concepts of belief combination functions and the Dempster-Shafer theory of evidence. Most importantly, unlike previous efforts to integrate uncertainty and logic programming, BLP can correlate structural information contained in rules and provides more accurate certainty estimates. Declarative semantics is provided, as well as query evaluation algorithms. BLP is also extended to programs with cycles and to correlated base facts. The results are illustrated via simple, yet realistic examples of rule-based Web service integration.