Results 1 - 7 of 7
Markov logic in infinite domains
In Proc. UAI-07, 2007
Abstract
Cited by 20 (7 self)
Combining first-order logic and probability has long been a goal of AI. Markov logic (Richardson & Domingos, 2006) accomplishes this by attaching weights to first-order formulas and viewing them as templates for features of Markov networks. Unfortunately, it does not have the full power of first-order logic, because it is only defined for finite domains. This paper extends Markov logic to infinite domains, by casting it in the framework of Gibbs measures (Georgii, 1988). We show that a Markov logic network (MLN) admits a Gibbs measure as long as each ground atom has a finite number of neighbors. Many interesting cases fall in this category. We also show that an MLN admits a unique measure if the weights of its non-unit clauses are small enough. We then examine the structure of the set of consistent measures in the non-unique case. Many important phenomena, including systems with phase transitions, are represented by MLNs with non-unique measures. We relate the problem of satisfiability in first-order logic to the properties of MLN measures, and discuss how Markov logic relates to previous infinite models.
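The finite-domain construction the abstract builds on can be sketched in a few lines. This is a minimal toy, not the authors' implementation: the formula, constants, and weight are all hypothetical. It grounds one weighted first-order formula over a two-constant domain and scores a world as exp(weight times the number of satisfied groundings), normalized by the partition function Z.

```python
# Toy Markov logic sketch (hypothetical formula and weight, not from the
# paper): weighted formula Smokes(x) => Cancer(x) over a finite domain.
import itertools
import math

domain = ["Anna", "Bob"]   # hypothetical constants
weight = 1.5               # weight attached to the formula

def formula_satisfied(world, person):
    # Smokes(p) => Cancer(p) fails only when Smokes is true and Cancer false.
    return (not world[("Smokes", person)]) or world[("Cancer", person)]

def unnormalized_prob(world):
    # Each grounding of the formula is one feature of the Markov network;
    # a world's unnormalized weight is exp(sum of weights of satisfied groundings).
    n_true = sum(formula_satisfied(world, p) for p in domain)
    return math.exp(weight * n_true)

# Enumerate all truth assignments to the ground atoms to compute Z.
atoms = [(pred, p) for pred in ("Smokes", "Cancer") for p in domain]
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]
Z = sum(unnormalized_prob(w) for w in worlds)

w0 = {("Smokes", "Anna"): True, ("Cancer", "Anna"): True,
      ("Smokes", "Bob"): False, ("Cancer", "Bob"): False}
print(unnormalized_prob(w0) / Z)  # probability of this particular world
```

The paper's point is precisely that this enumeration breaks down when the domain is infinite, which is why the Gibbs-measure machinery is needed.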
Model-theoretic expressivity analysis
 In Probabilistic Inductive Logic Programming
Abstract
Cited by 5 (1 self)
In the preceding chapter the problem of comparing languages was considered from a behavioral perspective. In this chapter we develop an alternative, model-theoretic approach. In this approach we compare the expressiveness of probabilistic-logic (pl)
Compatibility formalization between PR-OWL and OWL
In Proceedings of the First International Workshop on Uncertainty in Description Logics (UniDL) at the Federated Logic Conference (FLoC) 2010, 2010
Abstract
Cited by 3 (1 self)
Abstract. As stated in [5], a major design goal for PR-OWL was to attain compatibility with OWL. However, this goal has so far been only partially achieved, primarily due to several key issues not fully addressed in the original work. This paper describes several important issues of compatibility between PR-OWL and OWL, and suggests approaches to deal with them. To illustrate the issues and how they can be addressed, we use procurement fraud as an example application domain [2]. First, we describe the lack of mapping between PR-OWL random variables (RVs) and the concepts defined in OWL, and then show how this mapping can be done. Second, we describe PR-OWL’s lack of compatibility with existing types already present in OWL, and then show how every type defined in PR-OWL can be directly mapped to concepts already present in OWL.
Epistemic and Statistical Probabilistic
Abstract
Abstract. We present DISPONTE, a semantics for probabilistic ontologies that is based on the distribution semantics for probabilistic logic programs. In DISPONTE the axioms of a probabilistic ontology can be annotated with an epistemic or a statistical probability. The epistemic probability represents a degree of confidence in the axiom, while the statistical probability considers the populations to which the axiom is applied.
Probabilistic Ontologies in Datalog+/-
Abstract
Abstract. In logic programming the distribution semantics is one of the most popular approaches for dealing with uncertain information. In this paper we apply the distribution semantics to the Datalog+/- language, which is grounded in logic programming and allows tractable ontology querying. In the resulting semantics, called DISPONTE, formulas of a probabilistic ontology can be annotated with an epistemic or a statistical probability. The epistemic probability represents a degree of confidence in the formula, while the statistical probability considers the populations to which the formula is applied. The probability of a query is defined in terms of a finite set of finite explanations for the query. We also compare the DISPONTE approach for Datalog+/- ontologies with that of Probabilistic Datalog+/-, where an ontology is composed of a Datalog+/- theory whose formulas are associated to an assignment of values for the random variables of a companion Markov Logic Network.
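The distribution semantics the abstract refers to can be illustrated with a small sketch. The axiom names and probabilities below are hypothetical and the encoding is plain Python, not the paper's Datalog+/- syntax: each annotated axiom is treated as an independent Boolean choice, and the query's probability is the total probability mass of the worlds (subsets of axioms) that entail it.

```python
# Toy distribution-semantics sketch (hypothetical axioms, not DISPONTE syntax):
# enumerate all worlds and sum the probability of those entailing the query.
import itertools

# Each probabilistic axiom is included independently with its probability.
prob_axioms = {"bird(tweety)": 0.9, "flies_if_bird": 0.8}

def entails_query(included):
    # In this toy theory, flies(tweety) needs both axioms to be present.
    return "bird(tweety)" in included and "flies_if_bird" in included

names = list(prob_axioms)
p_query = 0.0
for mask in itertools.product([False, True], repeat=len(names)):
    included = {n for n, keep in zip(names, mask) if keep}
    w = 1.0
    for n, keep in zip(names, mask):
        w *= prob_axioms[n] if keep else 1.0 - prob_axioms[n]
    if entails_query(included):
        p_query += w

print(p_query)  # only the world with both axioms entails the query
```

Explanations make this tractable in practice: instead of enumerating all worlds, one sums (with inclusion-exclusion or knowledge compilation) over the finite set of explanations mentioned in the abstract.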
The Reusable Symbol Problem: A position paper for NeSy’08
Abstract
Abstract. Examining the major differences between how traditional programs compute and our current understanding of how brains compute, I see only one key gap in our ability to carry out logical reasoning within neural networks using standard methods. I refer to this gap as the reusable symbol problem: How can neural systems use multiply-instantiatable symbols to represent arbitrary objects? This problem is fundamental and cannot be readily decomposed into simpler problems. Solving this problem would solve many individual problems such as the problem of representing relations between objects represented as neural activation patterns, the problem of implementing grammars in neural networks, and the well-known binding problem [3] for neural models. In this paper I discuss the use of reusable symbols and I give a concrete, simple canonical example of the reusable symbol problem.