Results 1–10 of 30
Markov Logic Networks
Machine Learning, 2006
Abstract

Cited by 569 (34 self)
We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach.
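The grounding-and-weighting scheme described in this abstract can be made concrete with a toy sketch. The predicates, constants, and weight below are invented for illustration; the sketch grounds one weighted clause, Smokes(x) ⇒ Cancer(x), over two constants and assigns each possible world probability proportional to exp(w · n(world)), where n counts satisfied groundings:

```python
import itertools
import math

# Hypothetical toy MLN: one weighted first-order clause grounded over two
# constants. Names and the weight are illustrative, not from the paper.
constants = ["Anna", "Bob"]
w = 1.5  # weight of the clause ¬Smokes(x) ∨ Cancer(x)

def groundings(world):
    """Count satisfied groundings of ¬Smokes(x) ∨ Cancer(x).
    A world maps ground atoms like ('Smokes', 'Anna') to True/False."""
    return sum(
        1 for c in constants
        if (not world[("Smokes", c)]) or world[("Cancer", c)]
    )

atoms = [(p, c) for p in ("Smokes", "Cancer") for c in constants]

def all_worlds():
    # Enumerate every truth assignment to the ground atoms.
    for values in itertools.product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, values))

# Partition function Z: sum of unnormalized world weights.
Z = sum(math.exp(w * groundings(wld)) for wld in all_worlds())

def prob(world):
    """P(world) = exp(w * n(world)) / Z."""
    return math.exp(w * groundings(world)) / Z
```

Worlds violating more groundings of the clause receive exponentially lower probability, which is the sense in which the weight softens the hard logical constraint.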
Markov Logic: A Unifying Framework for Statistical Relational Learning
Proceedings of the ICML-2004 Workshop on Statistical Relational Learning and its Connections to Other Fields, 2004
Abstract

Cited by 75 (0 self)
Interest in statistical relational learning (SRL) has grown rapidly in recent years. Several key SRL tasks have been identified, and a large number of approaches have been proposed. Increasingly, a …
Approximate Inference for First-Order Probabilistic Languages
In Proc. International Joint Conference on Artificial Intelligence, 2001
Abstract

Cited by 47 (3 self)
A new, general approach is described for approximate inference in first-order probabilistic languages, using Markov chain Monte Carlo (MCMC) techniques in the space of concrete possible worlds underlying any given knowledge base. The simplicity of the approach and its lazy construction of possible worlds make it possible to consider quite expressive languages. In particular, we consider two extensions to the basic relational probability models (RPMs) defined by Koller and Pfeffer, both of which have caused difficulties for exact algorithms. The first extension deals with uncertainty about relations among objects, where MCMC samples over relational structures. The second extension deals with uncertainty about the identity of individuals, where MCMC samples over sets of equivalence classes of objects. In both cases, we identify types of probability distributions that allow local decomposition of inference while encoding possible domains in a plausible way. We apply our algorithms to simple examples and show that the MCMC approach scales well.
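The core loop of MCMC over concrete possible worlds can be sketched as a Metropolis sampler that proposes flipping one ground atom at a time. The atoms and the score function below are invented for illustration and do not reproduce the paper's RPM extensions:

```python
import math
import random

# Sketch: Metropolis sampling over "possible worlds", i.e. truth
# assignments to a small set of boolean ground atoms. The atoms and the
# scoring rule are hypothetical.
ATOMS = ["p", "q", "r"]

def log_score(world):
    """Unnormalized log-probability: reward worlds where p implies q."""
    return 1.0 if (not world["p"]) or world["q"] else 0.0

def mcmc(n_samples, rng):
    world = {a: False for a in ATOMS}
    samples = []
    for _ in range(n_samples):
        # Propose flipping one randomly chosen atom.
        atom = rng.choice(ATOMS)
        proposal = dict(world)
        proposal[atom] = not proposal[atom]
        # Metropolis acceptance: always accept uphill moves,
        # accept downhill moves with probability exp(delta).
        delta = log_score(proposal) - log_score(world)
        if delta >= 0 or rng.random() < math.exp(delta):
            world = proposal
        samples.append(dict(world))
    return samples
```

With this score, the stationary distribution gives each of the six worlds satisfying p ⇒ q weight e and each of the two violating worlds weight 1, so long-run sample frequencies can be checked against 6e/(6e + 2).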
Logical Bayesian Networks and their relation to other probabilistic logical models
In Proceedings of the 15th International Conference on Inductive Logic Programming (ILP-05), volume 3625 of Lecture Notes in Artificial Intelligence, 2005
Abstract

Cited by 26 (7 self)
We review Logical Bayesian Networks, a language for probabilistic logical modelling, and discuss its relation to Probabilistic Relational Models and Bayesian Logic Programs. Probabilistic logical models are models combining aspects of probability theory with aspects of Logic Programming, first-order logic or relational languages. Recently a variety of languages to describe such models has been introduced. For some languages techniques exist to learn such models from data. Two examples are Probabilistic Relational Models (PRMs) [4] and Bayesian Logic Programs (BLPs) [5]. These two languages are probably the most popular and well-known in the Relational Data Mining community. We introduce a new language, Logical Bayesian Networks (LBNs) [2], that is strongly related to PRMs and BLPs yet solves some of their problems with respect to knowledge representation (related to expressiveness and intuitiveness). PRMs, BLPs and LBNs all follow the principle of Knowledge Based Model Construction: they offer a language that can be used to specify general probabilistic logical knowledge and they provide a methodology to construct a propositional model based on this knowledge when given a specific …
Naive Bayesian Classification of Structured Data
2003
Abstract

Cited by 21 (0 self)
In this paper we present 1BC and 1BC2, two systems that perform naive Bayesian classification of structured individuals. The approach of 1BC is to project the individuals along first-order features. These features are built from the individual using structural predicates referring to related objects (e.g. atoms within molecules), and properties applying to the individual or one or several of its related objects (e.g. a bond between two atoms). We describe an individual in terms of elementary features consisting of zero or more structural predicates and one property; these features are treated as conditionally independent in the spirit of the naive Bayes assumption. 1BC2 represents an alternative first-order upgrade to the naive Bayesian classifier by considering probability distributions over structured objects (e.g., a molecule as a set of atoms), and estimating those distributions from the probabilities of their elements (which are assumed to be independent). We present a unifying view on both systems in which 1BC works in language space, and 1BC2 works in individual space. We also present a new, efficient recursive algorithm improving upon the original propositionalisation approach of 1BC. Both systems have been implemented in the context of the first-order descriptive learner Tertius, and we investigate the differences between the two systems both in computational terms and on artificially generated data. Finally, we describe a range of experiments on ILP benchmark data sets demonstrating the viability of our approach.
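The naive Bayes treatment of elementary boolean features that this abstract describes can be sketched as follows. The training interface and the Laplace smoothing are our own simplifications for illustration, not the 1BC implementation:

```python
import math
from collections import defaultdict

# Sketch: naive Bayes over boolean features, where each individual is
# described by the set of elementary features that hold for it, and
# features are treated as conditionally independent given the class.

def train(examples):
    """examples: list of (class_label, set_of_features)."""
    class_counts = defaultdict(int)
    feat_counts = defaultdict(lambda: defaultdict(int))
    features = set()
    for label, feats in examples:
        class_counts[label] += 1
        features |= feats
        for f in feats:
            feat_counts[label][f] += 1
    return class_counts, feat_counts, features

def predict(model, feats):
    """Return the class maximizing log P(class) + sum of log P(f | class)."""
    class_counts, feat_counts, features = model
    n = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label, cnt in class_counts.items():
        lp = math.log(cnt / n)
        for f in features:
            # Laplace-smoothed conditional probability of the feature.
            p = (feat_counts[label][f] + 1) / (cnt + 2)
            lp += math.log(p if f in feats else 1 - p)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

In 1BC terms, each boolean feature would be an elementary first-order feature (structural predicates plus one property) rather than a raw attribute, but the probabilistic machinery is the same.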
Markov logic in infinite domains
In Proc. UAI-07, 2007
Abstract

Cited by 20 (8 self)
Combining first-order logic and probability has long been a goal of AI. Markov logic (Richardson & Domingos, 2006) accomplishes this by attaching weights to first-order formulas and viewing them as templates for features of Markov networks. Unfortunately, it does not have the full power of first-order logic, because it is only defined for finite domains. This paper extends Markov logic to infinite domains, by casting it in the framework of Gibbs measures (Georgii, 1988). We show that a Markov logic network (MLN) admits a Gibbs measure as long as each ground atom has a finite number of neighbors. Many interesting cases fall in this category. We also show that an MLN admits a unique measure if the weights of its non-unit clauses are small enough. We then examine the structure of the set of consistent measures in the non-unique case. Many important phenomena, including systems with phase transitions, are represented by MLNs with non-unique measures. We relate the problem of satisfiability in first-order logic to the properties of MLN measures, and discuss how Markov logic relates to previous infinite models.
Modeling Discriminative Global Inference
Proceedings of the First IEEE International Conference on Semantic Computing (ICSC), 2007
Abstract

Cited by 20 (15 self)
Many recent advances in complex domains such as Natural Language Processing (NLP) have taken a discriminative approach in conjunction with the global application of structural and domain-specific constraints. We introduce LBJ, a new modeling language for specifying exact inference systems of this type, combining ideas from machine learning, optimization, First-Order Logic (FOL), and Object-Oriented Programming (OOP). Expressive constraints are specified declaratively as arbitrary FOL formulas over functions and objects. The language's runtime library translates them to a mathematical programming representation from which an exact solution is computed. In addition, the compiler leverages an existing OOP language: objects and functions are grouped as the OOP objects and methods that encapsulate the user's data.
First-order probabilistic languages: Into the unknown
Proceedings of the 16th International Conference on Inductive Logic Programming, 2007
Abstract

Cited by 17 (0 self)
This paper surveys first-order probabilistic languages (FOPLs), which combine the expressive power of first-order logic with a probabilistic treatment of uncertainty. We provide a taxonomy that helps make sense of the profusion of FOPLs that have been proposed over the past fifteen years. We also emphasize the importance of representing uncertainty not just about the attributes and relations of a fixed set of objects, but also about what objects exist. This leads us to Bayesian logic, or BLOG, a new language for defining probabilistic models with unknown objects. We give a brief overview of BLOG syntax and semantics, and emphasize some of the design decisions that distinguish it from other languages. Finally, we consider the challenge of constructing FOPL models automatically from data.
Structured machine learning: the next ten years
2008
Abstract

Cited by 10 (2 self)
The field of inductive logic programming (ILP) has made steady progress since the first ILP workshop in 1991, based on a balance of developments in theory, implementations and applications. More recently there has been an increased emphasis on Probabilistic ILP and the related fields of Statistical Relational Learning (SRL) and Structured Prediction. The goal of the current paper is to consider these emerging trends and chart out the strategic directions and open problems for the broader area of structured machine learning for the next 10 years.
Towards learning stochastic logic programs from proof-banks
In Proc. of AAAI’05, 2005
Abstract

Cited by 7 (2 self)
Stochastic logic programs combine ideas from probabilistic grammars with the expressive power of definite clause logic; as such they can be considered as an extension of probabilistic context-free grammars. Motivated by an analogy with learning treebank grammars, we study how to learn stochastic logic programs from proof trees. Using proof trees as examples imposes strong logical constraints on the structure of the target stochastic logic program. These constraints can be integrated in the least general generalization (lgg) operator, which is employed to traverse the search space. Our implementation employs a greedy search guided by the maximum likelihood principle and failure-adjusted maximization. We also report on a number of simple experiments that show the promise of the approach.
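The least general generalization (lgg) operator mentioned in this abstract can be illustrated on first-order terms by anti-unification. The nested-tuple term representation and the variable-naming scheme below are invented for this sketch:

```python
# Sketch of anti-unification (least general generalization, lgg) for
# first-order terms. A term is either an atom (string) or a tuple
# (functor, arg1, arg2, ...). Representation is our own.

def lgg(t1, t2, subst=None):
    """Return the least general term of which t1 and t2 are instances."""
    if subst is None:
        subst = {}
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        # Same functor and arity: generalize argument-wise, sharing the
        # substitution so repeated mismatches map to the same variable.
        return (t1[0],) + tuple(lgg(a, b, subst) for a, b in zip(t1[1:], t2[1:]))
    # Incompatible terms: introduce (or reuse) a variable for this pair.
    key = (t1, t2)
    if key not in subst:
        subst[key] = f"X{len(subst)}"
    return subst[key]
```

Reusing one variable per mismatched pair is what makes the result *least* general: lgg of f(a, a) and f(b, b) is f(X0, X0), not f(X0, X1).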