Probabilistic Theorem Proving
"... Many representation schemes combining firstorder logic and probability have been proposed in recent years. Progress in unifying logical and probabilistic inference has been slower. Existing methods are mainly variants of lifted variable elimination and belief propagation, neither of which take logic ..."
Abstract

Cited by 70 (23 self)
 Add to MetaCart
(Show Context)
Many representation schemes combining first-order logic and probability have been proposed in recent years. Progress in unifying logical and probabilistic inference has been slower. Existing methods are mainly variants of lifted variable elimination and belief propagation, neither of which take logical structure into account. We propose the first method that has the full power of both graphical model inference and first-order theorem proving (in finite domains with Herbrand interpretations). We first define probabilistic theorem proving, the generalization of both, as the problem of computing the probability of a logical formula given the probabilities or weights of a set of formulas. We then show how this can be reduced to the problem of lifted weighted model counting, and develop an efficient algorithm for the latter. We prove the correctness of this algorithm, investigate its properties, and show how it generalizes previous approaches. Experiments show that it greatly outperforms lifted variable elimination when logical structure is present. Finally, we propose an algorithm for approximate probabilistic theorem proving, and show that it can greatly outperform lifted belief propagation.
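The quantity this reduction targets is weighted model counting: summing the weights of the assignments that satisfy a formula, with conditional probabilities obtained as ratios of two such counts. As a rough, non-lifted illustration of that quantity (not the paper's lifted algorithm), here is a brute-force propositional sketch; the variables, weights, and formulas are invented for the example.

# A minimal, brute-force sketch of (propositional, non-lifted) weighted
# model counting: the quantity that probabilistic theorem proving reduces
# to. All names here are illustrative, not from the paper's code.
from itertools import product

def wmc(variables, weight, formula):
    """Sum of weights of assignments satisfying `formula`.

    weight(var, value) -> weight of that literal; a full assignment's
    weight is the product of its literal weights.
    formula(assignment) -> bool.
    """
    total = 0.0
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            w = 1.0
            for var, val in assignment.items():
                w *= weight(var, val)
            total += w
    return total

# P(query | evidence) = WMC(evidence and query) / WMC(evidence).
variables = ["Smokes_A", "Smokes_B", "Friends_A_B"]
weight = lambda var, val: 1.0  # uniform weights: WMC = model count
rule = lambda a: (not (a["Friends_A_B"] and a["Smokes_A"])) or a["Smokes_B"]
evidence = lambda a: a["Smokes_A"] and rule(a)
query = lambda a: evidence(a) and a["Smokes_B"]
print(wmc(variables, weight, query) / wmc(variables, weight, evidence))

The paper's contribution is performing this computation at the lifted level, over groups of interchangeable objects, rather than enumerating assignments as this sketch does.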
Tuffy: Scaling up Statistical Inference in Markov Logic Networks using an RDBMS
, 2011
"... Markov Logic Networks (MLNs) have emerged as a powerful framework that combines statistical and logical reasoning; they have been applied to many data intensive problems including information extraction, entity resolution, and text mining. Current implementations of MLNs do not scale to large realw ..."
Abstract

Cited by 56 (9 self)
 Add to MetaCart
(Show Context)
Markov Logic Networks (MLNs) have emerged as a powerful framework that combines statistical and logical reasoning; they have been applied to many data-intensive problems including information extraction, entity resolution, and text mining. Current implementations of MLNs do not scale to large real-world data sets, which is preventing their widespread adoption. We present Tuffy, which achieves scalability via three novel contributions: (1) a bottom-up approach to grounding that allows us to leverage the full power of the relational optimizer, (2) a novel hybrid architecture that allows us to perform AI-style local search efficiently using an RDBMS, and (3) a theoretical insight that shows when one can (exponentially) improve the efficiency of stochastic local search. We leverage (3) to build novel partitioning, loading, and parallel algorithms. We show that our approach outperforms state-of-the-art implementations in both quality and speed on several publicly available datasets.
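Contribution (1), bottom-up grounding inside the RDBMS, amounts to expressing each first-order clause as a SQL join over the evidence tables, so the relational optimizer plans the grounding. A hedged sketch of that idea using sqlite3 follows; the schema and rule are illustrative, not Tuffy's actual ones.

# A sketch of bottom-up grounding inside an RDBMS, in the spirit of
# Tuffy's contribution (1): each first-order clause becomes a SQL join,
# so the relational optimizer plans the grounding. Schema and rule are
# invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE friends(x TEXT, y TEXT);
    CREATE TABLE smokes(p TEXT);
    INSERT INTO friends VALUES ('anna','bob'), ('bob','carl');
    INSERT INTO smokes VALUES ('anna');
""")

# Ground the clause  Friends(x,y) AND Smokes(x) => Smokes(y)
# by joining the evidence tables; each result row is one ground clause.
rows = conn.execute("""
    SELECT f.x, f.y FROM friends f JOIN smokes s ON f.x = s.p
""").fetchall()
for x, y in rows:
    print(f"ground clause: Friends({x},{y}) ^ Smokes({x}) => Smokes({y})")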
Counting belief propagation
 In Proc. UAI-09
, 2009
"... A major benefit of graphical models is that most knowledge is captured in the model structure. Many models, however, produce inference problems with a lot of symmetries not reflected in the graphical structure and hence not exploitable by efficient inference techniques such as belief propagation (BP ..."
Abstract

Cited by 53 (20 self)
 Add to MetaCart
(Show Context)
A major benefit of graphical models is that most knowledge is captured in the model structure. Many models, however, produce inference problems with a lot of symmetries not reflected in the graphical structure and hence not exploitable by efficient inference techniques such as belief propagation (BP). In this paper, we present a new and simple BP algorithm, called counting BP, that exploits such additional symmetries. Starting from a given factor graph, counting BP first constructs a compressed factor graph of cluster nodes and cluster factors, corresponding to sets of nodes and factors that are indistinguishable given the evidence. Then it runs a modified BP algorithm on the compressed graph that is equivalent to running BP on the original factor graph. Our experiments show that counting BP is applicable to a variety of important AI tasks such as (dynamic) relational models and Boolean model counting, and that significant efficiency gains are obtainable, often by orders of magnitude.
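The compression step can be pictured as iterative color passing: variables start with colors determined by their evidence, and colors are repeatedly refined by the colors of incident factors until the partition stabilizes. The sketch below illustrates this on a toy factor graph; all names are invented, and real counting BP also runs modified message passing on the compressed graph, which this sketch omits.

# A minimal sketch of the compression step: iteratively refine node
# "colors" so that variables that are indistinguishable given the
# evidence end up in the same cluster node.
def compress(nodes, factors, evidence):
    """nodes: variable names; factors: (factor_type, scope) pairs;
    evidence: dict mapping observed variables to values."""
    color = {v: str(evidence.get(v)) for v in nodes}          # init from evidence
    fcolor = {i: ftype for i, (ftype, _) in enumerate(factors)}
    for _ in range(len(nodes)):                               # enough rounds to stabilize
        # new variable color = old color + multiset of incident factor colors
        sig = {v: (color[v], tuple(sorted(fcolor[i]
                   for i, (_, sc) in enumerate(factors) if v in sc)))
               for v in nodes}
        ids = {s: j for j, s in enumerate(sorted(set(sig.values()), key=repr))}
        color = {v: ids[sig[v]] for v in nodes}
        # new factor color = factor type + colors of its arguments, in order
        fsig = {i: (ftype, tuple(color[v] for v in sc))
                for i, (ftype, sc) in enumerate(factors)}
        fids = {s: j for j, s in enumerate(sorted(set(fsig.values()), key=repr))}
        fcolor = {i: fids[fsig[i]] for i in fsig}
    clusters = {}
    for v in nodes:
        clusters.setdefault(color[v], []).append(v)
    return list(clusters.values())                            # each list = one cluster node

nodes = ["S_anna", "S_bob", "S_carl"]
factors = [("prior", ("S_anna",)), ("prior", ("S_bob",)), ("prior", ("S_carl",))]
print(compress(nodes, factors, {"S_anna": True}))
# -> [['S_anna'], ['S_bob', 'S_carl']]: bob and carl are indistinguishable.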
Gradient-based boosting for Statistical Relational Learning: The Relational Dependency Network Case
, 2011
"... Abstract. Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, co ..."
Abstract

Cited by 39 (17 self)
 Add to MetaCart
Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and in turn quickly estimate a very expressive model. Our experimental results on several different data sets show that this boosting method results in efficient learning of RDNs when compared to state-of-the-art statistical relational learning approaches.
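The functional-gradient idea is to represent each conditional distribution's log-odds as a sum of learned regression models, with each boosting stage fit to the pointwise gradient of the log-likelihood (the residual y - p). The paper fits relational regression trees per predicate; the propositional sketch below substitutes a one-feature stump purely to show the boosting loop.

# A propositional sketch of functional-gradient boosting for a
# conditional distribution: psi(x) is a sum of stumps, each fit to the
# residuals (y - p). The paper's relational regression trees are
# replaced here by a toy threshold stump.
import math

def fit_stump(xs, residuals):
    """Return the best single-threshold stump for 1-D inputs."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lv = sum(left) / len(left) if left else 0.0
        rv = sum(right) / len(right) if right else 0.0
        err = sum((r - (lv if x <= t else rv)) ** 2
                  for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, t, lv, rv)
    _, t, lv, rv = best
    return lambda x: lv if x <= t else rv

def boost(xs, ys, rounds=10):
    models = []
    psi = lambda x: sum(m(x) for m in models)            # current log-odds
    for _ in range(rounds):
        probs = [1 / (1 + math.exp(-psi(x))) for x in xs]
        residuals = [y - p for y, p in zip(ys, probs)]   # pointwise gradient
        models.append(fit_stump(xs, residuals))
    return psi

# toy data: y tends to be 1 when x > 0.5
xs = [0.1, 0.2, 0.4, 0.6, 0.8, 0.9]
ys = [0, 0, 0, 1, 1, 1]
psi = boost(xs, ys)
print([round(1 / (1 + math.exp(-psi(x))), 2) for x in xs])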
Lifted Probabilistic Inference by First-Order Knowledge Compilation
 In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence
, 2011
"... Probabilistic logical languages provide powerful formalisms for knowledge representation and learning. Yet performing inference in these languages is extremely costly, especially if it is done at the propositional level. Lifted inference algorithms, which avoid repeated computation by treating indis ..."
Abstract

Cited by 37 (12 self)
 Add to MetaCart
Probabilistic logical languages provide powerful formalisms for knowledge representation and learning. Yet performing inference in these languages is extremely costly, especially if it is done at the propositional level. Lifted inference algorithms, which avoid repeated computation by treating indistinguishable groups of objects as one, help mitigate this cost. Seeking inspiration from logical inference, where lifted inference (e.g., resolution) is commonly performed, we develop a model-theoretic approach to probabilistic lifted inference. Our algorithm compiles a first-order probabilistic theory into a first-order deterministic decomposable negation normal form (d-DNNF) circuit. Compilation offers the advantage that inference is polynomial in the size of the circuit. Furthermore, by borrowing techniques from the knowledge compilation literature, our algorithm effectively exploits the logical structure (e.g., context-specific independencies) within the first-order model, which allows more computation to be done at the lifted level. An empirical comparison demonstrates the utility of the proposed approach.
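The payoff of compilation is that, once the circuit is built, weighted model counting is a single bottom-up pass: AND nodes multiply (decomposability guarantees their children share no variables) and OR nodes add (determinism guarantees their disjuncts are mutually exclusive). The propositional sketch below shows that pass; the paper's first-order circuits additionally contain lifted nodes whose values are raised to domain-size powers, which this sketch omits.

# A minimal sketch of weighted model counting over a d-DNNF circuit:
# one bottom-up pass, multiplying at AND nodes and adding at OR nodes.
def eval_circuit(node, weight):
    kind = node[0]
    if kind == "lit":                       # ("lit", var, polarity)
        return weight[(node[1], node[2])]
    if kind == "and":                       # decomposable conjunction
        result = 1.0
        for child in node[1:]:
            result *= eval_circuit(child, weight)
        return result
    if kind == "or":                        # deterministic disjunction
        return sum(eval_circuit(child, weight) for child in node[1:])
    raise ValueError(kind)

# circuit for (A or B), Shannon-expanded on A so the OR is deterministic:
# (A and (B or not B)) or (not A and B)
circuit = ("or",
           ("and", ("lit", "A", True),
                   ("or", ("lit", "B", True), ("lit", "B", False))),
           ("and", ("lit", "A", False), ("lit", "B", True)))
weight = {("A", True): 0.3, ("A", False): 0.7,
          ("B", True): 0.6, ("B", False): 0.4}
print(eval_circuit(circuit, weight))        # 0.3*1.0 + 0.7*0.6 = 0.72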
Max-margin weight learning for Markov logic networks
 In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD-09), Bled
, 2009
"... Abstract. Markov logic networks (MLNs) are an expressive representation for statistical relational learning that generalizes both firstorder logic and graphical models. Existing discriminative weight learning methods for MLNs all try to learn weights that optimize the Conditional Log Likelihood (CL ..."
Abstract

Cited by 30 (5 self)
 Add to MetaCart
(Show Context)
Markov logic networks (MLNs) are an expressive representation for statistical relational learning that generalizes both first-order logic and graphical models. Existing discriminative weight learning methods for MLNs all try to learn weights that optimize the Conditional Log Likelihood (CLL) of the training examples. In this work, we present a new discriminative weight learning method for MLNs based on a max-margin framework. This results in a new model, Max-Margin Markov Logic Networks (M3LNs), that combines the expressiveness of MLNs with the predictive accuracy of structural Support Vector Machines (SVMs). To train the proposed model, we design a new approximation algorithm for loss-augmented inference in MLNs based on Linear Programming (LP). Experimental results show that the proposed approach generally achieves higher F1 scores than the current best discriminative weight learner for MLNs.
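The separation step in max-margin training is loss-augmented inference: finding the labeling that maximizes model score plus loss against the true labeling. The paper approximates this with an LP relaxation over ground clauses; the exhaustive toy sketch below only illustrates the objective being maximized, with invented features and weights.

# A brute-force sketch of the separation step in max-margin training:
# find the most violating assignment argmax_y [w . f(x, y) + loss(y, y_true)].
from itertools import product

def most_violating(y_true, features, w, labels=(0, 1)):
    """features(y) -> feature vector; loss = Hamming distance to y_true."""
    best, best_score = None, float("-inf")
    for y in product(labels, repeat=len(y_true)):
        score = (sum(wi * fi for wi, fi in zip(w, features(y)))
                 + sum(a != b for a, b in zip(y, y_true)))   # Hamming loss
        if score > best_score:
            best, best_score = y, score
    return best, best_score

# toy "MLN": two query atoms, features = (# atoms true, atoms agree?)
features = lambda y: (sum(y), float(y[0] == y[1]))
y_star, score = most_violating(y_true=(1, 1), features=features, w=[0.5, 1.0])
print(y_star, score)   # -> (0, 0) with score 3.0: the worst violator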
Speeding up inference in Markov logic networks by preprocessing to reduce the size of the resulting grounded network
 In Proc. IJCAI-09
"... Statisticalrelational reasoning has received much attention due to its ability to robustly model complex relationships. A key challenge is tractable inference, especially in domains involving many objects, due to the combinatorics involved. One can accelerate inference by using approximation techni ..."
Abstract

Cited by 28 (2 self)
 Add to MetaCart
Statistical-relational reasoning has received much attention due to its ability to robustly model complex relationships. A key challenge is tractable inference, especially in domains involving many objects, due to the combinatorics involved. One can accelerate inference by using approximation techniques, "lazy" algorithms, etc. We consider Markov Logic Networks (MLNs), which involve counting how often logical formulae are satisfied. We propose a preprocessing algorithm that can substantially reduce the effective size of MLNs by rapidly counting how often the evidence satisfies each formula, regardless of the truth values of the query literals. This is a general preprocessing method that loses no information and can be used for any MLN inference algorithm. We evaluate our algorithm empirically in three real-world domains, greatly reducing the work needed during subsequent inference. Such reduction might even allow exact inference to be performed when sampling methods would otherwise be necessary.
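The core observation is that a ground clause already satisfied by the evidence alone can be discarded before inference, since its truth value no longer depends on any query atom. A toy sketch, with an invented domain, clause, and evidence:

# A sketch of the preprocessing idea: drop groundings that the evidence
# already satisfies, keeping only those that depend on query atoms.
from itertools import product

domain = ["anna", "bob", "carl"]
friends = {("anna", "bob")}            # evidence: the Friends relation
# clause: Friends(x,y) => Smokes(y), i.e. !Friends(x,y) v Smokes(y)
kept = []
for x, y in product(domain, repeat=2):
    if (x, y) not in friends:
        continue                       # !Friends(x,y) is true: satisfied, drop
    kept.append(("Smokes", y))         # surviving groundings mention only query atoms
print(f"kept {len(kept)} of {len(domain)**2} groundings:", kept)
# -> kept 1 of 9 groundings: the rest never reach the grounded network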
Bisimulation-based approximate lifted inference
"... There has been a great deal of recent interest in methods for performing lifted inference; however, most of this work assumes that the firstorder model is given as input to the system. Here, we describe lifted inference algorithms that determine symmetries and automatically lift the probabilistic m ..."
Abstract

Cited by 25 (2 self)
 Add to MetaCart
(Show Context)
There has been a great deal of recent interest in methods for performing lifted inference; however, most of this work assumes that the first-order model is given as input to the system. Here, we describe lifted inference algorithms that determine symmetries and automatically lift the probabilistic model to speed up inference. In particular, we describe approximate lifted inference techniques that allow the user to trade off inference accuracy for computational efficiency by using a handful of tunable parameters, while keeping the error bounded. Our algorithms are closely related to the graph-theoretic concept of bisimulation. We report experiments on both synthetic and real data to show that in the presence of symmetries, runtimes for inference can be improved significantly, with approximate lifted inference providing orders-of-magnitude speedup over ground inference.
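Exact bisimulation can be computed by partition refinement: start from a partition by node label and split any block whose members see different multisets of successor blocks. The sketch below shows the exact version on a toy graph; the paper's approximate variants coarsen this partition under tunable parameters, which is not shown.

# A minimal sketch of bisimulation-style partition refinement; the
# stable partition is the quotient that lifted inference can exploit.
def bisimulation(nodes, label, succ):
    block = dict(label)                                  # initial partition by label
    while True:
        sig = {v: (block[v], tuple(sorted(block[u] for u in succ.get(v, ()))))
               for v in nodes}
        ids = {s: i for i, s in enumerate(sorted(set(sig.values()), key=repr))}
        new = {v: ids[sig[v]] for v in nodes}
        if len(set(new.values())) == len(set(block.values())):
            return new                                   # no block was split: stable
        block = new

nodes = ["a", "b", "c", "d"]
label = {"a": 0, "b": 0, "c": 0, "d": 1}                 # d is distinguished
succ = {"a": ["d"], "b": ["d"], "c": []}                 # a, b point at d; c at nothing
print(bisimulation(nodes, label, succ))
# -> a and b share a block; c and d each get their own block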
Computing query probability with incidence algebras
 In PODS
"... We describe an algorithm that evaluates queries over probabilistic databases using Mobius ’ inversion formula in incidence algebras. The queries we consider are unions of conjunctive queries (equivalently: existential, positive First Order sentences), and the probabilistic databases are tupleindepe ..."
Abstract

Cited by 25 (9 self)
 Add to MetaCart
(Show Context)
We describe an algorithm that evaluates queries over probabilistic databases using Möbius' inversion formula in incidence algebras. The queries we consider are unions of conjunctive queries (equivalently: existential, positive first-order sentences), and the probabilistic databases are tuple-independent structures. Our algorithm runs in PTIME on a subset of queries called "safe" queries, and is complete, in the sense that every unsafe query is hard for the class FP^#P. The algorithm is very simple and easy to implement in practice, yet it is non-obvious. Möbius' inversion formula, which is in essence inclusion-exclusion, plays a key role for completeness, by allowing the algorithm to compute the probability of some safe queries even when they have some subqueries that are unsafe. We also apply the same lattice-theoretic techniques to analyze an algorithm based on lifted conditioning, and prove that it is incomplete.
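On a tuple-independent database, the probability of a union of queries combines subquery probabilities by inclusion-exclusion, of which Möbius inversion is the general lattice form. The toy sketch below treats each Boolean query as a set of independent tuples that must all be present, which is far simpler than real safe-query evaluation but shows how the alternating sum works.

# A toy illustration of the inclusion-exclusion step on a
# tuple-independent database: each Boolean query here is just a set of
# tuples that must all be present, so P(conjunction) is the product over
# the union of the sets.
from itertools import combinations

def p_conj(queries, prob):
    """P(all queries hold) = product over the union of their tuple sets."""
    needed = set().union(*queries)
    result = 1.0
    for t in needed:
        result *= prob[t]
    return result

def p_union(queries, prob):
    """P(Q1 or ... or Qn) by inclusion-exclusion over nonempty subsets."""
    total = 0.0
    for k in range(1, len(queries) + 1):
        for subset in combinations(queries, k):
            total += (-1) ** (k + 1) * p_conj(subset, prob)
    return total

prob = {"R(a)": 0.5, "R(b)": 0.5, "S(a)": 0.8}
q1 = frozenset({"R(a)", "S(a)"})      # "R(a) and S(a) both present"
q2 = frozenset({"R(b)", "S(a)"})
print(p_union([q1, q2], prob))        # 0.4 + 0.4 - 0.2 = 0.6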