### First-Order Model Counting in a Nutshell*

"... Abstract First-order model counting recently emerged as a computational tool for high-level probabilistic reasoning. It is concerned with counting satisfying assignments to sentences in first-order logic and upgrades the successful propositional model counting approaches to probabilistic reasoning. ..."

Abstract

First-order model counting recently emerged as a computational tool for high-level probabilistic reasoning. It is concerned with counting the satisfying assignments of sentences in first-order logic and upgrades the successful propositional model counting approaches to probabilistic reasoning. We give an overview of model counting as it is applied in statistical relational learning, probabilistic programming, databases, and hybrid reasoning. A short tutorial illustrates the principles behind these solvers. Finally, we show that first-order counting is a fundamentally different problem from the propositional counting techniques that inspired it.

\* Companion paper for the IJCAI Early Career Spotlight track.

**Why First-Order Model Counting?**

Model counting comes in many flavors, depending on the form of logic being used. In propositional model counting, or #SAT, one counts the number of assignments ω that satisfy a propositional sentence ∆, denoted ω ⊨ ∆. Weighted model counting (WMC) additionally assigns a weight to each satisfying assignment. The popularity of WMC can be explained as follows: its formulation elegantly decouples the logical or symbolic representation from the statistical or numeric representation, which is encapsulated in the weight function. This is a separation of concerns that is quite familiar in AI:

    Prob. Distribution = Qualitative + Quantitative
    Bayesian network   = DAG + CPTs
    Factor graph       = Bipartite graph + Potentials

Weighted model counting takes this to another level: any logical language can now specify the qualitative, structural properties of what constitutes a model. Independently, the weight function quantifies how probable each model is in the probability space.

    Prob. Distribution = Logic sentence + Weights
    WMC                = SAT formula + Weights

Formally, we have the following.

**Definition 1 (Weighted Model Count).** The WMC of a sentence ∆ in propositional logic over literals L, and a weight function w : L → ℝ, is defined as

    WMC(∆, w) = Σ_{ω ⊨ ∆} ∏_{l ∈ ω} w(l).
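Definition 1 can be sketched as a brute-force enumeration over truth assignments. The helper names, the example sentence, and the weights below are illustrative assumptions, not from the paper; note that with all weights set to 1, WMC reduces to plain #SAT.

```python
from itertools import product

def wmc(variables, sentence, w):
    """Sum over satisfying assignments ω of the product of literal weights."""
    total = 0.0
    for values in product([False, True], repeat=len(variables)):
        omega = dict(zip(variables, values))
        if sentence(omega):                    # ω ⊨ ∆
            weight = 1.0
            for var, val in omega.items():     # ∏_{l ∈ ω} w(l)
                weight *= w[(var, val)]
            total += weight
    return total

# Example: ∆ = (rain ∨ sprinkler); weights encode independent marginals.
w = {("rain", True): 0.2, ("rain", False): 0.8,
     ("sprinkler", True): 0.5, ("sprinkler", False): 0.5}
delta = lambda m: m["rain"] or m["sprinkler"]
print(wmc(["rain", "sprinkler"], delta, w))  # 0.1 + 0.4 + 0.1 = 0.6
```

With these particular weights, the WMC equals the probability that the sentence is true, which is exactly how WMC serves as a probabilistic-inference backend.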
One benefit of this approach is that it leverages decades of work on formal semantics and logical reasoning. When building solvers, this allows us to reason about logical equivalence and reuse solver technology (such as constraint propagation and clause learning). WMC also naturally reasons about deterministic, hard constraints in a probabilistic context.

**First-Order Generalizations of Model Counting**

The model counting philosophy has recently transformed several areas of uncertainty reasoning.

**First-Order Logic.** Within statistical relational learning [Getoor and Taskar, 2007], lifted inference algorithms aim for efficient probabilistic inference in relational models.

### Open-World Probabilistic Databases

"... Abstract Large-scale probabilistic knowledge bases are becoming increasingly important in academia and industry alike. They are constantly extended with new data, powered by modern information extraction tools that associate probabilities with database tuples. In this paper, we revisit the semantic ..."

Abstract

Large-scale probabilistic knowledge bases are becoming increasingly important in academia and industry alike. They are constantly extended with new data, powered by modern information extraction tools that associate probabilities with database tuples. In this paper, we revisit the semantics underlying such systems. In particular, the closed-world assumption of probabilistic databases (that facts not in the database have probability zero) clearly conflicts with their everyday use. To address this discrepancy, we propose an open-world probabilistic database semantics, which relaxes the probabilities of open facts to intervals. While still assuming a finite domain, this semantics can provide meaningful answers when some probabilities are not precisely known. For this open-world setting, we propose an efficient evaluation algorithm for unions of conjunctive queries. Our open-world algorithm incurs no overhead compared to closed-world reasoning and runs in time linear in the size of the database for tractable queries. All other queries are #P-hard, implying a data-complexity dichotomy between linear time and #P. For queries involving negation, however, open-world reasoning can become NP-hard, or even NP^PP-hard. Finally, we discuss additional knowledge-representation layers that can further strengthen open-world reasoning about big uncertain data.
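The interval semantics described in the abstract can be illustrated with a minimal sketch: a fact missing from the database gets the interval [0, λ] instead of probability 0, and for a monotone query (here a disjunction of independent facts) the interval endpoints are obtained by two ordinary closed-world evaluations, which is one way to read the "no overhead" claim. The database contents, the threshold λ, and all names below are illustrative assumptions, not the paper's algorithm.

```python
def fact_interval(db, fact, lam):
    """[lo, hi] bounds on P(fact) under open-world semantics."""
    if fact in db:
        p = db[fact]
        return (p, p)        # known tuple: point probability
    return (0.0, lam)        # open fact: probability only bounded above by λ

def or_interval(intervals):
    """Bounds on P(f1 ∨ ... ∨ fn) for independent facts.
    1 - ∏(1 - p_i) is monotone in each p_i, so plugging in the lower
    (resp. upper) endpoints yields the lower (resp. upper) bound."""
    prod_lo, prod_hi = 1.0, 1.0
    for lo, hi in intervals:
        prod_lo *= (1.0 - lo)
        prod_hi *= (1.0 - hi)
    return (1.0 - prod_lo, 1.0 - prod_hi)

db = {("livesIn", "ada", "london"): 0.9}
iv1 = fact_interval(db, ("livesIn", "ada", "london"), 0.3)  # (0.9, 0.9)
iv2 = fact_interval(db, ("livesIn", "bob", "london"), 0.3)  # (0.0, 0.3)
print(or_interval([iv1, iv2]))  # ≈ (0.9, 0.93)
```

Under the closed-world assumption the second fact would contribute probability exactly 0; the open-world interval makes the resulting uncertainty explicit instead.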