Results 1–10 of 24
MEBN: A Language for First-Order Bayesian Knowledge Bases
"... Although classical firstorder logic is the de facto standard logical foundation for artificial intelligence, the lack of a builtin, semantically grounded capability for reasoning under uncertainty renders it inadequate for many important classes of problems. Probability is the bestunderstood and m ..."
Abstract

Cited by 45 (18 self)
 Add to MetaCart
Although classical first-order logic is the de facto standard logical foundation for artificial intelligence, the lack of a built-in, semantically grounded capability for reasoning under uncertainty renders it inadequate for many important classes of problems. Probability is the best-understood and most widely applied formalism for computational scientific reasoning under uncertainty. Increasingly expressive languages are emerging for which the fundamental logical basis is probability. This paper presents Multi-Entity Bayesian Networks (MEBN), a first-order language for specifying probabilistic knowledge bases as parameterized fragments of Bayesian networks. MEBN fragments (MFrags) can be instantiated and combined to form arbitrarily complex graphical probability models. An MFrag represents probabilistic relationships among a conceptually meaningful group of uncertain hypotheses. Thus, MEBN facilitates representation of knowledge at a natural level of granularity. The semantics of MEBN assigns a probability distribution over interpretations of an associated classical first-order theory on a finite or countably infinite domain. Bayesian inference provides both a proof theory for combining prior knowledge with observations, and a learning theory for refining a representation as evidence accrues. A proof is given that MEBN can represent a probability distribution on interpretations of any finitely axiomatizable first-order theory.
Gradient-based boosting for Statistical Relational Learning: The Relational Dependency Network Case
, 2011
"... Abstract. Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, co ..."
Abstract

Cited by 16 (9 self)
 Add to MetaCart
Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and in turn quickly estimate a very expressive model. Our experimental results on several different data sets show that this boosting method results in efficient learning of RDNs when compared to state-of-the-art statistical relational learning approaches.
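The boosting recipe in this abstract can be illustrated at propositional scale. The sketch below is a minimal, hypothetical stand-in: synthetic data and a hand-rolled depth-1 regression stump replace the relational tree learner, but each round follows the same pattern of fitting a small regressor to the pointwise functional gradient of the log-likelihood, y − sigmoid(F(x)).

```python
# Minimal sketch of functional-gradient boosting (propositional version of
# the relational idea above). All data and settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # synthetic labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_stump(X, target):
    """Least-squares depth-1 regression tree over a few candidate splits."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            left = X[:, j] <= thr
            if left.all() or not left.any():
                continue
            lval, rval = target[left].mean(), target[~left].mean()
            err = ((target - np.where(left, lval, rval)) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, thr, lval, rval)
    return best[1:]

def predict_stump(stump, X):
    j, thr, lval, rval = stump
    return np.where(X[:, j] <= thr, lval, rval)

F = np.zeros(len(y))                    # additive model F(x), starts at 0
for _ in range(20):
    grad = y - sigmoid(F)               # functional gradient of log-likelihood
    stump = fit_stump(X, grad)
    F += 0.5 * predict_stump(stump, X)  # small step along the gradient

accuracy = float(np.mean((sigmoid(F) > 0.5) == y))
```

The key design point carried over from the paper is that many weak regressors fitted to gradients, rather than one carefully selected tree per variable, accumulate into an expressive conditional model.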
Parameter learning for relational Bayesian networks
 In Proceedings of the International Conference on Machine Learning
, 2007
"... We present a method for parameter learning in relational Bayesian networks (RBNs). Our approach consists of compiling the RBN model into a computation graph for the likelihood function, and to use this likelihood graph to perform the necessary computations for a gradient ascent likelihood optimizati ..."
Abstract

Cited by 12 (2 self)
 Add to MetaCart
We present a method for parameter learning in relational Bayesian networks (RBNs). Our approach consists of compiling the RBN model into a computation graph for the likelihood function, and using this likelihood graph to perform the necessary computations for a gradient-ascent likelihood-optimization procedure. The method can be applied to all RBN models that only contain differentiable combining rules. This includes models with non-decomposable combining rules, as well as models with weighted combinations or nested occurrences of combining rules. Experimental results on artificial random graph data explore the feasibility of the approach for both complete and incomplete data.
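The compiled-likelihood idea can be shown at toy scale: write the likelihood under a differentiable combining rule (here noisy-or, a common choice) as an explicit function, differentiate it, and run gradient ascent. The data and step size below are hypothetical illustrations, not the paper's compilation scheme.

```python
# Gradient-ascent parameter learning for a single noisy-or parameter theta.
# Each observation is (number of active parents, child outcome) — made up.
import math

data = [(1, 1), (1, 0), (2, 1), (3, 1), (2, 1), (1, 0), (3, 1), (2, 0)]

def log_lik(theta):
    ll = 0.0
    for n, y in data:
        p = 1.0 - (1.0 - theta) ** n        # noisy-or combining rule
        ll += math.log(p if y else 1.0 - p)
    return ll

def grad(theta):
    g = 0.0
    for n, y in data:
        p = 1.0 - (1.0 - theta) ** n
        dp = n * (1.0 - theta) ** (n - 1)   # d p / d theta
        g += (dp / p) if y else (-dp / (1.0 - p))
    return g

theta = 0.5
for _ in range(200):
    theta += 0.01 * grad(theta)             # ascend the log-likelihood
    theta = min(max(theta, 1e-6), 1.0 - 1e-6)  # stay inside (0, 1)
```

The paper's contribution is to build such a differentiable likelihood graph automatically from an RBN specification; this sketch only shows why differentiable combining rules are the enabling condition.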
First-Order Bayesian Logic
, 2005
"... Uncertainty is a fundamental and irreducible aspect of our knowledge about the world. Until recently, classical firstorder logic has reigned as the de facto standard logical foundation for artificial intelligence. The lack of a builtin, semantically grounded capability for reasoning under uncertai ..."
Abstract

Cited by 8 (3 self)
 Add to MetaCart
Uncertainty is a fundamental and irreducible aspect of our knowledge about the world. Until recently, classical first-order logic has reigned as the de facto standard logical foundation for artificial intelligence. The lack of a built-in, semantically grounded capability for reasoning under uncertainty renders classical first-order logic inadequate for many important classes of problems. General-purpose languages are beginning to emerge for which the fundamental logical basis is probability. Increasingly expressive probabilistic languages demand a theoretical foundation that fully integrates classical first-order logic and probability. In first-order Bayesian logic (FOBL), probability distributions are defined over interpretations of classical first-order axiom systems. Predicates and functions of a classical first-order theory correspond to random variables in the corresponding first-order Bayesian theory. This is a natural correspondence, given that random variables are formalized in mathematical statistics as measurable functions on a probability space. A formal system called Multi-Entity Bayesian Networks (MEBN) is presented for composing distributions on interpretations by instantiating and combining parameterized fragments of directed graphical models. A construction is given of a MEBN theory that assigns a non-zero
Context-dependent incremental intention recognition through Bayesian network model construction
 Bayesian Modelling Applications Workshop (BMAW-11), Conference on Uncertainty in Artificial Intelligence (UAI-2011). CEUR Workshop Proceedings
, 2011
"... We present a method for contextdependent and incremental intention recognition by means of incrementally constructing a Bayesian Network (BN) model as more actions are observed. It is achieved with the support of a knowledge base of readily maintained and constructed fragments of BNs. The simple st ..."
Abstract

Cited by 7 (6 self)
 Add to MetaCart
We present a method for context-dependent and incremental intention recognition by means of incrementally constructing a Bayesian Network (BN) model as more actions are observed. It is achieved with the support of a knowledge base of readily maintained and constructed fragments of BNs. The simple structure of the fragments enables the knowledge base to be acquired easily and efficiently, either from domain experts or automatically from a plan corpus. We exhibit experimental results showing improvement on the Linux Plan corpus. For additional experimentation, new plan corpora for the iterated Prisoner’s Dilemma are created. We show that taking contextual information into account considerably increases intention recognition performance.
A relational hierarchical model for decision-theoretic assistance
 In Proceedings of the 17th Annual International Conference on Inductive Logic Programming
, 2007
"... Building intelligent assistants has been a longcherished goal of AI and many were built and finetuned to specific application domains. In recent work, a domainindependent decisiontheoretic model of assistance was proposed, where the task is to infer the user’s goal and take actions that minimize ..."
Abstract

Cited by 7 (5 self)
 Add to MetaCart
Building intelligent assistants has been a long-cherished goal of AI, and many have been built and fine-tuned to specific application domains. In recent work, a domain-independent decision-theoretic model of assistance was proposed, where the task is to infer the user’s goal and take actions that minimize the expected cost of the user’s policy. In this paper, we extend this work to domains where the user’s policies have rich relational and hierarchical structure. Our results indicate that relational hierarchies allow succinct encoding of prior knowledge for the assistant, which in turn enables the assistant to start helping the user after a relatively small amount of experience.
Exploiting Causal Independence in Markov Logic Networks: Combining Undirected and Directed Models
"... Abstract. A new method is proposed for compiling causal independencies into Markov logic networks (MLNs). A MLN can be viewed as compactly representing a factorization of a joint probability into the product of a set of factors guided by logical formulas. We present a notion of causal independence t ..."
Abstract

Cited by 4 (1 self)
 Add to MetaCart
A new method is proposed for compiling causal independencies into Markov logic networks (MLNs). An MLN can be viewed as compactly representing a factorization of a joint probability into the product of a set of factors guided by logical formulas. We present a notion of causal independence that enables one to further factorize the factors into a combination of even smaller factors and consequently obtain a finer-grain factorization of the joint probability. The causal independence lets us specify the factors in terms of weighted, directed clauses and operators, such as “or”, “sum”, or “max”, on the contributions of the variables involved in the factors, hence combining both undirected and directed knowledge. Our experimental evaluations show that making use of the finer-grain factorization provided by causal independence can improve the quality of parameter learning in MLNs.
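The factorization this abstract exploits can be checked concretely for the classic noisy-or case: a conditional distribution over n parents, which naively needs a factor exponential in n, decomposes into a chain of two-variable "or" combinations of independent per-parent contributions. The parameters below are illustrative, not from the paper.

```python
# Causal independence demo: a noisy-or CPD computed two ways — as one
# monolithic factor, and as a cascade of pairwise OR combinations.
from itertools import product

leak, probs = 0.05, [0.6, 0.3, 0.8]   # made-up leak and per-cause strengths

def direct(x):
    """P(y=1 | parents x) from the full noisy-or formula."""
    fail = 1.0 - leak
    for xi, pi in zip(x, probs):
        if xi:
            fail *= 1.0 - pi
    return 1.0 - fail

def cascade(x):
    """Same quantity via a chain of two-variable factors."""
    p = leak
    for xi, pi in zip(x, probs):
        contrib = pi if xi else 0.0
        p = 1.0 - (1.0 - p) * (1.0 - contrib)  # OR of independent events
    return p

ok = all(abs(direct(x) - cascade(x)) < 1e-12
         for x in product([0, 1], repeat=3))
```

Replacing the monolithic factor with the cascade is what yields the finer-grain factorization: each link in the chain touches only two variables, so the representation grows linearly rather than exponentially in the number of causes.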
Boosting relational dependency networks
 In Proc. of the Int. Conf. on Inductive Logic Programming (ILP)
, 2010
"... Abstract. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains where the joint probability distribution over the variables is approximated as a product of conditional distributions. The current learning algorithms for RDNs use pseudolikelih ..."
Abstract

Cited by 4 (3 self)
 Add to MetaCart
Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains, where the joint probability distribution over the variables is approximated as a product of conditional distributions. The current learning algorithms for RDNs use pseudo-likelihood techniques to learn probability trees for each variable in order to represent the conditional distribution. We propose the use of gradient tree boosting, as applied by Dietterich et al. (2004), to approximate the gradient for each variable. The use of several regression trees, instead of just one, results in an expressive model. Our results on three different data sets show that this training method results in efficient learning of RDNs when compared to state-of-the-art approaches to Statistical Relational Learning.
Location-based reasoning about complex multi-agent behavior
 In Journal of Artificial Intelligence Research. AI Access Foundation
, 2011
"... Recent research has shown that surprisingly rich models of human activity can be learned from GPS (positional) data. However, most effort to date has concentrated on modeling single individuals or statistical properties of groups of people. Moreover, prior work focused solely on modeling actual succ ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
Recent research has shown that surprisingly rich models of human activity can be learned from GPS (positional) data. However, most effort to date has concentrated on modeling single individuals or statistical properties of groups of people. Moreover, prior work focused solely on modeling actual successful executions (and not failed or attempted executions) of the activities of interest. We, in contrast, take on the task of understanding human interactions, attempted interactions, and intentions from noisy sensor data in a fully relational multi-agent setting. We use a real-world game of capture the flag to illustrate our approach in a well-defined domain that involves many distinct cooperative and competitive joint activities. We model the domain using Markov logic, a statistical-relational language, and learn a theory that jointly denoises the data and infers occurrences of high-level activities, such as a player capturing an enemy. Our unified model combines constraints imposed by the geometry of the game area, the motion model of the players, and by the rules and dynamics of the game in a probabilistically and logically sound fashion. We show that while it may be impossible to directly detect a multi-agent activity due to sensor noise or malfunction, the occurrence of the activity can still be inferred by considering both its impact on the
Structure refinement in First Order Conditional Influence Language
 In Proceedings of the workshop on Open Problems in Statistical Relational Learning (SRL)
, 2006
"... In this paper, we present preliminary results from learning the structure of firstorder conditional influence statements from data for the purpose of classification. In order to reduce the search space over structures, we formulate and address the structure learning problem as a problem of refining ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
In this paper, we present preliminary results from learning the structure of first-order conditional influence statements from data for the purpose of classification. In order to reduce the search space over structures, we formulate and address the structure learning problem as a problem of refining the structure of a first-order probabilistic program using training data. We use variants of the conditional BIC scoring metric to refine the program to best fit the data. We use a previously introduced language called FOCIL, which consists of statements that can be instantiated and composed into a propositional Bayesian network. The results on a synthetic dataset and a real-world task show that the algorithm achieves error rates comparable to the gold-standard program with a reasonable amount of training data.
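The scoring step behind such refinement can be sketched with a generic BIC-style penalized likelihood (not the paper's conditional variant): each candidate structure is scored by its maximized log-likelihood minus a complexity penalty, and the refinement keeping the higher score wins. The data and the two toy models below are invented for illustration.

```python
# BIC-style model comparison: does adding a free parameter pay for itself?
import math

data = [1] * 30 + [0] * 10   # made-up binary observations, n = 40

def bic(log_lik_hat, k, n):
    """score = log L_hat - (k / 2) * ln(n), k = number of free parameters."""
    return log_lik_hat - 0.5 * k * math.log(n)

n = len(data)

# Model A: fixed p = 0.5 (no free parameters).
ll_a = sum(math.log(0.5) for _ in data)

# Model B: p fit by maximum likelihood (one free parameter).
p = sum(data) / n
ll_b = sum(math.log(p if y else 1.0 - p) for y in data)

score_a, score_b = bic(ll_a, 0, n), bic(ll_b, 1, n)
```

In a structure search, this comparison is applied at every proposed refinement: the extra parameters introduced by a richer influence statement are accepted only when the likelihood gain outweighs the (k/2) ln n penalty.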