Results 1–10 of 12
Bottom-up learning of Markov logic network structure
In Proceedings of the Twenty-Fourth International Conference on Machine Learning, 2007
"... Markov logic networks (MLNs) are a statistical relational model that consists of weighted firstorder clauses and generalizes firstorder logic and Markov networks. The current stateoftheart algorithm for learning MLN structure follows a topdown paradigm where many potential candidate structures a ..."
Abstract

Cited by 47 (6 self)
Markov logic networks (MLNs) are a statistical relational model that consists of weighted first-order clauses and generalizes first-order logic and Markov networks. The current state-of-the-art algorithm for learning MLN structure follows a top-down paradigm in which many potential candidate structures are systematically generated without considering the data and then evaluated using a statistical measure of their fit to the data. Even though this existing algorithm outperforms an impressive array of benchmarks, its greedy search is susceptible to local maxima or plateaus. We present a novel algorithm for learning MLN structure that follows a more bottom-up approach to address this problem. Our algorithm uses a “propositional” Markov network learning method to construct “template” networks that guide the construction of candidate clauses. Our algorithm significantly improves accuracy and learning time over the existing top-down approach in three real-world domains.
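The core idea above — letting a learned template network restrict which candidate clauses are generated, instead of enumerating all of them top-down — can be illustrated with a minimal sketch. The function name and the two-literal propositional simplification are hypothetical, not the paper's actual algorithm:

```python
from itertools import combinations

def candidate_clauses(template_edges, predicates):
    """Propose two-literal clause candidates, but only for predicate
    pairs connected in the learned template network, rather than
    enumerating every pair top-down."""
    connected = {frozenset(e) for e in template_edges}
    clauses = []
    for p, q in combinations(predicates, 2):
        if frozenset((p, q)) in connected:
            # Emit implication-style forms, e.g. !P v Q and P v !Q.
            clauses.append(("!" + p, q))
            clauses.append((p, "!" + q))
    return clauses
```

With a template connecting only `Smokes` and `Cancer`, the search space shrinks from all predicate pairs to the single connected pair.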
Learning Markov logic network structure via hypergraph lifting
In Proceedings of the 26th International Conference on Machine Learning (ICML-09), 2009
"... Markov logic networks (MLNs) combine logic and probability by attaching weights to firstorder clauses, and viewing these as templates for features of Markov networks. Learning MLN structure from a relational database involves learning the clauses and weights. The stateoftheart MLN structure lear ..."
Abstract

Cited by 31 (3 self)
Markov logic networks (MLNs) combine logic and probability by attaching weights to first-order clauses and viewing these as templates for features of Markov networks. Learning MLN structure from a relational database involves learning the clauses and weights. The state-of-the-art MLN structure learners all involve some element of greedily generating candidate clauses, and are susceptible to local optima. To address this problem, we present an approach that directly utilizes the data in constructing candidates. A relational database can be viewed as a hypergraph with constants as nodes and relations as hyperedges. We find paths of true ground atoms in the hypergraph that are connected via their arguments. To make this tractable (there are exponentially many paths in the hypergraph), we lift the hypergraph by jointly clustering the constants to form higher-level concepts, and find paths in the lifted hypergraph. We variabilize the ground atoms in each path and use them to form clauses, which are evaluated using a pseudo-likelihood measure. In our experiments on three real-world datasets, we find that our algorithm outperforms the state-of-the-art approaches.
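The path-finding and variabilization steps described above can be sketched concretely. This is a minimal unlifted version (no clustering of constants), with hypothetical function names; ground atoms are tuples of a relation name followed by constants:

```python
def find_paths(atoms, max_len):
    """Enumerate paths of ground atoms connected via shared constants.
    Each atom is (relation, arg1, arg2, ...). Paths are found in both
    directions; a real implementation would deduplicate."""
    paths = []
    def extend(path, used):
        if 1 < len(path) <= max_len:
            paths.append(list(path))
        if len(path) == max_len:
            return
        consts = {c for a in path for c in a[1:]}
        for atom in atoms:
            if atom not in used and consts & set(atom[1:]):
                extend(path + [atom], used | {atom})
    for atom in atoms:
        extend([atom], {atom})
    return paths

def variabilize(path):
    """Replace each distinct constant with a variable, turning a path
    of ground atoms into a first-order clause template."""
    mapping, out = {}, []
    for rel, *args in path:
        vs = []
        for c in args:
            if c not in mapping:
                mapping[c] = "v%d" % len(mapping)
            vs.append(mapping[c])
        out.append((rel,) + tuple(vs))
    return out
```

For example, the ground path `Advises(ann, bob), TAs(bob, cs1)` variabilizes to `Advises(v0, v1), TAs(v1, v2)`.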
A Probabilistic Framework for Learning Kinematic Models of Articulated Objects
"... Robots operating in domestic environments generally need to interact with articulated objects, such as doors, cabinets, dishwashers or fridges. In this work, we present a novel, probabilistic framework for modeling articulated objects as kinematic graphs. Vertices in this graph correspond to object ..."
Abstract

Cited by 6 (1 self)
Robots operating in domestic environments generally need to interact with articulated objects such as doors, cabinets, dishwashers, or fridges. In this work, we present a novel probabilistic framework for modeling articulated objects as kinematic graphs. Vertices in this graph correspond to object parts, while edges between them model their kinematic relationship. In particular, we present a set of parametric and nonparametric edge models and show how they can be robustly estimated from noisy pose observations. We furthermore describe how to estimate the kinematic structure and how to use the learned kinematic models for pose prediction and for robotic manipulation tasks. Finally, we show how the learned models can be generalized to new and previously unseen objects. In various experiments using real robots with different camera systems, as well as in simulation, we show that our approach is valid, accurate, and efficient. Further, we demonstrate that our approach has a broad set of applications, in particular for the emerging fields of mobile manipulation and service robotics.
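Choosing among candidate edge models (e.g. rigid, prismatic, revolute, or a nonparametric fit) is a model-selection problem: richer models fit the observations better but risk overfitting. A common way to arbitrate is the Bayesian Information Criterion; the sketch below assumes each candidate has already been fit, yielding a parameter count and a log-likelihood (the model names and function are illustrative, not the paper's exact criterion):

```python
import math

def bic_select(models, n_obs):
    """Pick the edge model with the lowest BIC.
    models: dict name -> (num_params, log_likelihood).
    BIC = k * ln(n) - 2 * log-likelihood; lower is better."""
    def bic(item):
        k, loglik = item[1]
        return k * math.log(n_obs) - 2.0 * loglik
    return min(models.items(), key=bic)[0]
```

A zero-parameter rigid model with poor fit loses to a three-parameter revolute model with good fit, while a many-parameter nonparametric model is penalized for its complexity.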
Efficient Markov Network Discovery Using Particle Filters
"... In this paper we introduce an efficient independencebased algorithm for the induction of the Markov network structure of a domain from the outcomes of independence test conducted on data. Our algorithm utilizes a particle filter (sequential Monte Carlo) method to maintain a population of Markov net ..."
Abstract

Cited by 3 (1 self)
In this paper we introduce an efficient independence-based algorithm for inducing the Markov network structure of a domain from the outcomes of independence tests conducted on data. Our algorithm utilizes a particle filter (sequential Monte Carlo) method to maintain a population of Markov network structures that represent the posterior probability distribution over structures, given the outcomes of the tests performed. This enables us to select, at each step, the maximally informative test to conduct next from a pool of candidates according to information gain, which minimizes the cost of the statistical tests conducted on the data. This makes our approach useful in domains where independence tests are expensive, such as cases of very large data sets and/or distributed data. In addition, our method maintains multiple candidate structures weighted by posterior probability, which allows flexibility in the presence of potential errors in the test outcomes.
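The test-selection idea — pick the next independence test whose outcome is most uncertain under the current particle posterior — can be sketched as follows. Names and the binary edge-present/edge-absent simplification are assumptions for illustration:

```python
import math

def entropy(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def most_informative_test(particles, candidate_tests):
    """particles: list of (edge_set, weight) with weights summing to 1.
    Each candidate test asks whether an edge (x, y) is present. Return
    the test with maximal predictive entropy, i.e. the one whose
    outcome the particle posterior is least sure about."""
    def predictive_entropy(edge):
        p_present = sum(w for edges, w in particles if edge in edges)
        return entropy(p_present)
    return max(candidate_tests, key=predictive_entropy)
```

If half the posterior mass has an edge and half does not, testing that edge yields a full bit of information, whereas testing an edge no particle contains yields none.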
Learning Markov Networks With Arithmetic Circuits
"... Markov networks are an effective way to represent complex probability distributions. However, learning their structure and parameters or using them to answer queries is typically intractable. One approach to making learning and inference tractable is to use approximations, such as pseudolikelihood ..."
Abstract

Cited by 1 (0 self)
Markov networks are an effective way to represent complex probability distributions. However, learning their structure and parameters, or using them to answer queries, is typically intractable. One approach to making learning and inference tractable is to use approximations, such as pseudo-likelihood or approximate inference. An alternate approach is to use a restricted class of models in which exact inference is always efficient. Previous work has explored low-treewidth models, models with tree-structured features, and latent variable models. In this paper, we introduce ACMN, the first method for learning efficient Markov networks with arbitrary conjunctive features. The secret to ACMN’s greater flexibility is its use of arithmetic circuits, a linear-time inference representation that can handle many high-treewidth models by exploiting local structure. ACMN uses the size of the corresponding arithmetic circuit as a learning bias, allowing it to trade off accuracy and inference complexity. In experiments on 12 standard datasets, the tractable models learned by ACMN are more accurate than both tractable models learned by other algorithms and approximate inference in intractable models.
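Using circuit size as a learning bias amounts to penalizing each candidate feature by how much it would grow the arithmetic circuit. A minimal sketch of that trade-off, with hypothetical names and pre-computed per-feature gains (not ACMN's actual search procedure):

```python
def greedy_select(candidates, lam):
    """candidates: list of (feature, delta_loglik, delta_circuit_size).
    Greedily keep features whose likelihood gain, penalized by the
    growth of the arithmetic circuit, is positive. lam controls the
    accuracy-vs-inference-cost trade-off."""
    chosen = []
    for feat, dll, dsize in sorted(candidates,
                                   key=lambda c: c[1] - lam * c[2],
                                   reverse=True):
        if dll - lam * dsize > 0:
            chosen.append(feat)
    return chosen
```

A feature with a modest likelihood gain but a huge circuit blow-up is rejected, keeping exact inference cheap.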
Improving Learning of Markov Logic Networks using Transfer and Bottom-Up Induction
"... Statistical relational learning (SRL) algorithms combine ideas from rich knowledge representations, such as firstorder logic, with those from probabilistic graphical models, such as Markov networks, to address the problem of learning from multirelational data. One challenge posed by such data is t ..."
Abstract
Statistical relational learning (SRL) algorithms combine ideas from rich knowledge representations, such as first-order logic, with those from probabilistic graphical models, such as Markov networks, to address the problem of learning from multi-relational data. One challenge posed by such data is that individual instances are frequently very large and include complex relationships among the entities. Moreover, because separate instances do not follow the same structure and contain varying numbers of entities, they cannot be effectively represented as a feature vector. SRL models and algorithms have been successfully applied to a wide variety of domains, such as social network analysis, biological data analysis, and planning, among others. Markov logic networks (MLNs) are a recently developed SRL model that consists of weighted first-order clauses. MLNs can be viewed as templates that define Markov networks when provided with the set of constants present in a domain. MLNs are therefore very powerful because they inherit the expressivity of first-order logic. At the same time, MLNs can flexibly deal with noisy or uncertain data to produce probabilistic predictions for a set of propositions. MLNs have also been shown to subsume several other popular SRL models. The expressive power of MLNs comes at a cost: structure learning, or learning the first-order clauses
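The template view of MLNs mentioned above — clauses plus a set of domain constants define a ground Markov network — can be made concrete with a small grounding sketch. The representation (clauses as lists of predicate tuples) and the function name are illustrative assumptions:

```python
from itertools import product

def ground_clause(clause, variables, constants):
    """Ground a first-order clause over a finite set of constants,
    producing the propositional features of the induced Markov
    network. clause: list of (pred, arg1, ...), where args may be
    variable names appearing in `variables`."""
    groundings = []
    for binding in product(constants, repeat=len(variables)):
        sub = dict(zip(variables, binding))
        groundings.append([(pred,) + tuple(sub.get(a, a) for a in args)
                           for pred, *args in clause])
    return groundings
```

Grounding `Smokes(x) => Cancer(x)` over constants `{anna, bob}` yields one feature per constant, each sharing the clause's learned weight.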
Reconstructing Chemical Reaction Networks: Data Mining meets System Identification
"... We present an approach to reconstructing chemical reaction networks from time series measurements of the concentrations of the molecules involved. Our solution strategy combines techniques from numerical sensitivity analysis and probabilistic graphical models. By modeling a chemical reaction system ..."
Abstract
We present an approach to reconstructing chemical reaction networks from time series measurements of the concentrations of the molecules involved. Our solution strategy combines techniques from numerical sensitivity analysis and probabilistic graphical models. By modeling a chemical reaction system as a Markov network (undirected graphical model), we show how systematically probing for sensitivities between molecular species can identify the topology of the network. Given the topology, our approach next uses detailed sensitivity profiles to characterize properties of reactions such as reversibility, enzyme catalysis, and the precise stoichiometries of the reactants and products. We demonstrate applications to reconstructing key biological systems including the yeast cell cycle. In addition to network reconstruction, our algorithm finds applications in model reduction and model comprehension. We argue that our reconstruction algorithm can serve as an important primitive for data mining in systems biology applications.
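The topology-identification step — probe each species and keep edges where the measured response is significant — reduces, in its simplest form, to thresholding a sensitivity matrix. A minimal sketch with assumed names and a plain magnitude threshold (the paper's actual procedure is more detailed):

```python
def topology_from_sensitivities(S, threshold):
    """S[i][j]: measured sensitivity of species j to a perturbation
    of species i. Return the undirected edge set where either
    direction of influence exceeds the threshold."""
    n = len(S)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if abs(S[i][j]) > threshold or abs(S[j][i]) > threshold:
                edges.add((i, j))
    return edges
```

Species pairs whose perturbation responses stay below the threshold in both directions are treated as non-interacting, yielding the sparse undirected topology.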
Learning with Markov Logic Networks: Transfer Learning, Structure Learning, and an Application to Web Query Disambiguation
, 2009
"... ..."
Discovering Excitatory Relationships using Dynamic Bayesian Networks
Under consideration for publication in Knowledge and Information Systems
"... Abstract. Mining temporal network models from discrete event streams is an important problem with applications in computational neuroscience, physical plant diagnostics, and humancomputer interaction modeling. In this paper we introduce the notion of excitatory networks which are essentially tempor ..."
Abstract
Mining temporal network models from discrete event streams is an important problem with applications in computational neuroscience, physical plant diagnostics, and human-computer interaction modeling. In this paper we introduce the notion of excitatory networks, which are essentially temporal models in which all connections are stimulative rather than inhibitive. The emphasis on excitatory connections facilitates learning of network models by creating bridges to frequent episode mining. Specifically, we show that frequent episodes help identify nodes with high mutual information relationships and that such relationships can be summarized into a dynamic Bayesian network (DBN). This leads to an algorithm that is significantly faster than state-of-the-art methods for inferring DBNs, while simultaneously providing theoretical guarantees on network optimality. We demonstrate the advantages of our approach through an application in neuroscience, where we show how strong excitatory networks can be efficiently inferred from both mathematical models of spiking neurons and several real neuroscience datasets.
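The bridge between episode counts and mutual information can be illustrated on aligned binary event streams (did the node fire in this time bin or not). This is a generic MI computation, not the paper's specific episode-based estimator; names are assumptions:

```python
import math

def mutual_information(xs, ys):
    """Mutual information (in bits) between two aligned binary event
    streams, estimated from empirical joint and marginal frequencies."""
    n = len(xs)
    counts = {}
    for x, y in zip(xs, ys):
        counts[(x, y)] = counts.get((x, y), 0) + 1
    px, py = sum(xs) / n, sum(ys) / n
    # Product-of-marginals baseline for each joint outcome.
    marg = {(1, 1): px * py, (1, 0): px * (1 - py),
            (0, 1): (1 - px) * py, (0, 0): (1 - px) * (1 - py)}
    mi = 0.0
    for xy, c in counts.items():
        pxy = c / n
        if pxy > 0 and marg[xy] > 0:
            mi += pxy * math.log2(pxy / marg[xy])
    return mi
```

Identical streams give 1 bit (for balanced firing), while independent streams give 0; high-MI pairs are the candidate parent-child links summarized into the DBN.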
Toward Markov Logic with Conditional Probabilities
, 2008
"... Combining probability and firstorder logic has been the subject of intensive research during the last ten years. The most wellknown formalisms combining probability and some subset of firstorder logic are probabilistic relational models (PRMs), Bayesian logic programs (BLPs) and Markov logic netw ..."
Abstract
Combining probability and first-order logic has been the subject of intensive research during the last ten years. The most well-known formalisms combining probability and some subset of first-order logic are probabilistic relational models (PRMs), Bayesian logic programs (BLPs), and Markov logic networks (MLNs). Of these three formalisms, MLNs are currently the most actively researched. While the subset of first-order logic used by Markov logic networks is more expressive than that of the other two formalisms, their probabilistic semantics are given by weights assigned to formulas, which limits the comprehensibility of MLNs. Based on a knowledge representation formalism developed for propositional probabilistic models, we propose an alternative way to specify Markov logic networks, which allows the specification of probabilities for the formulas of an MLN. This results in better comprehensibility, and might open the way for using background knowledge when learning MLNs, or even for the use of MLNs in probabilistic expert systems.
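To see why probabilities are more comprehensible than raw weights, note that a weight can be related to a formula probability via the log-odds transform, which is exact only under strong independence assumptions; in general MLN weights interact and must be learned jointly. The function below is an illustrative sketch, not the paper's translation scheme:

```python
import math

def log_odds_weight(p):
    """Map a formula probability to an MLN-style weight via log-odds.
    Exact only if the formula's groundings were independent of the
    rest of the network, which rarely holds; shown for intuition."""
    assert 0.0 < p < 1.0
    return math.log(p / (1.0 - p))
```

A probability of 0.5 maps to weight 0 (the formula carries no information), while probabilities near 1 map to large positive weights, matching the usual reading of MLN weights as soft constraints.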