Markov Logic Networks
Machine Learning, 2006
"... Abstract. We propose a simple approach to combining firstorder logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a firstorder knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects ..."
Abstract

Cited by 569 (34 self)
We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach.
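For concreteness, the weighted ground network described above induces the standard MLN log-linear distribution over possible worlds x, with n_i(x) the number of true groundings of the i-th formula and w_i its weight:

    P(X = x) = \frac{1}{Z} \exp\!\Big( \sum_i w_i \, n_i(x) \Big),
    \qquad Z = \sum_{x'} \exp\!\Big( \sum_i w_i \, n_i(x') \Big)

Ordinary first-order logic is recovered in the limit where all weights tend to infinity, so worlds violating a formula become impossible rather than merely less probable.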
Dynamic Bayesian Networks: Representation, Inference and Learning
PhD thesis, 2002
"... Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have bee ..."
Abstract

Cited by 564 (3 self)
Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data.
In particular, the main novel technical contributions of this thesis are as follows: a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T^3), where T is the length of the sequence; an exact smoothing algorithm that takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
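To make the factored-state idea concrete, here is a minimal filtering sketch in Python (the model, CPTs, and all numbers are invented for illustration). The state is split into two binary variables, each with its own transition model, which is exactly the factorization a DBN makes explicit; note that this sketch tracks the exact joint belief, which is what approximate schemes such as BK and factored frontier avoid maintaining for large state spaces.

    import numpy as np

    # P(A_t | A_{t-1}) and P(B_t | B_{t-1}): rows index the previous value.
    T_A = np.array([[0.9, 0.1],
                    [0.2, 0.8]])
    T_B = np.array([[0.7, 0.3],
                    [0.4, 0.6]])

    # P(obs_t = 1 | A_t, B_t), one entry per joint state (A, B).
    obs_lik = np.array([[0.1, 0.5],
                        [0.6, 0.9]])

    def filter_step(belief, observed):
        """One exact filtering step on the joint (A, B) belief.

        belief[a, b] = P(A_{t-1} = a, B_{t-1} = b | observations so far).
        """
        # Predict: the two state factors transition independently.
        predicted = np.einsum('ab,ax,by->xy', belief, T_A, T_B)
        # Update: weight by the observation likelihood, renormalize.
        lik = obs_lik if observed == 1 else 1.0 - obs_lik
        updated = predicted * lik
        return updated / updated.sum()

    belief = np.full((2, 2), 0.25)   # uniform prior over (A, B)
    for y in [1, 1, 0, 1]:
        belief = filter_step(belief, y)
    print(belief)                    # P(A_4, B_4 | obs_1..obs_4)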
Learning probabilistic relational models
In IJCAI, 1999
"... A large portion of realworld data is stored in commercial relational database systems. In contrast, most statistical learning methods work only with "flat " data representations. Thus, to apply these methods, we are forced to convert our data into a flat form, thereby losing much of the r ..."
Abstract

Cited by 510 (28 self)
A large portion of real-world data is stored in commercial relational database systems. In contrast, most statistical learning methods work only with "flat" data representations. Thus, to apply these methods, we are forced to convert our data into a flat form, thereby losing much of the relational structure present in our database. This paper builds on the recent work on probabilistic relational models (PRMs), and describes how to learn them from databases. PRMs allow the properties of an object to depend probabilistically both on other properties of that object and on properties of related objects. Although PRMs are significantly more expressive than standard models, such as Bayesian networks, we show how to extend well-known statistical methods for learning Bayesian networks to learn these models. We describe both parameter estimation and structure learning (the automatic induction of the dependency structure in a model). Moreover, we show how the learning procedure can exploit standard database retrieval techniques for efficient learning from large datasets. We present experimental results on both real and synthetic relational databases.
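As a toy sketch of the instantiation step (the Student/Course schema and every name below are invented for illustration, not taken from the paper): a dependency stated once at the class level is repeated for every row of the database, yielding the structure of a ground Bayesian network in which all instances share one CPD, which is what lets parameters be estimated from the whole table at once.

    # Class-level dependency: Registration.grade depends on attributes of
    # the related Student and Course objects, reached through slots.
    dependencies = {
        ("Registration", "grade"): [("student", "intelligence"),
                                    ("course", "difficulty")],
    }

    # A tiny relational table; each row links a student and a course.
    registrations = [
        {"id": "r1", "student": "alice", "course": "cs101"},
        {"id": "r2", "student": "alice", "course": "cs202"},
        {"id": "r3", "student": "bob",   "course": "cs101"},
    ]

    def ground_edges(rows):
        """Instantiate each class-level dependency for every row,
        producing parent -> child edges of the ground network."""
        edges = []
        for row in rows:
            for (_cls, attr), parents in dependencies.items():
                child = (row["id"], attr)
                for slot, parent_attr in parents:
                    edges.append(((row[slot], parent_attr), child))
        return edges

    for parent, child in ground_edges(registrations):
        print(parent, "->", child)
    # ('alice', 'intelligence') -> ('r1', 'grade'), etc.: three grade
    # nodes, all governed by the same class-level CPD.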
Probabilistic Frame-Based Systems
In Proc. AAAI, 1998
"... Two of the most important threads of work in knowledge representation today are framebased representation systems (FRS's) and Bayesian networks (BNs). FRS's provide an excellent representation for the organizational structure of large complex domains, but their applicability is limited because of t ..."
Abstract

Cited by 189 (18 self)
Two of the most important threads of work in knowledge representation today are frame-based representation systems (FRS's) and Bayesian networks (BNs). FRS's provide an excellent representation for the organizational structure of large complex domains, but their applicability is limited because of their inability to deal with uncertainty and noise. BNs provide an intuitive and coherent probabilistic representation of our uncertainty, but are very limited in their ability to handle complex structured domains. In this paper, we provide a language that cleanly integrates these approaches, preserving the advantages of both. Our approach allows us to provide natural and compact definitions of probability models for a class, in a way that is local to the class frame. These models can be instantiated for any set of interconnected instances, resulting in a coherent probability distribution over the instance properties. Our language also allows us to represent important types of uncertainty tha...
Answering Queries from Context-Sensitive Probabilistic Knowledge Bases
Theoretical Computer Science, 1996
"... We define a language for representing contextsensitive probabilistic knowledge. A knowledge base consists of a set of universally quantified probability sentences that include context constraints, which allow inference to be focused on only the relevant portions of the probabilistic knowledge. We p ..."
Abstract

Cited by 93 (0 self)
We define a language for representing context-sensitive probabilistic knowledge. A knowledge base consists of a set of universally quantified probability sentences that include context constraints, which allow inference to be focused on only the relevant portions of the probabilistic knowledge. We provide a declarative semantics for our language. We present a query answering procedure which takes a query Q and a set of evidence E and constructs a Bayesian network to compute P(Q | E). The posterior probability is then computed using any of a number of Bayesian network inference algorithms. We use the declarative semantics to prove the query procedure sound and complete. We use concepts from logic programming to justify our approach.
Keywords: reasoning under uncertainty, Bayesian networks, probability model construction, logic programming
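A rough sketch of the construction step the abstract describes (the rule format, the burglary domain, and all names below are invented): chain backwards from the query and evidence, keep only sentences whose context constraints are satisfied, and hand the resulting network fragment to any standard Bayesian network inference algorithm.

    # Each sentence: (head, parents, context) -- "P(head | parents) is
    # specified whenever the context constraint holds."
    sentences = [
        ("alarm", ["burglary", "earthquake"], {"building": "house"}),
        ("alarm", ["burglary"],               {"building": "office"}),
        ("call",  ["alarm"],                  {}),
    ]

    def build_network(query, evidence, context):
        """Collect only the fragment of the KB relevant to P(query | evidence)."""
        nodes, edges = set(), set()
        frontier = [query, *evidence]
        while frontier:
            node = frontier.pop()
            if node in nodes:
                continue
            nodes.add(node)
            for head, parents, ctx in sentences:
                # Context constraints prune sentences that do not apply.
                if head == node and all(context.get(k) == v
                                        for k, v in ctx.items()):
                    for p in parents:
                        edges.add((p, node))
                        frontier.append(p)
        return nodes, edges

    nodes, edges = build_network("call", ["burglary"], {"building": "house"})
    print(nodes)   # only the relevant portion of the KB is grounded
    print(edges)   # attach CPTs and run any BN inference algorithm on this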
Lifted first-order probabilistic inference
In Proceedings of IJCAI-05, 19th International Joint Conference on Artificial Intelligence, 2005
"... Most probabilistic inference algorithms are specified and processed on a propositional level. In the last decade, many proposals for algorithms accepting firstorder specifications have been presented, but in the inference stage they still operate on a mostly propositional representation level. [Poo ..."
Abstract

Cited by 88 (7 self)
Most probabilistic inference algorithms are specified and processed on a propositional level. In the last decade, many proposals for algorithms accepting first-order specifications have been presented, but in the inference stage they still operate on a mostly propositional representation level. [Poole, 2003] presented a method to perform inference directly on the first-order level, but this method is limited to special cases. In this paper we present the first exact inference algorithm that operates directly on a first-order level, and that can be applied to any first-order model (specified in a language that generalizes undirected graphical models). Our experiments show superior performance in comparison with propositional exact inference.
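One ingredient of lifted inference, in toy form (this illustrates the general interchangeability idea, not this paper's specific algorithm): when n ground atoms are interchangeable, a sum over all 2^n truth assignments collapses into a binomially weighted sum over the n + 1 possible counts of true atoms.

    from itertools import product
    from math import comb

    n = 12

    def weight(num_true):
        # Any factor that depends on the atoms only through how many are
        # true; the 1.3 / 0.7 potentials are invented for the example.
        return (1.3 ** num_true) * (0.7 ** (n - num_true))

    # Propositional: enumerate every ground assignment -- 2**n terms.
    propositional = sum(weight(sum(x)) for x in product([0, 1], repeat=n))

    # Lifted: one term per count of true atoms -- n + 1 terms.
    lifted = sum(comb(n, k) * weight(k) for k in range(n + 1))

    assert abs(propositional - lifted) < 1e-6 * lifted
    print(lifted)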
Dynamic Belief Networks for Discrete Monitoring
IEEE Transactions on Systems, Man, and Cybernetics, 1994
"... We describe the development of a monitoring system which uses sensor observation data about discrete events to construct dynamically a probabilistic model of the world. This model is a Bayesian network incorporating temporal aspects, which we call a Dynamic Belief Network; it is used to reason under ..."
Abstract

Cited by 54 (7 self)
We describe the development of a monitoring system which uses sensor observation data about discrete events to construct dynamically a probabilistic model of the world. This model is a Bayesian network incorporating temporal aspects, which we call a Dynamic Belief Network; it is used to reason under uncertainty about both the causes and consequences of the events being monitored. The basic dynamic construction of the network is data-driven. However, the model construction process combines sensor data about events with externally provided information about agents' behaviour, and knowledge already contained within the model, to control the size and complexity of the network. This means that both the network structure within a time interval, and the amount of history and detail maintained, can vary over time. We illustrate the system with the example domain of monitoring robot vehicles and people in a restricted dynamic environment using light-beam sensor data. In addition to presenting a ...
Prospects for preferences
Computational Intelligence, 2004
"... This article examines prospects for theories and methods of preferences, both in the specific sense of the preferences of the ideal rational agents considered in economics and decision theory and in the broader interplay between reasoning and rationality considered in philosophy, psychology, and art ..."
Abstract

Cited by 53 (0 self)
This article examines prospects for theories and methods of preferences, both in the specific sense of the preferences of the ideal rational agents considered in economics and decision theory and in the broader interplay between reasoning and rationality considered in philosophy, psychology, and artificial intelligence. Modern applications seek to employ preferences as means for specifying, designing, and controlling rational behaviors as well as descriptive means for understanding behaviors. We seek to understand the nature and representation of preferences by examining the roles, origins, meaning, structure, evolution, and application of preferences.
Generating Bayesian Networks from Probability Logic Knowledge Bases
In Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, 1994
"... We present a method for dynamically generating Bayesian networks from knowledge bases consisting of firstorder probability logic sentences. We present a subset of probability logic sufficient for representing the class of Bayesian networks with discretevalued nodes. We impose constraints on the fo ..."
Abstract

Cited by 52 (8 self)
We present a method for dynamically generating Bayesian networks from knowledge bases consisting of first-order probability logic sentences. We present a subset of probability logic sufficient for representing the class of Bayesian networks with discrete-valued nodes. We impose constraints on the form of the sentences that guarantee that the knowledge base contains all the probabilistic information necessary to generate a network. We define the concept of d-separation for knowledge bases and prove that a knowledge base with independence conditions defined by d-separation is a complete specification of a probability distribution. We present a network generation algorithm that, given an inference problem in the form of a query Q and a set of evidence E, generates a network to compute P(Q | E). We prove the algorithm to be correct.
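Since the paper's completeness argument rests on d-separation, here is a compact sketch of the standard moralization test for it (the burglary DAG and all names are invented): X and Y are d-separated by Z exactly when they are disconnected in the moralized ancestral graph after removing Z.

    parents = {                      # a toy DAG, edges parent -> child
        "alarm": ["burglary", "earthquake"],
        "call":  ["alarm"],
    }

    def ancestors(nodes):
        seen, stack = set(), list(nodes)
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(parents.get(n, []))
        return seen

    def d_separated(xs, ys, zs):
        keep = ancestors(set(xs) | set(ys) | set(zs))
        # Moralize: undirected parent-child edges, plus edges marrying
        # co-parents of a common child; restrict to the ancestral graph.
        adj = {n: set() for n in keep}
        for child in keep:
            ps = [p for p in parents.get(child, []) if p in keep]
            for p in ps:
                adj[p].add(child)
                adj[child].add(p)
            for i, p in enumerate(ps):
                for q in ps[i + 1:]:
                    adj[p].add(q)
                    adj[q].add(p)
        # Drop the conditioning set, then test reachability from xs to ys.
        stack = [x for x in xs if x not in set(zs)]
        seen = set()
        while stack:
            n = stack.pop()
            if n in seen or n in zs:
                continue
            seen.add(n)
            stack.extend(adj[n])
        return not (seen & set(ys))

    print(d_separated(["burglary"], ["earthquake"], []))         # True
    print(d_separated(["burglary"], ["earthquake"], ["alarm"]))  # False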