Results 1 - 10 of 279
Bayesian Network Classifiers
1997
Cited by 796 (20 self)
"... Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly ..."
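The naive Bayes classifier this abstract refers to assumes all features are conditionally independent given the class, so the joint likelihood factors into per-feature terms. A minimal sketch of that idea for discrete features, with Laplace smoothing (the toy weather data is invented for illustration, not from the paper):

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal discrete naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.prior = Counter(y)                 # class counts
        self.counts = defaultdict(Counter)      # (class, feature idx) -> value counts
        self.n_features = len(X[0])
        for xi, yi in zip(X, y):
            for j, v in enumerate(xi):
                self.counts[(yi, j)][v] += 1
        # distinct values seen per feature, for the smoothing denominator
        self.values = [set(x[j] for x in X) for j in range(self.n_features)]
        return self

    def predict(self, x):
        def score(c):
            # log P(c) + sum_j log P(x_j | c), smoothed
            s = math.log(self.prior[c] / sum(self.prior.values()))
            for j, v in enumerate(x):
                num = self.counts[(c, j)][v] + 1
                den = self.prior[c] + len(self.values[j])
                s += math.log(num / den)
            return s
        return max(self.classes, key=score)

# invented toy data: (outlook, windy) -> play?
X = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"),
     ("rain", "yes"), ("overcast", "no")]
y = ["yes", "no", "yes", "no", "yes"]
clf = NaiveBayes().fit(X, y)
print(clf.predict(("sunny", "no")))   # -> "yes" on this toy data
```

The "strong independence assumption" is exactly the per-feature product inside `score`; the Bayesian-network classifiers the paper studies relax it by letting features depend on each other.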
Stochastic Dynamic Programming with Factored Representations
1997
Cited by 189 (10 self)
"... Markov decision processes (MDPs) have proven to be popular models for decision-theoretic planning, but standard dynamic programming algorithms for solving MDPs rely on explicit, state-based specifications and computations. To alleviate the combinatorial problems associated with such methods, we propose new representational and computational techniques for MDPs that exploit certain types of problem structure. We use dynamic Bayesian networks (with decision trees representing the local families of conditional probability distributions) to represent stochastic actions in an MDP, together with a ..."
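The DBN representation described above factors the transition model so that each next-state variable has its own small conditional distribution over a few parent variables, instead of one monolithic state-to-state table. A sketch of a factored stochastic transition in that style (the domain, variables, and probabilities are invented for illustration):

```python
import random

# Toy factored state with two boolean variables; each variable's next value
# is sampled from its own local conditional distribution, as in a DBN.
def sample_transition(state, action, rng=None):
    rng = rng or random.Random(0)   # deterministic default, for the demo only
    nxt = {}
    # P(has_coffee' | has_coffee, action): buying succeeds with prob. 0.9
    if action == "buy_coffee":
        nxt["has_coffee"] = state["has_coffee"] or rng.random() < 0.9
    else:
        nxt["has_coffee"] = state["has_coffee"]
    # P(wet' | wet): the robot dries off with prob. 0.3, independent of coffee
    nxt["wet"] = state["wet"] and rng.random() >= 0.3
    return nxt

s = {"has_coffee": False, "wet": True}
print(sample_transition(s, "buy_coffee"))   # e.g. {'has_coffee': True, 'wet': True}
```

The point of the factoring is that each local distribution mentions only its parents, so the representation grows with the number of variables rather than with the (exponential) number of joint states.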
Multiagent Planning with Factored MDPs
In NIPS-14, 2001
Cited by 176 (15 self)
"... We present a principled and efficient planning algorithm for cooperative multiagent dynamic systems. A striking feature of our method is that the coordination and communication between the agents is not imposed, but derived directly from the system dynamics and function approximation architecture. We view the entire multiagent system as a single, large Markov decision process (MDP), which we assume can be represented in a factored way using a dynamic Bayesian network (DBN). The action space of the resulting MDP is the joint action space of the entire set of agents. Our approach ..."
Generating Bayesian Networks from Probability Logic Knowledge Bases
In Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, 1994
Cited by 60 (8 self)
"... We present a method for dynamically generating Bayesian networks from knowledge bases consisting of first-order probability logic sentences. We present a subset of probability logic sufficient for representing the class of Bayesian networks with discrete-valued nodes. We impose constraints on the form of the sentences that guarantee that the knowledge base contains all the probabilistic information necessary to generate a network. We define the concept of d-separation for knowledge bases and prove that a knowledge base with independence conditions defined by d-separation is a complete ..."
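d-separation, which recurs throughout these abstracts, is a purely graphical test for conditional independence in a DAG. A compact sketch using the standard reduction (ancestral subgraph, moralization, then plain graph separation); the three-node graph below is a toy example, not taken from the paper:

```python
from itertools import combinations

def d_separated(dag, x, y, z):
    """Test whether x and y are d-separated given set z.

    dag: dict mapping node -> set of parents. Classic reduction:
    take the ancestral subgraph of {x, y} | z, moralize it, delete z,
    and check whether x can still reach y.
    """
    # 1. ancestral subgraph of x, y, and z
    anc, stack = set(), [x, y, *z]
    while stack:
        n = stack.pop()
        if n not in anc:
            anc.add(n)
            stack.extend(dag.get(n, set()))
    # 2. moralize: parent-child edges plus "married" co-parent edges
    und = {n: set() for n in anc}
    for n in anc:
        parents = dag.get(n, set()) & anc
        for p in parents:
            und[n].add(p); und[p].add(n)
        for a, b in combinations(parents, 2):
            und[a].add(b); und[b].add(a)
    # 3. delete z, then test reachability x -> y
    seen, stack = set(), [x]
    while stack:
        n = stack.pop()
        if n == y:
            return False          # still connected: not d-separated
        if n in seen or n in z:
            continue
        seen.add(n)
        stack.extend(und[n] - seen)
    return True

# toy collider a -> c <- b: a, b marginally independent,
# but conditioning on the collider c makes them dependent
dag = {"a": set(), "b": set(), "c": {"a", "b"}}
print(d_separated(dag, "a", "b", set()))    # True: path blocked at collider
print(d_separated(dag, "a", "b", {"c"}))    # False: conditioning opens it
```

The collider behavior shown at the bottom is the hallmark of d-separation: observing a common effect couples its otherwise independent causes.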
Efficient Solution Algorithms for Factored MDPs
2003
Cited by 172 (3 self)
"... This paper addresses the problem of planning under uncertainty in large Markov Decision Processes (MDPs). Factored MDPs represent a complex state space using state variables and the transition model using a dynamic Bayesian network. This representation often allows an exponential reduction in the re ..."
Building Classifiers using Bayesian Networks
In Proceedings of the Thirteenth National Conference on Artificial Intelligence, 1996
Cited by 92 (2 self)
"... Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we examine and evaluate approaches for inducing classifiers from data, based on recent results in the theory of learning Bayesian networks. Bayesian networks are factored representations of probability distributions that generalize the naive Bayes ..."
Using graphical models and genomic expression data to statistically validate models of genetic regulatory networks
In Pacific Symposium on Biocomputing, 2001
Cited by 171 (7 self)
"... We propose a model-driven approach for analyzing genomic expression data that permits genetic regulatory networks to be represented in a biologically interpretable computational form. Our models permit latent variables capturing unobserved factors, describe arbitrarily complex (more than pairwise) ..."
Factorization of Quantum Density Matrices According to Bayesian and Markov Networks
2008
Cited by 2 (1 self)
"... We show that any quantum density matrix can be represented by a Bayesian network (a directed acyclic graph), and also by a Markov network (an undirected graph). We show that any Bayesian or Markov net that represents a density matrix is logically equivalent to a set of conditional independencies (symmetries) satisfied by the density matrix. We show that the d-separation theorems of classical Bayesian and Markov networks generalize in a simple and natural way to quantum physics. The quantum d-separation theorems are shown to be closely connected to quantum entanglement. We show that the graphical ..."
An Algorithm to Learning Bayesian Network
"... Bayesian network is a graphical model appropriate for representing and analyzing uncertainty, knowledge and beliefs contained implicitly in the data. In this paper we propose the XPC algorithm for structural learning in Bayesian networks using decomposable metrics in families (a variable and its parents) in order to obtain the maximum-score network. The concept of conditional independence, based on Pearl's d-separation, is used to identify conflicting regions, where the existence of some edges depends on the non-existence of others. Hence, the user is required to choose which edges ..."
Augmented Bayesian Networks for Representing Information Equivalences and Extensions to the PC Learning Algorithm
2007
"... Data containing deterministic relations entail conditional independencies that cannot be represented by a faithful graph, due to violations of the intersection condition. Such data cannot be handled by current constraint-based learning algorithms. More generally, these violations are characterized ... of the graph is re-established by using the generalized definition of the d-separation criterion, called Deq-separation, and by limiting the conditional independencies that are graphically described with the simplicity condition. Based on this, an extension to the PC learning algorithm is developed that allows ..."