Results 11–20 of 149
A Guide to the Literature on Learning Probabilistic Networks From Data
, 1996
"... This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks. Connections are drawn between the statistical, neural network, and uncertainty communities, and between the ..."
Abstract

Cited by 172 (0 self)
This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks. Connections are drawn between the statistical, neural network, and uncertainty communities, and between the different methodological communities, such as Bayesian, description length, and classical statistics. Basic concepts for learning and Bayesian networks are introduced and methods are then reviewed. Methods are discussed for learning parameters of a probabilistic network, for learning the structure, and for learning hidden variables. The presentation avoids formal definitions and theorems, as these are plentiful in the literature, and instead illustrates key concepts with simplified examples. Keywords: Bayesian networks, graphical models, hidden variables, learning, learning structure, probabilistic networks, knowledge discovery.
Causal Diagrams For Empirical Research
"... The primary aim of this paper is to show how graphical models can be used as a mathematical language for integrating statistical and subjectmatter information. In particular, the paper develops a principled, nonparametric framework for causal inference, in which diagrams are queried to determine if ..."
Abstract

Cited by 172 (35 self)
The primary aim of this paper is to show how graphical models can be used as a mathematical language for integrating statistical and subject-matter information. In particular, the paper develops a principled, nonparametric framework for causal inference, in which diagrams are queried to determine if the assumptions available are sufficient for identifying causal effects from nonexperimental data. If so, the diagrams can be queried to produce mathematical expressions for causal effects in terms of observed distributions; otherwise, the diagrams can be queried to suggest additional observations or auxiliary experiments from which the desired inferences can be obtained. Key words: causal inference, graph models, interventions, treatment effect.
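The identification results such diagrams support can be made concrete with the back-door adjustment formula, P(y | do(x)) = sum_z P(y | x, z) P(z), valid when a set Z blocks every back-door path from X to Y. A minimal sketch; all numbers and variable names are hypothetical:

```python
# Back-door adjustment with a single binary confounder Z assumed to block
# all back-door paths from X to Y. Hypothetical numbers for illustration.

p_z = {0: 0.6, 1: 0.4}                                  # P(Z = z)
p_y_given_xz = {(1, 0): 0.30, (1, 1): 0.70,             # P(Y = 1 | X = x, Z = z)
                (0, 0): 0.10, (0, 1): 0.50}

def p_y_do_x(x):
    """P(Y = 1 | do(X = x)) = sum_z P(Y = 1 | X = x, Z = z) * P(Z = z)."""
    return sum(p_y_given_xz[(x, z)] * p_z[z] for z in p_z)

print(p_y_do_x(1))                # 0.30 * 0.6 + 0.70 * 0.4 ≈ 0.46
print(p_y_do_x(1) - p_y_do_x(0))  # average causal effect ≈ 0.20
```

Note that the same conditional probabilities read off observational data yield an interventional quantity once the diagram licenses the adjustment.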
A Bayesian approach to learning Bayesian networks with local structure
 In Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence
, 1997
"... Recently several researchers have investigated techniques for using data to learn Bayesian networks containing compact representations for the conditional probability distributions (CPDs) stored at each node. The majority of this work has concentrated on using decisiontree representations for the C ..."
Abstract

Cited by 166 (13 self)
Recently several researchers have investigated techniques for using data to learn Bayesian networks containing compact representations for the conditional probability distributions (CPDs) stored at each node. The majority of this work has concentrated on using decision-tree representations for the CPDs. In addition, researchers typically apply non-Bayesian (or asymptotically Bayesian) scoring functions such as MDL to evaluate the goodness of fit of networks to the data. In this paper we investigate a Bayesian approach to learning Bayesian networks that contain the more general decision-graph representations of the CPDs. First, we describe how to evaluate the posterior probability, that is, the Bayesian score, of such a network, given a database of observed cases. Second, we describe various search spaces that can be used, in conjunction with a scoring function and a search procedure, to identify one or more high-scoring networks. Finally, we present an experimental evaluation of the search spaces, using a greedy algorithm and a Bayesian scoring function.
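The Bayesian score mentioned here has a closed form for table CPDs. The sketch below is not the paper's decision-graph machinery; it computes the per-node log marginal likelihood under uniform Dirichlet priors, the Cooper-Herskovits (K2) metric, a special case of the BDe family:

```python
import math
from collections import defaultdict

def k2_log_score(data, child, parents, arity):
    """Log marginal likelihood of one node's CPT under uniform Dirichlet
    priors (the Cooper-Herskovits / K2 metric, a special case of BDe).
    `data` is a list of dicts mapping variable name -> value."""
    r = arity[child]
    counts = defaultdict(lambda: defaultdict(int))  # parent config -> child value -> count
    for row in data:
        counts[tuple(row[p] for p in parents)][row[child]] += 1
    score = 0.0
    for njk in counts.values():
        nij = sum(njk.values())
        score += math.lgamma(r) - math.lgamma(nij + r)  # (r-1)! / (N_ij + r - 1)!
        for n in njk.values():
            score += math.lgamma(n + 1)                 # prod_k N_ijk!
    return score

# Y copies X, so the structure with the X -> Y edge should score higher.
data = [{'X': i % 2, 'Y': i % 2} for i in range(20)]
arity = {'X': 2, 'Y': 2}
print(k2_log_score(data, 'Y', ['X'], arity) > k2_log_score(data, 'Y', [], arity))  # True
```

Because the score decomposes over nodes, a search procedure can re-score only the nodes whose parent sets a candidate move changes.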
Learning Bayesian Networks is NP-Hard
, 1994
"... Algorithms for learning Bayesian networks from data have two components: a scoring metric and a search procedure. The scoring metric computes a score reflecting the goodnessoffit of the structure to the data. The search procedure tries to identify network structures with high scores. Heckerman et ..."
Abstract

Cited by 130 (2 self)
Algorithms for learning Bayesian networks from data have two components: a scoring metric and a search procedure. The scoring metric computes a score reflecting the goodness of fit of the structure to the data. The search procedure tries to identify network structures with high scores. Heckerman et al. (1994) introduced a Bayesian metric, called the BDe metric, that computes the relative posterior probability of a network structure given data. They show that the metric has a property desirable for inferring causal structure from data. In this paper, we show that the problem of deciding whether there is a Bayesian network, among those in which each node has at most k parents, that has a relative posterior probability greater than a given constant is NP-complete when the BDe metric is used.
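The metric-plus-search decomposition can be sketched for a single node's parent set. In this illustrative stand-in (not the paper's construction), BIC substitutes for the BDe metric (both are decomposable), and a greedy loop substitutes for the heuristic search procedures whose worst case the NP-completeness result addresses:

```python
import math
from collections import Counter

def bic_node_score(data, node, parents):
    """BIC score of one node for a candidate parent set: maximized
    log-likelihood minus a complexity penalty (all variables binary here)."""
    n = len(data)
    joint = Counter(tuple(r[v] for v in (node,) + parents) for r in data)
    marg = Counter(tuple(r[v] for v in parents) for r in data)
    ll = sum(c * math.log(c / marg[k[1:]]) for k, c in joint.items())
    return ll - 0.5 * math.log(n) * 2 ** len(parents)

def greedy_parents(data, node, candidates):
    """Greedily add parents while some single addition improves the score:
    the search component of score-based structure learning."""
    parents, best = (), bic_node_score(data, node, ())
    improved = True
    while improved:
        improved = False
        for c in candidates:
            if c not in parents:
                s = bic_node_score(data, node, parents + (c,))
                if s > best:
                    best, parents, improved = s, parents + (c,), True
    return parents

# Y copies X exactly; W is irrelevant, so the penalty keeps it out.
data = [{'X': x, 'W': w, 'Y': x} for x in (0, 1) for w in (0, 1) for _ in range(5)]
print(greedy_parents(data, 'Y', ['X', 'W']))  # ('X',)
```

Greedy search like this is exactly the kind of heuristic the hardness result motivates: exhaustive search over parent sets is infeasible once k and the variable count grow.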
Learning Equivalence Classes Of Bayesian Network Structures
, 1996
"... Approaches to learning Bayesian networks from data typically combine a scoring metric with a heuristic search procedure. Given aBayesian network structure, many of the scoring metrics derived in the literature return a score for the entire equivalence class to which the structure belongs. When ..."
Abstract

Cited by 129 (1 self)
Approaches to learning Bayesian networks from data typically combine a scoring metric with a heuristic search procedure. Given a Bayesian network structure, many of the scoring metrics derived in the literature return a score for the entire equivalence class to which the structure belongs. When using such a metric, it is appropriate for the heuristic search algorithm to search over equivalence classes of Bayesian networks as opposed to individual structures. We present the general formulation of a search space for which the states of the search correspond to equivalence classes of structures. Using this space, any one of a number of heuristic search algorithms can easily be applied. We compare greedy search performance in the proposed search space to greedy search performance in a search space for which the states correspond to individual Bayesian network structures.
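Score equivalence, the property motivating this search space, is easy to demonstrate: X -> Y and Y -> X belong to the same equivalence class, so a score-equivalent metric must rate them identically. A small sketch using the maximized log-likelihood (one such metric) with made-up counts:

```python
import math
from collections import Counter

def loglik_score(data, structure):
    """Decomposable log-likelihood score of a DAG: sum over nodes of
    log Phat(node | parents) under empirical maximum-likelihood estimates.
    `structure` maps each node to a tuple of its parent names."""
    score = 0.0
    for node, parents in structure.items():
        joint = Counter(tuple(r[v] for v in (node,) + parents) for r in data)
        marg = Counter(tuple(r[v] for v in parents) for r in data)
        for key, c in joint.items():
            score += c * math.log(c / marg[key[1:]])
    return score

# X -> Y and Y -> X encode the same independence constraints, so any
# score-equivalent metric must assign them identical scores.
data = [{'X': 0, 'Y': 0}] * 6 + [{'X': 0, 'Y': 1}] * 2 + \
       [{'X': 1, 'Y': 0}] * 1 + [{'X': 1, 'Y': 1}] * 5
s1 = loglik_score(data, {'X': (), 'Y': ('X',)})
s2 = loglik_score(data, {'Y': (), 'X': ('Y',)})
print(abs(s1 - s2) < 1e-9)  # True: both equal the empirical joint log-likelihood
```

Since every DAG in an equivalence class ties on such a metric, searching over classes avoids wasting moves on score-neutral edge reversals.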
Machine-Learning Research: Four Current Directions
"... Machine Learning research has been making great progress in many directions. This article summarizes four of these directions and discusses some current open problems. The four directions are (a) improving classification accuracy by learning ensembles of classifiers, (b) methods for scaling up super ..."
Abstract

Cited by 114 (1 self)
Machine Learning research has been making great progress in many directions. This article summarizes four of these directions and discusses some current open problems. The four directions are (a) improving classification accuracy by learning ensembles of classifiers, (b) methods for scaling up supervised learning algorithms, (c) reinforcement learning, and (d) learning complex stochastic models.
Learning Bayesian Networks from Data: An Information-Theory Based Approach
"... This paper provides algorithms that use an informationtheoretic analysis to learn Bayesian network structures from data. Based on our threephase learning framework, we develop efficient algorithms that can effectively learn Bayesian networks, requiring only polynomial numbers of conditional indepe ..."
Abstract

Cited by 92 (5 self)
This paper provides algorithms that use an information-theoretic analysis to learn Bayesian network structures from data. Based on our three-phase learning framework, we develop efficient algorithms that can effectively learn Bayesian networks, requiring only a polynomial number of conditional independence (CI) tests in typical cases. We provide precise conditions that specify when these algorithms are guaranteed to be correct, as well as empirical evidence (from real-world applications and simulation tests) that demonstrates that these systems work efficiently and reliably in practice.
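A minimal sketch of the CI-test primitive such algorithms rely on, using empirical conditional mutual information. The three-phase framework itself is not reproduced, and real implementations compare a test statistic to a threshold rather than checking for exactly zero:

```python
import math
from collections import Counter

def cond_mutual_info(data, x, y, cond):
    """Empirical conditional mutual information I(X; Y | Z) in nats.
    A CI-test-based learner treats values near 0 as evidence of
    conditional independence."""
    n = len(data)
    cxyz = Counter((r[x], r[y], tuple(r[c] for c in cond)) for r in data)
    cxz = Counter((r[x], tuple(r[c] for c in cond)) for r in data)
    cyz = Counter((r[y], tuple(r[c] for c in cond)) for r in data)
    cz = Counter(tuple(r[c] for c in cond) for r in data)
    return sum((c / n) * math.log(c * cz[zv] / (cxz[(xv, zv)] * cyz[(yv, zv)]))
               for (xv, yv, zv), c in cxyz.items())

# X and Y both copy Z: dependent marginally, independent given Z.
data = [{'Z': z, 'X': z, 'Y': z} for z in (0, 1) for _ in range(5)]
print(cond_mutual_info(data, 'X', 'Y', []))     # log 2 ≈ 0.693
print(cond_mutual_info(data, 'X', 'Y', ['Z']))  # 0.0
```

The polynomial bound on the number of such tests is what distinguishes this family from naive constraint-based search, which can require exponentially many.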
Markovian Models for Sequential Data
, 1996
"... Hidden Markov Models (HMMs) are statistical models of sequential data that have been used successfully in many machine learning applications, especially for speech recognition. Furthermore, in the last few years, many new and promising probabilistic models related to HMMs have been proposed. We firs ..."
Abstract

Cited by 84 (2 self)
Hidden Markov Models (HMMs) are statistical models of sequential data that have been used successfully in many machine learning applications, especially for speech recognition. Furthermore, in the last few years, many new and promising probabilistic models related to HMMs have been proposed. We first summarize the basics of HMMs, and then review several recent related learning algorithms and extensions of HMMs, including in particular hybrids of HMMs with artificial neural networks, Input-Output HMMs (which are conditional HMMs using neural networks to compute probabilities), weighted transducers, variable-length Markov models, and Markov switching state-space models. Finally, we discuss some of the challenges of future research in this very active area.
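The basic HMM computation these models build on is the forward algorithm, which evaluates the probability of an observation sequence by summing over hidden state paths. A self-contained sketch with toy parameters:

```python
def forward(obs, pi, A, B):
    """HMM forward algorithm: P(obs) by dynamic programming over hidden
    state paths. pi[i]: initial prob, A[i][j]: transition, B[i][o]: emission."""
    alpha = [pi[i] * B[i][obs[0]] for i in range(len(pi))]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(len(pi))) * B[j][o]
                 for j in range(len(pi))]
    return sum(alpha)

# Two hidden states, two observation symbols; the numbers are toy values.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]   # A[i][j] = P(next state j | current state i)
B = [[0.9, 0.1], [0.2, 0.8]]   # B[i][o] = P(observe o | state i)
print(forward([0, 1], pi, A, B))  # ≈ 0.209
```

The same recursion, run with rescaling or in log space for long sequences, underlies both likelihood evaluation and the E-step of Baum-Welch training.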
Local Learning in Probabilistic Networks With Hidden Variables
, 1995
"... Probabilistic networks, which provide compact descriptions of complex stochastic relationships among several random variables, are rapidly becoming the tool of choice for uncertain reasoning in artificial intelligence. We show that networks with fixed structure containing hidden variables can be lea ..."
Abstract

Cited by 76 (4 self)
Probabilistic networks, which provide compact descriptions of complex stochastic relationships among several random variables, are rapidly becoming the tool of choice for uncertain reasoning in artificial intelligence. We show that networks with fixed structure containing hidden variables can be learned automatically from data using a gradient-descent mechanism similar to that used in neural networks. We also extend the method to networks with intensionally represented distributions, including networks with continuous variables and dynamic probabilistic networks. Because probabilistic networks provide explicit representations of causal structure, human experts can easily contribute prior knowledge to the training process, thereby significantly improving the learning rate. Adaptive probabilistic networks (APNs) may soon compete directly with neural networks as models in computational neuroscience as well as in industrial and financial applications.
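A toy illustration of the idea, not the paper's method: a two-node network X -> Y with X hidden, fitted by gradient ascent on the observed-data log-likelihood. Numerical gradients are used for brevity (the paper works with analytic ones), and with X hidden only the marginal P(Y=1) is identifiable in a model this small:

```python
import math

def log_lik(params, ys):
    """Log-likelihood of observed Y values with the hidden X summed out."""
    p, a, b = params                      # P(X=1), P(Y=1|X=1), P(Y=1|X=0)
    m = p * a + (1 - p) * b               # P(Y=1) after marginalizing X
    return sum(math.log(m if y == 1 else 1.0 - m) for y in ys)

def ascend(ys, params, lr=0.05, steps=500, eps=1e-6):
    """Gradient ascent with central-difference gradients; parameters are
    clamped to (0.01, 0.99) to keep every probability valid."""
    params = list(params)
    for _ in range(steps):
        grad = []
        for i in range(len(params)):
            hi, lo = params[:], params[:]
            hi[i] += eps
            lo[i] -= eps
            grad.append((log_lik(hi, ys) - log_lik(lo, ys)) / (2 * eps))
        params = [min(max(params[i] + lr * grad[i], 0.01), 0.99)
                  for i in range(len(params))]
    return params

ys = [1] * 7 + [0] * 3                    # 70% of the observed cases have Y = 1
p, a, b = ascend(ys, [0.5, 0.6, 0.4])
print(p * a + (1 - p) * b)                # converges to about 0.7
```

The gradient flows through the marginalization over the hidden variable, which is the mechanism that lets fixed-structure networks with unobserved nodes be trained from data.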
Causal independence for probability assessment and inference using Bayesian networks
 IEEE Trans. on Systems, Man and Cybernetics
, 1994
"... ABayesian network is a probabilistic representation for uncertain relationships, which has proven to be useful for modeling realworld problems. When there are many potential causes of a given e ect, however, both probability assessment and inference using a Bayesian network can be di cult. In this ..."
Abstract

Cited by 64 (2 self)
A Bayesian network is a probabilistic representation for uncertain relationships, which has proven to be useful for modeling real-world problems. When there are many potential causes of a given effect, however, both probability assessment and inference using a Bayesian network can be difficult. In this paper, we describe causal independence, a collection of conditional independence assertions and functional relationships that are often appropriate to apply to the representation of the uncertain interactions between causes and effect. We show how the use of causal independence in a Bayesian network can greatly simplify probability assessment as well as probabilistic inference.
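The best-known instance of causal independence is the noisy-OR gate, which needs one parameter per cause instead of a conditional table that is exponential in the number of parents. A minimal sketch with hypothetical numbers:

```python
def noisy_or(leak, strengths, present):
    """Noisy-OR causal independence: each present cause i independently
    produces the effect with probability strengths[i], so only one
    parameter per cause is assessed instead of a full CPT."""
    q = 1.0 - leak                        # prob. effect absent with no causes
    for s, on in zip(strengths, present):
        if on:
            q *= 1.0 - s                  # cause i independently fails to act
    return 1.0 - q                        # P(effect = 1 | causes)

# Three hypothetical causes; the first two present, the third absent.
p = noisy_or(leak=0.01, strengths=[0.8, 0.6, 0.3], present=[1, 1, 0])
print(p)  # 1 - 0.99 * 0.2 * 0.4 ≈ 0.9208
```

With n binary causes, assessment drops from 2^n table entries to n strengths plus a leak, which is the simplification the paper exploits for both elicitation and inference.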