Results 1–6 of 6
Parameter adjustment in Bayes networks. The generalized noisy OR-gate
 IN PROCEEDINGS OF THE 9TH CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE
, 1993
"... Spiegelhalter and Lauritzen [15] studied sequential learning in Bayesian networks and proposed three models for the representation of conditional probabilities. A forth model, shown here, assumes that the parameter distribution is given by a product of Gaussian functions and updates them from ..."
Abstract

Cited by 70 (12 self)
Spiegelhalter and Lauritzen [15] studied sequential learning in Bayesian networks and proposed three models for the representation of conditional probabilities. A fourth model, shown here, assumes that the parameter distribution is given by a product of Gaussian functions and updates them from the π and λ messages of evidence propagation. We also generalize the noisy OR-gate for multivalued variables, develop an algorithm to compute probability in time proportional to the number of parents (even in networks with loops), and apply the learning model to this gate.
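The linear-time computation mentioned in the abstract follows from the factored form of the noisy OR-gate. A minimal sketch for the binary case, with illustrative parameter names (the abstract's multivalued generalization is not reproduced here):

```python
def noisy_or(inhibitions, active, leak=0.0):
    """P(child = 1 | parents) for a binary noisy OR-gate.

    inhibitions[i]: probability that parent i's influence is inhibited
                    (i.e. 1 - c_i, where c_i is the causal strength of link i).
    active[i]:      1 if parent i is true, else 0.
    leak:           probability the child is true with no active parent.

    Runs in time linear in the number of parents, since each active
    parent contributes one independent factor to P(child = 0 | parents).
    """
    p_false = 1.0 - leak
    for q, a in zip(inhibitions, active):
        if a:
            p_false *= q  # each active parent may independently cause the child
    return 1.0 - p_false

# Example: two active parents with causal strengths 0.9 and 0.8, no leak:
# P = 1 - (0.1 * 0.2) = 0.98
print(noisy_or([0.1, 0.2], [1, 1]))
```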
Update rules for parameter estimation in Bayesian networks
, 1997
"... This paper reexamines the problem of parameter estimation in Bayesian networks with missing values and hidden variables from the perspective of recent work in online learning [12]. We provide a unified framework for parameter estimation that encompasses both online learning, where the model is co ..."
Abstract

Cited by 53 (2 self)
This paper reexamines the problem of parameter estimation in Bayesian networks with missing values and hidden variables from the perspective of recent work in online learning [12]. We provide a unified framework for parameter estimation that encompasses both online learning, where the model is continuously adapted to new data cases as they arrive, and the more traditional batch learning, where a pre-accumulated set of samples is used in a one-time model selection process. In the batch case, our framework encompasses both the gradient projection algorithm [2, 3] and the EM algorithm [14] for Bayesian networks. The framework also leads to new online and batch parameter update schemes, including a parameterized version of EM. We provide both empirical and theoretical results indicating that parameterized EM allows faster convergence to the maximum likelihood parameters than does standard EM.
1 Introduction
Over the past few years, there has been a growing interest in the problem of le...
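The parameterized EM mentioned above can be read as a damped EM step. A hedged sketch for a single multinomial CPT column, assuming the update is a convex combination of the current parameters and the standard EM estimate (the learning rate eta is the added parameter; names are illustrative):

```python
def parameterized_em_step(theta, expected_counts, eta):
    """One parameterized EM update for a multinomial CPT column.

    theta:           current parameter vector (sums to 1).
    expected_counts: expected sufficient statistics from the E-step.
    eta:             step size; eta = 1 recovers the standard EM update,
                     smaller values damp the step toward the old parameters.
    """
    total = sum(expected_counts)
    theta_em = [c / total for c in expected_counts]  # standard M-step estimate
    return [(1 - eta) * t + eta * m for t, m in zip(theta, theta_em)]

# Example: with expected counts [3, 1], standard EM (eta = 1) moves
# [0.5, 0.5] to [0.75, 0.25]; eta = 0.5 stops halfway at [0.625, 0.375].
print(parameterized_em_step([0.5, 0.5], [3.0, 1.0], 0.5))
```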
Learning Probabilistic Networks
 THE KNOWLEDGE ENGINEERING REVIEW
, 1998
"... A probabilistic network is a graphical model that encodes probabilistic relationships between variables of interest. Such a model records qualitative influences between variables in addition to the numerical parameters of the probability distribution. As such it provides an ideal form for combini ..."
Abstract

Cited by 36 (1 self)
A probabilistic network is a graphical model that encodes probabilistic relationships between variables of interest. Such a model records qualitative influences between variables in addition to the numerical parameters of the probability distribution. As such, it provides an ideal form for combining data with prior knowledge, which might be limited solely to experience of the influences between some of the variables of interest. In this paper, we first show how data can be used to revise initial estimates of the parameters of a model. We then progress to showing how the structure of the model can be revised as data is obtained. Techniques for learning with incomplete data are also covered.
Gradient descent training of Bayesian networks
 Proceedings of the Fifth European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU
, 1999
"... As shown by Russel et al., 1995 [7], Bayesian networks can be equipped with a gradient descent learning method similar to the training method for neural networks. The calculation of the required gradients can be performed locally along with propagation. We review how this can be done, and we show h ..."
Abstract

Cited by 10 (0 self)
As shown by Russell et al., 1995 [7], Bayesian networks can be equipped with a gradient descent learning method similar to the training method for neural networks. The calculation of the required gradients can be performed locally along with propagation. We review how this can be done, and we show how the gradient descent approach can be used for various tasks such as tuning and training with training sets of definite as well as non-definite classifications. We introduce tools for resistance and damping to guide the direction of convergence, and we use them for a new adaptation method which can also handle situations where parameters in the network covary.
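A gradient step on a conditional-probability column must keep the column a valid distribution. A minimal illustrative sketch, using simple renormalization as a stand-in for a true simplex projection (not the authors' exact method; names are illustrative):

```python
def gradient_step(theta, grad, lr):
    """One projected gradient-ascent step on a CPT column.

    theta: current parameter vector (a probability distribution).
    grad:  gradient of the log-likelihood with respect to each entry.
    lr:    learning rate.

    After the unconstrained step, entries are clipped to stay positive
    and renormalized so the column still sums to 1.
    """
    updated = [max(t + lr * g, 1e-12) for t, g in zip(theta, grad)]
    s = sum(updated)
    return [u / s for u in updated]

# Example: a step along gradient [1, -1] from the uniform column [0.5, 0.5]
# shifts mass toward the first entry while preserving normalization.
print(gradient_step([0.5, 0.5], [1.0, -1.0], 0.1))
```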
A Method of Learning Implication Networks from Empirical Data: Algorithm and Monte-Carlo Simulation Based Validation
 IEEE Transactions on Knowledge and Data Engineering
, 1997
"... This paper describes an algorithmic means for inducing implication networks from empirical data samples. The induced network enables efficient inferences about the values of network nodes if certain observations are made. This implication induction method is approximate in nature as probablistic net ..."
Abstract

Cited by 8 (3 self)
This paper describes an algorithmic means for inducing implication networks from empirical data samples. The induced network enables efficient inferences about the values of network nodes if certain observations are made. This implication induction method is approximate in nature, as probabilistic network requirements are relaxed in the construction of dependence relationships based on statistical testing. In order to examine the effectiveness and validity of the induction method, several Monte-Carlo simulations were conducted in which theoretical Bayesian networks were used to generate empirical data samples, some of which were used to induce implication relations whereas others were used to verify the results of evidential reasoning with the induced networks. The values in the implication networks were predicted by applying a modified version of the Dempster-Shafer belief-updating scheme. The results of the predictions were, furthermore, compared to the ones generated by Pearl's stochastic ...
Challenge: Where is the Impact of Bayesian Networks in Learning?
 In Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence
, 1997
"... Bayesian networks are graphical representations of probability distributions. Over the last decade, these representations have become the method of choice for representation of uncertainly in artificial intelligence. Today, they play a crucial role in modern expert systems, diagnosis engines, and de ..."
Abstract

Cited by 8 (3 self)
Bayesian networks are graphical representations of probability distributions. Over the last decade, these representations have become the method of choice for the representation of uncertainty in artificial intelligence. Today, they play a crucial role in modern expert systems, diagnosis engines, and decision support systems. In recent years, there has been much interest in learning Bayesian networks from data. Learning such models is desirable simply because there is a wide array of off-the-shelf tools that can apply the learned models as described above. Practitioners also claim that adaptive Bayesian networks have advantages in their own right as a nonparametric method for density estimation, data analysis, pattern classification, and modeling. Among the reasons cited we find: their semantic clarity and understandability by humans, the ease of acquisition and incorporation of prior knowledge, the ease of integration with optimal decision-making methods, the possibility of causal interp...