Results 1–10 of 18
Sequential Update of Bayesian Network Structure
In Proc. 13th Conference on Uncertainty in Artificial Intelligence (UAI'97), 1997
"... There is an obvious need for improving the performance and accuracy of a Bayesian network as new data is observed. Because of errors in model construction and changes in the dynamics of the domains, we cannot afford to ignore the information in new data. While sequential update of parameters for a f ..."
Abstract

Cited by 45 (4 self)
There is an obvious need for improving the performance and accuracy of a Bayesian network as new data is observed. Because of errors in model construction and changes in the dynamics of the domains, we cannot afford to ignore the information in new data. While sequential update of parameters for a fixed structure can be accomplished using standard techniques, sequential update of network structure is still an open problem. In this paper, we investigate sequential update of Bayesian networks where both parameters and structure are expected to change. We introduce a new approach that allows for the flexible manipulation of the tradeoff between the quality of the learned networks and the amount of information that is maintained about past observations. We formally describe our approach, including the necessary modifications to the scoring functions for learning Bayesian networks, evaluate its effectiveness through an empirical study, and extend it to the case of missing data.
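To make the contrast concrete, the "standard techniques" for sequential update of parameters under a fixed structure can be sketched as a Dirichlet-multinomial posterior update. This is a minimal illustration with hypothetical names, not the paper's code:

    from collections import defaultdict

    class SequentialCPT:
        """Online update of one node's conditional probability table (CPT).

        Dirichlet pseudo-counts make each estimate of P(value | parents) a
        posterior mean that absorbs observations one at a time, so past
        data never has to be revisited.
        """

        def __init__(self, prior=1.0):
            self.prior = prior                      # Dirichlet hyperparameter
            self.counts = defaultdict(lambda: defaultdict(float))
            self.values = set()                     # states seen for this node

        def observe(self, parent_state, value):
            """Absorb a single new observation."""
            self.values.add(value)
            self.counts[parent_state][value] += 1.0

        def prob(self, parent_state, value):
            """Posterior-mean estimate of P(value | parent_state)."""
            row = self.counts[parent_state]
            total = sum(row.values()) + self.prior * max(len(self.values), 1)
            return (row.get(value, 0.0) + self.prior) / total

    # Usage: stream observations of node "Sprinkler" given parent "Rain".
    cpt = SequentialCPT()
    for rain, sprinkler in [("yes", "off"), ("yes", "off"), ("no", "on")]:
        cpt.observe(rain, sprinkler)
    print(cpt.prob("yes", "off"))   # -> 0.75 with prior = 1.0

Updating structure, rather than just these counts, is exactly the open problem the paper addresses.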
Learning Causal Networks from Data: A survey and a new algorithm for recovering possibilistic causal networks
1997
"... Introduction Reasoning in terms of cause and effect is a strategy that arises in many tasks. For example, diagnosis is usually defined as the task of finding the causes (illnesses) from the observed effects (symptoms). Similarly, prediction can be understood as the description of a future plausible ..."
Abstract

Cited by 19 (5 self)
Introduction: Reasoning in terms of cause and effect is a strategy that arises in many tasks. For example, diagnosis is usually defined as the task of finding the causes (illnesses) from the observed effects (symptoms). Similarly, prediction can be understood as the description of a future plausible situation where observed effects will be in accordance with the known causal structure of the phenomenon being studied. Causal models are a summary of the knowledge about a phenomenon expressed in terms of causation. Many areas of the applied sciences (econometrics, biomedicine, engineering, etc.) have used such a term to refer to models that yield explanations, allow for prediction and facilitate planning and decision making. Causal reasoning can be viewed as inference guided by a causation theory. That kind of inference can be further specialised into induc… (This work has been partially supported by the Spanish Comisión Interministerial de Ciencia y Tecnología, Project CICYT TIC96-0878.)
Theory refinement of Bayesian networks with hidden variables
In Machine Learning: Proceedings of the International Conference, 1998
"... Copyright by ..."
Knowing and Reasoning in College: Gender-Related Patterns in Students' Intellectual Development
1992
"... Modelling a decision support system for ..."
Integrating Bayesian Networks into Knowledge-Intensive CBR
Proceedings of AAAI Workshop on CBR Integration, 1998
"... In this paper we propose an approach to knowledge intensive CBR, where explanations are generated from a domain model consisting partly of a semantic network and partly of a Bayesian network (BN). The BN enables learning within this domain model based on the observed data. The domain model is used t ..."
Abstract

Cited by 10 (5 self)
In this paper we propose an approach to knowledge-intensive CBR, where explanations are generated from a domain model consisting partly of a semantic network and partly of a Bayesian network (BN). The BN enables learning within this domain model based on the observed data. The domain model is used to focus the retrieval and reuse of past cases, as well as the indexing when learning a new case. Essentially, the BN-powered submodel works in parallel with the semantic network model to generate a statistically sound contribution to case indexing, retrieval and explanation.
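As a rough illustration of the two submodels working in parallel, case retrieval could blend a semantic-network similarity with a BN posterior. Every name below is hypothetical, since the paper's actual interfaces are not given here:

    def rank_cases(query, cases, semantic_sim, bn_posterior, alpha=0.5):
        """Blend two retrieval signals, mirroring the parallel-submodel idea:
        semantic_sim(query, case) comes from the semantic network, and
        bn_posterior(query, case) is the BN's belief that the case applies.
        """
        scored = [
            (alpha * semantic_sim(query, c) + (1.0 - alpha) * bn_posterior(query, c), c)
            for c in cases
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [c for _, c in scored]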
An approach to online Bayesian learning from multiple data streams
In Proceedings of Workshop on Mobile and Distributed Data Mining, PKDD '01, 2001
"... Abstract. We present a collective approach to mine Bayesian networks from distributed heterogenous weblog data streams. In this approach we first learn a local Bayesian network at each site using the local data. Then each site identifies the observations that are most likely to be evidence of coupl ..."
Abstract

Cited by 9 (4 self)
We present a collective approach to mine Bayesian networks from distributed heterogeneous weblog data streams. In this approach we first learn a local Bayesian network at each site using the local data. Then each site identifies the observations that are most likely to be evidence of coupling between local and non-local variables and transmits a subset of these observations to a central site. Another Bayesian network is learnt at the central site using the data transmitted from the local sites. The local and central Bayesian networks are combined to obtain a collective Bayesian network that models the entire data. This technique is then suitably adapted to an online Bayesian learning technique, where the network parameters are updated sequentially based on new data from multiple streams. We applied this technique to mine multiple data streams where data centralization is difficult because of large response time and scalability issues. This approach is particularly suitable for mining applications with distributed sources of data streams in an environment with nonzero communication cost (e.g. wireless networks). Experimental results and theoretical justification that demonstrate the feasibility of our approach are presented.
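The four steps of the collective approach, as described, could be arranged roughly as follows; all function names are placeholders standing in for the paper's components:

    def collective_learn(site_datasets, learn_bn, likely_coupled, combine):
        """Sketch of the described protocol under stated assumptions:
        learn_bn(data) fits a Bayesian network, likely_coupled(bn, data)
        selects observations that look like evidence of coupling between
        local and non-local variables, and combine(...) merges the models.
        """
        local_bns, transmitted = [], []
        for data in site_datasets:
            bn = learn_bn(data)                           # step 1: local learning
            local_bns.append(bn)
            transmitted.extend(likely_coupled(bn, data))  # step 2: transmit subset
        central_bn = learn_bn(transmitted)                # step 3: central learning
        return combine(local_bns, central_bn)             # step 4: collective model

Transmitting only the likely-coupled subset is what keeps the protocol viable when communication cost is nonzero.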
Incremental Methods for Bayesian Network Learning
Department de …, 1999
"... In this work we analyze the most relevant, in our opinion, algorithms for learning Bayesian Networks. We analyze methods that use goodnessoffit tests between tentative networks and data. Within this sort of learning algorithms we distinguish batch and incremental methods. Finally, we propose a sys ..."
Abstract

Cited by 6 (1 self)
In this work we analyze the most relevant, in our opinion, algorithms for learning Bayesian networks. We analyze methods that use goodness-of-fit tests between tentative networks and data. Within this sort of learning algorithms we distinguish batch and incremental methods. Finally, we propose a system, called BANDOLER, that incrementally learns Bayesian networks from data and prior knowledge. The incremental fashion of the system allows the learning strategy to be modified and new prior knowledge to be introduced during the learning process in the light of the already learnt structure. Introduction: The aim of this work is twofold. On the one hand, we introduce the state of the art on learning Bayesian networks. It is intended to be a tutorial on the learning methods based on goodness-of-fit tests. We present the most significant, in our opinion, learning algorithms found in the literature, as well as the theory they are based on. On the other hand, we propose a research framework. The fiel…
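An incremental method of the kind surveyed here might be organized as the loop below; the test and revision operators are assumptions standing in for whichever goodness-of-fit machinery a concrete algorithm uses:

    def incremental_learn(stream, initial_bn, fits, revise, batch_size=100):
        """Keep the current structure while a goodness-of-fit test accepts
        it, and revise the structure only when fresh data rejects it."""
        bn, buffer = initial_bn, []
        for sample in stream:
            buffer.append(sample)
            if len(buffer) == batch_size:
                if not fits(bn, buffer):     # e.g. a chi-square test of fit
                    bn = revise(bn, buffer)  # local structural repair
                buffer = []
        return bn

The batch/incremental distinction drawn in the abstract is visible here: a batch method would call revise once over all data, while this loop interleaves testing and repair as data arrives.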
Adapting Bayes Network Structures to Nonstationary Domains
"... When an incremental structural learning method gradually modifies a Bayesian network (BN) structure to fit observations, as they are read from a database, we call the process structural adaptation. Structural adaptation is useful when the learner is set to work in an unknown environment, where a BN ..."
Abstract

Cited by 2 (0 self)
When an incremental structural learning method gradually modifies a Bayesian network (BN) structure to fit observations as they are read from a database, we call the process structural adaptation. Structural adaptation is useful when the learner is set to work in an unknown environment, where a BN is to be gradually constructed as observations of the environment are made. Existing algorithms for incremental learning assume that the samples in the database have been drawn from a single underlying distribution. In this paper we relax this assumption, so that the underlying distribution can change during the sampling of the database. The method that we present can thus be used in unknown environments, where it is not even known whether the dynamics of the environment are stable. We briefly state formal correctness results for our method, and demonstrate its feasibility experimentally.
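One common device for letting sufficient statistics follow a drifting distribution, offered here only as an illustration and not as this paper's mechanism, is to exponentially down-weight old evidence:

    def update_fading_counts(counts, observation, decay=0.99):
        """Discount all existing counts, then add the new observation, so
        that evidence gathered before a distribution shift fades away."""
        for key in counts:
            counts[key] *= decay
        counts[observation] = counts.get(observation, 0.0) + 1.0
        return counts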
Bayesian Graphical Models
2000
"... ions and Land use have an impact on River flow. The essential property concerning the structural model is that it reflects conditional independence relations. Definition: Two variables A and B are independent if knowledge of A does not change the belief about B (and vice versa). A and B are condit ..."
Abstract

Cited by 1 (0 self)
Abstractions and Land use have an impact on River flow. The essential property concerning the structural model is that it reflects conditional independence relations. Definition: Two variables A and B are independent if knowledge of A does not change the belief about B (and vice versa). A and B are conditionally independent given C if they are independent whenever the state of C is known. [Figure 1: Example of a simple Bayesian network representing a river basin; Abstraction and River flow have 7 states, the other variables have 4 states.] In the language of probabilities, conditional independence is defined as P(A | B, C) = P(A | C). The definition of conditional independence is generalized in a straightforward way to sets of variables. The structural part of a Bayes…
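The generalization to sets of variables that the text alludes to reads, in standard notation (not quoted from the source):

    % Disjoint sets of variables X, Y, Z: X is conditionally independent
    % of Y given Z iff conditioning on Y adds nothing once Z is known.
    P(X \mid Y, Z) = P(X \mid Z)
    \quad \text{whenever } P(Y, Z) > 0 .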
Adapting Bayes Nets to Nonstationary Probability Distributions
2006
"... this paper consists of incrementally learning a BN, while receiving a piecewise DAG faithful sequence of samples, and making sure that after reception of each sample point the BN structure is as close as possible to the distribution that generated this point. Formally, let d be a complete piecewise ..."
Abstract

Cited by 1 (1 self)
…this paper consists of incrementally learning a BN while receiving a piecewise DAG-faithful sequence of samples, and making sure that after reception of each sample point the BN structure is as close as possible to the distribution that generated this point. Formally, let d be a complete piecewise DAG-faithful sample sequence of length l, and let P^t be the distribution generating sample point t. Furthermore, let B^1, …, B^l be the BNs output (or maintained) by a structural adaptation method M when receiving d and starting with BN B…
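Read literally, the stated goal admits a rough formalization; the distance measure and the exact notation below are assumptions, not the paper's definitions:

    % After each sample point t, the maintained structure B^t should be as
    % close as possible to the generating distribution P^t:
    B^t \approx \arg\min_{B} \mathrm{dist}\big(P_B,\, P^t\big),
    \qquad t = 1, \dots, l ,
    % where P_B denotes the distribution represented by the network B.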