Results 1–10 of 22
Sequential Update of Bayesian Network Structure
 In Proc. 13th Conference on Uncertainty in Artificial Intelligence (UAI’97)
, 1997
Abstract

Cited by 50 (4 self)
There is an obvious need for improving the performance and accuracy of a Bayesian network as new data is observed. Because of errors in model construction and changes in the dynamics of the domains, we cannot afford to ignore the information in new data. While sequential update of parameters for a fixed structure can be accomplished using standard techniques, sequential update of network structure is still an open problem. In this paper, we investigate sequential update of Bayesian networks where both parameters and structure are expected to change. We introduce a new approach that allows for the flexible manipulation of the tradeoff between the quality of the learned networks and the amount of information that is maintained about past observations. We formally describe our approach, including the necessary modifications to the scoring functions for learning Bayesian networks, evaluate its effectiveness through an empirical study, and extend it to the case of missing data.
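The "standard techniques" for fixed-structure parameter update that this abstract contrasts with structure update can be sketched as Dirichlet count maintenance: each new case only increments sufficient-statistic counts, so no past data needs to be stored. A minimal sketch (class and variable names are illustrative, not from the paper):

```python
from collections import defaultdict

class SequentialCPT:
    """Posterior-mean estimate of P(X | parents) under a symmetric
    Dirichlet prior. Sequential update for a *fixed* structure: each
    observed case just increments a count for its (parents, x) cell."""

    def __init__(self, values, alpha=1.0):
        self.values = list(values)        # domain of X
        self.alpha = alpha                # Dirichlet pseudo-count per cell
        self.counts = defaultdict(float)  # (parent_config, x) -> N

    def observe(self, parent_config, x):
        self.counts[(parent_config, x)] += 1.0

    def prob(self, parent_config, x):
        # Posterior mean: (alpha + N_x) / sum_v (alpha + N_v)
        num = self.alpha + self.counts[(parent_config, x)]
        den = sum(self.alpha + self.counts[(parent_config, v)]
                  for v in self.values)
        return num / den

cpt = SequentialCPT(values=["T", "F"])
for case in ["T", "T", "F", "T"]:
    cpt.observe(("rain",), case)
print(round(cpt.prob(("rain",), "T"), 3))  # (1+3)/(2+4) = 0.667
```

The point of contrast with the paper is that this count update never revisits old data, whereas structure update must somehow retain (or summarize) information about past observations.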
Learning Causal Networks from Data: A survey and a new algorithm for recovering possibilistic causal networks
, 1997
Abstract

Cited by 20 (5 self)
Reasoning in terms of cause and effect is a strategy that arises in many tasks. For example, diagnosis is usually defined as the task of finding the causes (illnesses) from the observed effects (symptoms). Similarly, prediction can be understood as the description of a future plausible situation where observed effects will be in accordance with the known causal structure of the phenomenon being studied. Causal models are a summary of the knowledge about a phenomenon expressed in terms of causation. Many areas of the applied sciences (econometrics, biomedicine, engineering, etc.) have used such a term to refer to models that yield explanations, allow for prediction and facilitate planning and decision making. Causal reasoning can be viewed as inference guided by a causation theory. That kind of inference can be further specialised into induc... [This work has been partially supported by the Spanish Comisión Interministerial de Ciencia y Tecnología, Project CICYT TIC96-0878.]
Theory refinement of Bayesian networks with hidden variables
 In Machine Learning: Proceedings of the International Conference
, 1998
Integrating Bayesian Networks into Knowledge-Intensive CBR
 Proceedings of AAAI Workshop on CBR Integration
, 1998
Abstract

Cited by 12 (5 self)
In this paper we propose an approach to knowledge-intensive CBR, where explanations are generated from a domain model consisting partly of a semantic network and partly of a Bayesian network (BN). The BN enables learning within this domain model based on the observed data. The domain model is used to focus the retrieval and reuse of past cases, as well as the indexing when learning a new case. Essentially, the BN-powered submodel works in parallel with the semantic network model to generate a statistically sound contribution to case indexing, retrieval and explanation.
Incremental activity modelling in multiple disjoint cameras
 IEEE Transactions on Pattern Analysis and Machine Intelligence
Abstract

Cited by 12 (7 self)
Activity modeling and unusual event detection in a network of cameras is challenging, particularly when the camera views are not overlapped. We show that it is possible to detect unusual events in multiple disjoint cameras as context-incoherent patterns through incremental learning of time-delayed dependencies between distributed local activities observed within and across camera views. Specifically, we model multi-camera activities using a Time Delayed Probabilistic Graphical Model (TDPGM), with different nodes representing activities in different decomposed regions from different views and the directed links between nodes encoding their time-delayed dependencies. To deal with visual context changes, we formulate a novel incremental learning method for modeling time-delayed dependencies that change over time. We validate the effectiveness of the proposed approach using a synthetic data set and videos captured from a camera network installed at a busy underground station. Index Terms—unusual event detection, multi-camera activity modeling, time delay estimation, incremental structure learning.
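The time-delayed dependencies referred to above require estimating a lag between two activity series. One simple stand-in for such an estimator (illustrative only, not the paper's actual method) is to pick the candidate delay that maximises the cross-correlation between the two series:

```python
def best_delay(a, b, max_delay):
    """Estimate the delay from activity series `a` to series `b` by
    maximising cross-correlation over candidate lags 0..max_delay.
    Purely a sketch; names and the scoring rule are illustrative."""
    def corr_at(lag):
        # Sum of products of a[t] with b[t + lag] over the overlap.
        return sum(a[t] * b[t + lag] for t in range(len(a) - lag))
    return max(range(max_delay + 1), key=corr_at)

a = [0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
b = [0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0]  # the spikes of a, 2 steps later
print(best_delay(a, b, max_delay=4))  # -> 2
```

In a real camera network the series would be binned activity counts per region, and the estimated lags would become the candidate directed links of the graphical model.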
An approach to online Bayesian learning from multiple data streams
 In Proceedings of Workshop on Mobile and Distributed Data Mining, PKDD ’01
, 2001
Abstract

Cited by 11 (4 self)
We present a collective approach to mine Bayesian networks from distributed heterogeneous weblog data streams. In this approach we first learn a local Bayesian network at each site using the local data. Then each site identifies the observations that are most likely to be evidence of coupling between local and non-local variables and transmits a subset of these observations to a central site. Another Bayesian network is learnt at the central site using the data transmitted from the local sites. The local and central Bayesian networks are combined to obtain a collective Bayesian network that models the entire data. This technique is then suitably adapted into an online Bayesian learning technique, where the network parameters are updated sequentially based on new data from multiple streams. We applied this technique to mine multiple data streams where data centralization is difficult because of large response times and scalability issues. This approach is particularly suitable for mining applications with distributed sources of data streams in an environment with non-zero communication cost (e.g. wireless networks). Experimental results and theoretical justification that demonstrate the feasibility of our approach are presented.
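The transmission step described above can be sketched under one plausible reading: "likely evidence of coupling" means observations poorly explained by the local model, so each site forwards only its lowest-likelihood subset. The scoring function and threshold below are illustrative assumptions, not the paper's exact criterion:

```python
import math

def select_evidence(observations, local_loglik, fraction=0.1):
    """Return the observations least well explained by the local model,
    i.e. the lowest-log-likelihood `fraction` of the data. These are the
    candidates a site would transmit to the central site (sketch only)."""
    scored = sorted(observations, key=local_loglik)  # worst-explained first
    k = max(1, int(len(observations) * fraction))
    return scored[:k]

# Toy local model: X ~ N(0, 1); values far from 0 look like coupling evidence.
loglik = lambda x: -0.5 * (x * x + math.log(2 * math.pi))
data = [0.1, -0.3, 4.2, 0.0, -3.8, 0.2, 0.5, -0.1, 3.9, 0.4]
print(select_evidence(data, loglik, fraction=0.3))  # -> [4.2, 3.9, -3.8]
```

The design point is the communication trade-off: only the `fraction` parameter's worth of data crosses the network, which is what makes the scheme attractive when communication cost is non-zero.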
Adapting Bayes Network Structures to Non-stationary Domains
Abstract

Cited by 9 (0 self)
When an incremental structural learning method gradually modifies a Bayesian network (BN) structure to fit observations, as they are read from a database, we call the process structural adaptation. Structural adaptation is useful when the learner is set to work in an unknown environment, where a BN is to be gradually constructed as observations of the environment are made. Existing algorithms for incremental learning assume that the samples in the database have been drawn from a single underlying distribution. In this paper we relax this assumption, so that the underlying distribution can change during the sampling of the database. The method that we present can thus be used in unknown environments, where it is not even known whether the dynamics of the environment are stable. We briefly state formal correctness results for our method, and demonstrate its feasibility experimentally.
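One common way to let a learner's sufficient statistics track a drifting distribution, sketched here purely as illustration (the paper's own mechanism may differ), is exponential forgetting of counts: decay all counts before crediting each new case, so old observations gradually lose influence.

```python
class FadingCounts:
    """Sufficient statistics with exponential forgetting (illustrative
    sketch, not necessarily the adaptation mechanism of the paper)."""

    def __init__(self, gamma=0.95):
        self.gamma = gamma  # forgetting factor in (0, 1]
        self.counts = {}

    def observe(self, key):
        # Decay every existing count, then credit the new observation.
        for k in self.counts:
            self.counts[k] *= self.gamma
        self.counts[key] = self.counts.get(key, 0.0) + 1.0

    def freq(self, key):
        total = sum(self.counts.values())
        return self.counts.get(key, 0.0) / total if total else 0.0

fc = FadingCounts(gamma=0.95)
for _ in range(200):
    fc.observe("A")   # old regime
for _ in range(200):
    fc.observe("B")   # the underlying distribution shifts
print(round(fc.freq("B"), 3))  # close to 1.0: the old regime has faded
```

With gamma = 1 this reduces to ordinary counting (the single-distribution assumption); smaller gamma trades statistical efficiency for faster adaptation to change.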
Incremental Methods for Bayesian Network Learning
 Department de
, 1999
Abstract

Cited by 6 (1 self)
In this work we analyze the most relevant, in our opinion, algorithms for learning Bayesian networks. We analyze methods that use goodness-of-fit tests between tentative networks and data. Within this sort of learning algorithms we distinguish batch and incremental methods. Finally, we propose a system, called BANDOLER, that incrementally learns Bayesian networks from data and prior knowledge. The incremental fashion of the system allows one to modify the learning strategy and to introduce new prior knowledge during the learning process in the light of the already learnt structure. The aim of this work is twofold. On the one hand, we introduce the state of the art on learning Bayesian networks. It is intended to be a tutorial on the learning methods based on goodness-of-fit tests. We present the most significant, in our opinion, learning algorithms found in the literature, as well as the theory they are based on. On the other hand, we propose a research framework. The fiel...
Possibilistic Conditional Independence: A similarity-based measure and its application to causal network learning
, 1998
Incremental Learning of Bayesian Networks with Hidden Variables
Abstract

Cited by 2 (0 self)
In this paper, an incremental method for learning Bayesian networks based on evolutionary computing, IEMA, is put forward. IEMA introduces the evolutionary algorithm and the EM algorithm into the process of incremental learning; it can not only avoid getting stuck in local maxima, but also incrementally learn Bayesian networks with high accuracy in the presence of missing values and hidden variables. In addition, we improve the incremental learning process of Friedman et al. The experimental results verify the validity of IEMA. In terms of storage cost, IEMA is comparable with the incremental learning method of Friedman et al., while being more accurate.