Results 1–10 of 20
A Guide to the Literature on Learning Probabilistic Networks From Data
, 1996
Abstract

Cited by 172 (0 self)
This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks. Connections are drawn between the statistical, neural network, and uncertainty communities, and between the different methodological communities, such as Bayesian, description length, and classical statistics. Basic concepts for learning and Bayesian networks are introduced and methods are then reviewed. Methods are discussed for learning parameters of a probabilistic network, for learning the structure, and for learning hidden variables. The presentation avoids formal definitions and theorems, as these are plentiful in the literature, and instead illustrates key concepts with simplified examples.
Keywords: Bayesian networks, graphical models, hidden variables, learning, learning structure, probabilistic networks, knowledge discovery.
I. Introduction
Probabilistic networks or probabilistic gra...
Local Learning in Probabilistic Networks With Hidden Variables
, 1995
Abstract

Cited by 77 (4 self)
Probabilistic networks, which provide compact descriptions of complex stochastic relationships among several random variables, are rapidly becoming the tool of choice for uncertain reasoning in artificial intelligence. We show that networks with fixed structure containing hidden variables can be learned automatically from data using a gradient-descent mechanism similar to that used in neural networks. We also extend the method to networks with intensionally represented distributions, including networks with continuous variables and dynamic probabilistic networks. Because probabilistic networks provide explicit representations of causal structure, human experts can easily contribute prior knowledge to the training process, thereby significantly improving the learning rate. Adaptive probabilistic networks (APNs) may soon compete directly with neural networks as models in computational neuroscience as well as in industrial and financial applications.
1 Introduction
Intelligent systems, ...
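The gradient-descent mechanism this abstract describes can be illustrated with a deliberately tiny sketch. The model below is a hypothetical two-node network (hidden binary H, observed binary X), not the paper's APN implementation: it learns the conditional probabilities P(X=1 | H=h) by gradient ascent on the log-likelihood of the observed data, marginalizing over the hidden variable.

```python
# Hypothetical toy model (not the paper's APN system): hidden binary H with
# a fixed, known prior, observed binary X, and CPT entries
# theta[h] = P(X=1 | H=h) learned by gradient ascent on the log-likelihood.
prior_h = [0.5, 0.5]              # P(H=0), P(H=1), assumed known
theta = [0.5, 0.5]                # initial estimates of P(X=1 | H=h)
data = [1, 0, 1, 1, 0, 1, 1, 1]   # observed values of X (toy data)

def p_x(x, theta):
    """P(X=x), marginalized over the hidden variable H."""
    return sum(prior_h[h] * (theta[h] if x == 1 else 1 - theta[h])
               for h in (0, 1))

lr = 0.05
for _ in range(200):
    grad = [0.0, 0.0]
    for x in data:
        px = p_x(x, theta)
        for h in (0, 1):
            # d(log P(x))/d(theta[h]) = prior_h[h] * (+1 or -1) / P(x)
            grad[h] += prior_h[h] * (1 if x == 1 else -1) / px
    # averaged gradient step, clipped so each entry stays a probability
    theta = [min(0.999, max(0.001, t + lr * g / len(data)))
             for t, g in zip(theta, grad)]

print(theta)  # with a symmetric prior both entries approach P(X=1) = 0.75
```

Because H is never observed, the two entries are not separately identifiable here; the point is only the mechanics of differentiating through the marginal likelihood, which is what lets prior knowledge be combined with gradient-based training.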
Advanced visual surveillance using Bayesian networks
 in IEEE International Conference on Computer Vision
, 1995
Abstract

Cited by 50 (2 self)
Advanced visual surveillance systems not only need to track moving objects but also interpret their patterns of behaviour. This means that solving the information integration problem becomes very important. We use conceptual knowledge of both the scene and the visual task to provide constraints. We also control the system using dynamic attention and selective processing. Bayesian belief network (BBN) techniques support this as well as allowing us to model dynamic dependencies between parameters involved in visual interpretation. We illustrate these arguments using experimental results from a traffic surveillance application. In particular, we show that using expectations of object trajectory, size and speed for the particular scene can improve robustness and sensitivity in dynamic tracking and segmentation. We also show that behavioural evaluation under attentional control can be achieved using a combination of a static BBN task net and a dynamic network (DBN). The causal structure of these networks provides a framework for the design and integration of advanced vision systems.
Learning Probabilistic Networks
 The Knowledge Engineering Review
, 1998
Abstract

Cited by 36 (1 self)
A probabilistic network is a graphical model that encodes probabilistic relationships between variables of interest. Such a model records qualitative influences between variables in addition to the numerical parameters of the probability distribution. As such it provides an ideal form for combining prior knowledge, which might be limited solely to experience of the influences between some of the variables of interest, and data. In this paper, we first show how data can be used to revise initial estimates of the parameters of a model. We then progress to showing how the structure of the model can be revised as data is obtained. Techniques for learning with incomplete data are also covered.
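The parameter-revision step this abstract mentions reduces, in the simplest fully observed case, to conjugate Dirichlet counting. The sketch below uses assumed names (`observe`, `estimate`) and a uniform prior; it illustrates the general idea, not the paper's specific method.

```python
from collections import defaultdict

# Conjugate parameter revision for one CPT (illustrative sketch): each
# column P(X | U=u) carries Dirichlet pseudo-counts, and a complete
# observation simply increments one count.
alpha = defaultdict(lambda: defaultdict(lambda: 1.0))  # uniform Dirichlet prior

def observe(u, x):
    """Revise the counts with one complete case (U=u, X=x)."""
    alpha[u][x] += 1.0

def estimate(u, x, domain):
    """Posterior-mean estimate of P(X=x | U=u)."""
    total = sum(alpha[u][v] for v in domain)
    return alpha[u][x] / total

domain = ["yes", "no"]
for case in ["yes", "yes", "no", "yes"]:
    observe("rain", case)

print(estimate("rain", "yes", domain))  # (1 + 3) / (2 + 4) = 2/3
```

The pseudo-counts are exactly where the "prior knowledge plus data" combination the abstract describes lives: a stronger prior just means larger initial counts, which the data revise more slowly.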
Generative Models for Learning and Understanding Dynamic Scene Activity
 in ECCV Workshop on Generative Model Based Vision
, 2002
Abstract

Cited by 19 (2 self)
We are entering an era of more intelligent cognitive vision systems. Such systems can analyse activity in dynamic scenes to compute conceptual descriptions from motion trajectories of moving people and the objects they interact with. Here we review progress in the development of flexible, generative models that can explain visual input as a combination of hidden variables and can adapt to new types of input. Such models are particularly appropriate for the tasks posed by cognitive vision as they incorporate learning as well as having sufficient structure to represent a general class of problems. In addition, generative models explain all aspects of the input rather than attempting to ignore irrelevant sources of variation as in exemplar-based learning. Applications of these models in visual interaction for education, smart rooms and cars, as well as surveillance systems, are also briefly reviewed.
Using New Data to Refine a Bayesian Network
 In Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence
, 1994
Abstract

Cited by 19 (0 self)
We explore the issue of refining an existing Bayesian network structure using new data which might mention only a subset of the variables. Most previous work has considered only the refinement of the network's conditional probability parameters, and has not addressed the issue of refining the network's structure. We develop a new approach for refining the network's structure. Our approach is based on the Minimal Description Length (MDL) principle, and it employs an adapted version of a Bayesian network learning algorithm developed in our previous work. One of the adaptations required is to modify the previous algorithm to account for the structure of the existing network. The learning algorithm generates a partial network structure which can then be used to improve the existing network. We also present experimental evidence demonstrating the effectiveness of our approach.
1 Introduction
A number of errors and inaccuracies can occur during the construction of a Bayesian net. For ex...
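The MDL principle the authors build on trades model cost against data fit. A generic two-part-code score (a simplified encoding of my own, not the paper's) makes the trade-off concrete: adding an edge pays off only when the likelihood gain outweighs the extra parameter bits.

```python
import math

# Hedged sketch of an MDL-style network score (generic two-part code, not
# the authors' exact encoding): total description length = bits to encode
# the parameters + bits to encode the data given the network.
def mdl_score(data, parents_of):
    """Smaller is better. data: list of dicts mapping variable -> 0/1."""
    n = len(data)
    total = 0.0
    for var, parents in parents_of.items():
        # parameter cost: one probability per parent configuration,
        # each encoded with (1/2) log2 n bits (a common MDL choice)
        n_configs = 2 ** len(parents)
        total += 0.5 * math.log2(n) * n_configs
        # data cost: negative log-likelihood with ML parameter estimates
        counts = {}
        for case in data:
            key = tuple(case[p] for p in parents)
            c = counts.setdefault(key, [0, 0])
            c[case[var]] += 1
        for c0, c1 in counts.values():
            for c, tot in ((c0, c0 + c1), (c1, c0 + c1)):
                if c:
                    total += -c * math.log2(c / tot)
    return total

data = [{"A": a, "B": a} for a in (0, 1)] * 20  # B copies A exactly
independent = {"A": (), "B": ()}
dependent = {"A": (), "B": ("A",)}
print(mdl_score(data, dependent) < mdl_score(data, independent))  # True
```

Here the edge A→B shrinks the data cost to zero (B is determined by A), which more than repays its extra parameter bits; with independent data the comparison would flip.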
Bayesian Nets for Mapping Contextual Knowledge to Computational Constraints in Motion Segmentation and Tracking
 in British Machine Vision Conference
, 1993
Abstract

Cited by 10 (7 self)
In this work we address the issue of focused computation in computer vision for effectiveness and efficiency. In particular, we propose a scheme for motion segmentation and tracking that links scene-oriented contextual knowledge with the computational constraints involved. Such an approach enhances sensitivity to visual evidence and gives the selectivity we require. The approach uses Bayesian belief revision techniques to map explicit scene knowledge onto implicit causal dependent constraints in controlling computational parameters used in motion segmentation and tracking. We show experimental results from applying this method in improving existing techniques in traffic surveillance applications.
1 Introduction
In the past, research in computer vision was greatly influenced by the theory of David Marr [17]. Visual processing modules in the Marr framework operate at different levels of abstraction, such as edge detection, surface reconstruction and model matching. Typically, then, a hi...
A method for learning belief networks that contain hidden variables
 in Proceedings of the Workshop on Knowledge Discovery in Databases
, 1994
Abstract

Cited by 10 (4 self)
This paper presents a Bayesian method for computing the probability of a Bayesian belief-network structure from a database. In particular, the paper focuses on computing the probability of a belief-network structure that contains a hidden (latent) variable. A hidden variable represents a postulated entity about which we have no data. For example, we may wish to postulate the existence of a hidden
Self-aware services: Using Bayesian networks for detecting anomalies in internet-based services
 Northwestern University and Stanford University
, 2001
Abstract

Cited by 8 (3 self)
Keywords: service management, anomaly detection, Bayesian networks, online learning, fault and performance management.
We propose a general architecture and implementation for the autonomous assessment of health of arbitrary service elements, as a necessary prerequisite to self-control. We describe a health engine, the central component of our proposed ‘Self-Awareness and Control’ architecture. The health engine combines domain-independent statistical analysis and probabilistic reasoning technology (Bayesian networks) with domain-dependent measurement collection and evaluation methods. The resultant probabilistic assessment enables open, non-hierarchical communications about service element health. We demonstrate the validity of our approach using HP's corporate email service, detecting email anomalies: mail loops and a virus attack. We also present the results of applying online machine learning to this architecture and quantify the benefits of the Bayesian network layer.
Adaptive Online Learning of Bayesian Network Parameters
, 2001
Abstract

Cited by 8 (2 self)
The paper introduces Voting EM, an adaptive online learning algorithm for Bayesian network parameters. Voting EM is an extension of the EM( ) algorithm suggested by [1]. We show convergence properties of Voting EM when it uses a constant learning rate, and we use these properties to formulate an error-driven scheme for adapting the learning rate. The resultant algorithm converges at the optimal rate near a maximum, while retaining the ability to increase the learning rate in the vicinity of a local maximum or in response to changes in the modelled environment.
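Voting EM itself adds a voting scheme on top of online updating; the fragment below shows only the generic constant-learning-rate update it builds on, for the fully observed case (all variable names are mine, not the paper's).

```python
import random

# Generic constant-rate online updating of one CPT, P(X=1 | H=h), from
# fully observed cases (a simplified stand-in, not Voting EM itself):
# each case nudges the matching entry a fraction eta toward the observed
# value, so the estimate can also track a changing environment.
random.seed(0)
prior_h = [0.5, 0.5]      # distribution of the parent H
theta = [0.3, 0.9]        # true P(X=1 | H=h) generating the stream
est = [0.5, 0.5]          # online estimates
eta = 0.02                # constant learning rate

for _ in range(5000):
    h = 0 if random.random() < prior_h[0] else 1
    x = 1 if random.random() < theta[h] else 0
    est[h] += eta * (x - est[h])   # constant-rate stochastic update

print(est)  # estimates settle near [0.3, 0.9]
```

A constant eta leaves residual noise of order sqrt(eta) around each target, which is exactly the trade-off the abstract's error-driven rate adaptation addresses: a large eta tracks change quickly, a small eta estimates precisely.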