Results 1–10 of 12
Loopy Belief Propagation for Approximate Inference: An Empirical Study
In Proceedings of Uncertainty in AI, 1999
Cited by 495 (17 self)
Recently, researchers have demonstrated that "loopy belief propagation" (the use of Pearl's polytree algorithm in a Bayesian network with loops) can perform well in the context of error-correcting codes. The most dramatic instance of this is the near-Shannon-limit performance of "Turbo Codes", codes whose decoding algorithm is equivalent to loopy belief propagation in a chain-structured Bayesian network. In this paper we ask: is there something special about the error-correcting code context, or does loopy propagation work as an approximate inference scheme in a more general setting? We compare the marginals computed using loopy propagation to the exact ones in four Bayesian network architectures, including two real-world networks: ALARM and QMR. We find that the loopy beliefs often converge and, when they do, they give a good approximation to the correct marginals. However, on the QMR network, the loopy beliefs oscillated and had no obvious relationship ...
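The procedure this abstract studies (running Pearl's sum-product message passing unchanged on a graph with loops) can be sketched on a toy model. The sketch below is not from the paper: it runs loopy belief propagation on a three-variable binary cycle with illustrative fields and a weak coupling, then compares the resulting beliefs to exact marginals obtained by brute-force enumeration.

```python
import itertools
import math

# Toy pairwise model: three binary spins s in {-1,+1} on a cycle, with
# illustrative fields h and a weak coupling J (assumed values, not from
# the paper).
h = [0.3, -0.2, 0.5]
J = 0.2
edges = [(0, 1), (1, 2), (2, 0)]
spin = [-1.0, 1.0]                      # state index 0 -> -1, 1 -> +1
phi = [[math.exp(h[i] * s) for s in spin] for i in range(3)]
psi = [[math.exp(J * si * sj) for sj in spin] for si in spin]

nbrs = {i: [] for i in range(3)}
for a, b in edges:
    nbrs[a].append(b)
    nbrs[b].append(a)

# Messages m[(i, j)][x_j], initialised uniform, iterated to a fixed point.
m = {(i, j): [1.0, 1.0] for i in range(3) for j in nbrs[i]}
converged = False
for _ in range(200):
    new = {}
    for (i, j) in m:
        vec = []
        for xj in range(2):
            total = 0.0
            for xi in range(2):
                prod = phi[i][xi] * psi[xi][xj]
                for k in nbrs[i]:
                    if k != j:
                        prod *= m[(k, i)][xi]
                total += prod
            vec.append(total)
        z = sum(vec)
        new[(i, j)] = [v / z for v in vec]
    delta = max(abs(new[k][x] - m[k][x]) for k in m for x in range(2))
    m = new
    if delta < 1e-10:
        converged = True
        break

# Beliefs: local potential times the product of incoming messages.
beliefs = []
for i in range(3):
    bel = [phi[i][x] for x in range(2)]
    for k in nbrs[i]:
        bel = [bel[x] * m[(k, i)][x] for x in range(2)]
    z = sum(bel)
    beliefs.append([v / z for v in bel])

# Exact marginals by enumerating all 8 joint states.
weights = {}
for state in itertools.product(range(2), repeat=3):
    w = 1.0
    for i in range(3):
        w *= phi[i][state[i]]
    for a, b in edges:
        w *= psi[state[a]][state[b]]
    weights[state] = w
Z = sum(weights.values())
exact = [[sum(w for s, w in weights.items() if s[i] == x) / Z
          for x in range(2)] for i in range(3)]
err = max(abs(beliefs[i][x] - exact[i][x])
          for i in range(3) for x in range(2))
```

With these weak potentials the messages converge and the beliefs track the exact marginals closely; the paper's finding is precisely that this convergence is not guaranteed on harder networks such as QMR.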
Control of Selective Perception Using Bayes Nets and Decision Theory, 1993
Cited by 107 (1 self)
A selective vision system sequentially collects evidence to support a specified hypothesis about a scene, as long as the additional evidence is worth the effort of obtaining it. Efficiency comes from processing the scene only where necessary, to the level of detail necessary, and with only the necessary operators. Knowledge representation and sequential decision-making are central issues for selective vision, which takes advantage of prior knowledge of a domain's abstract and geometrical structure, and of models for the expected performance and cost of visual operators. The TEA-1 selective vision system uses Bayes nets for representation and benefit-cost analysis for control of visual and non-visual actions. It is the high-level control for an active vision system, enabling purposive behavior, the use of qualitative vision modules, and a pointable multi-resolution sensor. TEA-1 demonstrates that Bayes nets and decision-theoretic techniques provide a general, reusable framework for constructi...
Decision Analytic Networks in Artificial Intelligence, 1995
Cited by 8 (0 self)
Researchers in artificial intelligence and decision analysis share a concern with the construction of formal models of human knowledge and expertise. Historically, however, their approaches to these problems have diverged. Members of these two communities have recently discovered common ground: a family of graphical models of decision theory known as influence diagrams or as belief networks. These models are equally attractive to theoreticians, decision modelers, and designers of knowledge-based systems. From a theoretical perspective, they combine graph theory, probability theory, and decision theory. From an implementation perspective, they lead to powerful automated systems. Although many practicing decision analysts have already adopted influence diagrams as modeling and structuring tools, they may remain unaware of the theoretical work that has emerged from the artificial intelligence community. This paper surveys the first decade or so of this work.
A multilayer strategy for 3D building acquisition
In Proceedings IAPR TC-7 Workshop, 1996
Cited by 8 (3 self)
In various projects we investigate the extraction of buildings from different types and representations of data. This paper presents a strategy for 3D building acquisition which combines different approaches based on different levels of description. The approach consists of detection of regions of interest and automatic and semi-automatic reconstruction of object parts and complete buildings. We incorporate the approach in a global concept of interaction between scene and sensors for image interpretation.
Mean-field methods for a special class of Belief Networks
Journal of Artificial Intelligence, 2001
Cited by 3 (0 self)
The chief aim of this paper is to propose mean-field approximations for a broad class of Belief networks, of which sigmoid and noisy-or networks can be seen as special cases. The approximations are based on a powerful mean-field theory suggested by Plefka. We show, via a variational derivation, that Saul, Jaakkola, and Jordan's approach is the first-order approximation in Plefka's approach. The application of Plefka's theory to belief networks is not computationally tractable. To tackle this problem we propose new approximations based on Taylor series. Small-scale experiments show that the proposed schemes are attractive.
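As a concrete illustration of the mean-field idea (the naive first-order scheme, not Plefka's higher-order expansion that this paper develops), the sketch below iterates the fixed-point equations m_i = tanh(h_i + sum_j J_ij m_j) on a small binary network with illustrative parameters, and compares the approximate marginals to exact enumeration.

```python
import itertools
import math

# Illustrative parameters (assumptions, not from the paper): three
# binary spins s in {-1,+1}, fields h, symmetric weak couplings J.
h = [0.3, -0.2, 0.1]
J = {(0, 1): 0.2, (1, 2): 0.3, (0, 2): -0.1}

def coupling(i, j):
    return J.get((i, j), J.get((j, i), 0.0))

# Naive mean-field: iterate m_i = tanh(h_i + sum_j J_ij m_j) to a
# fixed point (guaranteed to behave well here because the couplings
# are weak).
m = [0.0, 0.0, 0.0]
for _ in range(500):
    new = [math.tanh(h[i] + sum(coupling(i, j) * m[j]
                                for j in range(3) if j != i))
           for i in range(3)]
    done = max(abs(new[i] - m[i]) for i in range(3)) < 1e-12
    m = new
    if done:
        break
mf_marginals = [(1 + mi) / 2 for mi in m]     # approximate P(s_i = +1)

# Exact P(s_i = +1) by enumerating all 8 spin configurations.
Z = 0.0
num = [0.0, 0.0, 0.0]
for spins in itertools.product((-1, 1), repeat=3):
    e = sum(h[i] * spins[i] for i in range(3))
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    w = math.exp(e)
    Z += w
    for i in range(3):
        if spins[i] == 1:
            num[i] += w
exact = [n / Z for n in num]
err = max(abs(mf_marginals[i] - exact[i]) for i in range(3))
```

On this weakly coupled model the naive fixed point lands close to the exact marginals; the paper's contribution is a systematic way to go beyond this first-order approximation.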
Behavioural Descriptions From Image Sequences
In Proceedings of Workshop on Integration of Natural Language and Vision Processing, 1994
Cited by 2 (0 self)
This paper reviews research that addresses the problems of extracting descriptions of object behaviour from image sequences. Vision systems are now capable of delivering trajectory-based descriptions of moving objects in a scene, but little work has been done on the higher-level spatio-temporal reasoning needed for the computation of behavioural descriptions. This level of understanding, which allows meaningful descriptions of what is happening in a scene, seems to be a prerequisite for communication between users and machine-based vision systems. The approaches discussed here can be separated into three main classes: those that treat the problem as an off-line query-based process, those that attempt an on-line model-based interpretation, and those that adopt a more active vision strategy. Some evidence from the psycholinguistic literature, event perception, and recent developments in reactive planning is brought together to support the proposal that active, purposive frameworks are req...
Task-Specific Utility in a General Bayes Net Vision System
Cited by 1 (1 self)
TEA is a task-oriented computer vision system that uses Bayes nets and a maximum expected-utility decision rule to choose a sequence of task-dependent and opportunistic visual operations on the basis of their cost and (present and future) benefit. We discuss technical problems regarding utilities, present TEA-1's utility function (which approximates a two-step lookahead), and compare it to various simpler utility functions in experiments with real and simulated scenes.
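TEA-1's actual utility function approximates a two-step lookahead; the sketch below shows only the underlying one-step expected-utility rule the abstract names (choose the operator maximizing expected benefit minus cost, and stop acting when no operator is worth its cost). The operator names and numbers are hypothetical, not from the paper.

```python
# One-step maximum expected-utility operator selection: a sketch of the
# decision rule only, not TEA-1's implementation. Each visual operator
# has an execution cost and a distribution over the value of the
# evidence it may return; all names and numbers are hypothetical.
operators = {
    "coarse_color_scan":   {"cost": 1.0, "outcomes": [(0.7, 4.0), (0.3, 0.0)]},
    "foveated_edge_probe": {"cost": 3.0, "outcomes": [(0.5, 9.0), (0.5, 1.0)]},
    "full_depth_map":      {"cost": 8.0, "outcomes": [(0.9, 8.5), (0.1, 2.0)]},
}

def expected_net_benefit(op):
    """Expected evidence value minus execution cost."""
    spec = operators[op]
    return sum(p * v for p, v in spec["outcomes"]) - spec["cost"]

def choose_operator():
    """Best operator under the one-step rule, or None if no operator
    is worth its cost (the stopping condition for selective sensing)."""
    best = max(operators, key=expected_net_benefit)
    return best if expected_net_benefit(best) > 0 else None
```

Here the expensive depth map has negative net benefit, so the rule selects the cheaper probe; a two-step lookahead like TEA-1's can rank operators differently by accounting for what each result enables next.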
Bayesian network induction . . .
In Advances in Neural Information Processing Systems 12, 1999
In recent years, Bayesian networks have become a highly successful tool for diagnosis, analysis, and decision making in real-world domains. We present an efficient algorithm for learning Bayesian networks from data. Our approach constructs Bayesian networks by first identifying each node's Markov blanket, then connecting nodes in a consistent way. In contrast to the majority of work, which typically uses hill-climbing approaches that may produce dense nets and incorrect structure, our approach typically yields consistent structure and compact networks by heeding independencies in the data. Compact networks facilitate fast inference and are also easier to understand. We prove that, under mild assumptions, our approach requires time polynomial in the size of the data and the number of nodes. A Monte Carlo variant, also presented here, is more robust and yields comparable results at much higher speeds.
Bayesian network induction via . . ., 2000
In recent years, Bayesian networks have become a highly successful tool for diagnosis, analysis, and decision making in real-world domains. We present an efficient algorithm for learning Bayes networks from data. Our approach constructs Bayesian networks by first identifying each node's Markov blanket, then connecting nodes in a maximally consistent way. In contrast to the majority of work, which typically uses hill-climbing approaches that may produce dense and causally incorrect nets, our approach yields much more compact causal networks by heeding independencies in the data. Compact causal networks facilitate fast inference and are also easier to understand. We prove that, under mild assumptions, our approach requires time polynomial in the size of the data and the number of nodes. A randomized variant, also presented here, yields comparable results at much higher speeds.
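One standard way to identify a node's Markov blanket from independence queries is a grow-shrink loop: add any variable still dependent on the target given the current blanket, then prune members that become independent given the rest. The sketch below runs that loop with an exact independence oracle computed from a small hand-specified network (A -> C <- B, C -> D); the structure and parameters are illustrative, and the paper's own algorithm and statistical tests may differ in detail.

```python
import itertools

VARS = ["A", "B", "C", "D"]
# Hand-specified joint P(A)P(B)P(C|A,B)P(D|C) over binary variables
# (illustrative numbers, chosen to avoid accidental independencies).
pA, pB = 0.3, 0.6
pC = {(0, 0): 0.1, (0, 1): 0.7, (1, 0): 0.8, (1, 1): 0.95}
pD = {0: 0.2, 1: 0.9}
joint = {}
for a, b, c, d in itertools.product((0, 1), repeat=4):
    p = (pA if a else 1 - pA) * (pB if b else 1 - pB)
    p *= pC[(a, b)] if c else 1 - pC[(a, b)]
    p *= pD[c] if d else 1 - pD[c]
    joint[(a, b, c, d)] = p

def prob(assign):
    """Marginal probability of a partial assignment {var: value}."""
    idx = {v: i for i, v in enumerate(VARS)}
    return sum(p for state, p in joint.items()
               if all(state[idx[v]] == val for v, val in assign.items()))

def indep(x, y, cond):
    """Exact test of X independent of Y given the variable set cond."""
    cond = sorted(cond)
    for values in itertools.product((0, 1), repeat=len(cond)):
        s = dict(zip(cond, values))
        ps = prob(s)
        if ps < 1e-12:
            continue
        for xv, yv in itertools.product((0, 1), repeat=2):
            pxy = prob({x: xv, y: yv, **s}) / ps
            px = prob({x: xv, **s}) / ps
            py = prob({y: yv, **s}) / ps
            if abs(pxy - px * py) > 1e-9:
                return False
    return True

def markov_blanket(target):
    mb = set()
    changed = True
    while changed:                       # grow phase
        changed = False
        for v in VARS:
            if v != target and v not in mb and not indep(target, v, mb):
                mb.add(v)
                changed = True
    for v in sorted(mb):                 # shrink phase
        if indep(target, v, mb - {v}):
            mb.discard(v)
    return mb
```

On this network the loop recovers the expected blankets: for C its parents and child {A, B, D}, and for A its child plus the child's other parent {B, C}, even though A and B are marginally independent.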
Evidence Aggregation in Hierarchical Evidential Reasoning
Several reasoning tasks need to scale over the volume of evidence and the entities of interest. A technique used by statisticians is to represent a collection of evidence by the sufficient statistics of the data. This reduces the computational burden on the BN model significantly by representing a large set of records (and therefore nodes) with a single node. However, this procedure is not general in terms of data types and model parameters. Similar to the notion of sufficient statistics, many real-world applications of Bayesian networks follow a common pattern of use in which a set of evidence is approximated by an aggregate score of the collective. In some applications the approximation is justified by appealing to the sufficient-statistics argument; in others it is presented only as a heuristic for computational tractability. In this paper we review this pattern of use, where a collection is approximated by an aggregate score, and the sufficient-statistics argument in the context of Bayesian reasoning. Further, we present uses of the same technique where the sufficient-statistics notion is not applicable. A theoretical justification of the approximation, based on a generalization of sufficient statistics, appears to be needed. We present two applications where we have used this pattern with remarkable effectiveness: (1) cyber security, and (2) resource provisioning in distributed systems.
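For the i.i.d. binary case, the sufficient-statistics point the abstract starts from can be made concrete: the count of positive reports carries all the information the individual records do, so a single aggregate-count node reproduces the per-record posterior exactly. The sketch below (hypothetical numbers: binary hypothesis H, conditionally independent Bernoulli reports) checks that identity; it is the base case the paper generalizes from, not the paper's own model.

```python
# Binary hypothesis H with conditionally i.i.d. Bernoulli reports.
# Illustrative parameters (assumptions, not from the paper).
PRIOR_H1 = 0.2      # P(H = 1)
P_POS_H1 = 0.8      # P(report = 1 | H = 1)
P_POS_H0 = 0.3      # P(report = 1 | H = 0)

def posterior_per_record(records):
    """P(H=1 | records): one Bernoulli likelihood factor per record,
    i.e. one evidence node per record in the BN."""
    l1, l0 = PRIOR_H1, 1 - PRIOR_H1
    for r in records:
        l1 *= P_POS_H1 if r else 1 - P_POS_H1
        l0 *= P_POS_H0 if r else 1 - P_POS_H0
    return l1 / (l1 + l0)

def posterior_from_count(k, n):
    """P(H=1 | k positives out of n): a single aggregate-count node.
    The binomial coefficient cancels in the normalisation, so only the
    sufficient statistic (k, n) is needed."""
    l1 = PRIOR_H1 * P_POS_H1 ** k * (1 - P_POS_H1) ** (n - k)
    l0 = (1 - PRIOR_H1) * P_POS_H0 ** k * (1 - P_POS_H0) ** (n - k)
    return l1 / (l1 + l0)

records = [1, 1, 0, 1, 0, 1, 1]   # hypothetical evidence records
```

The two posteriors coincide exactly here because the count is a sufficient statistic for i.i.d. Bernoulli evidence; the aggregate-score pattern the paper examines applies the same collapse in settings where no such exact guarantee is available.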