Results 1–10 of 16
Loopy Belief Propagation for Approximate Inference: An Empirical Study
In Proceedings of Uncertainty in AI, 1999
Abstract

Cited by 479 (18 self)
Recently, researchers have demonstrated that "loopy belief propagation" (the use of Pearl's polytree algorithm in a Bayesian network with loops) can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performance of "Turbo Codes", codes whose decoding algorithm is equivalent to loopy belief propagation in a chain-structured Bayesian network. In this paper we ask: is there something special about the error-correcting code context, or does loopy propagation work as an approximate inference scheme in a more general setting? We compare the marginals computed using loopy propagation to the exact ones in four Bayesian network architectures, including two real-world networks: ALARM and QMR. We find that the loopy beliefs often converge and when they do, they give a good approximation to the correct marginals. However, on the QMR network, the loopy beliefs oscillated and had no obvious relationship ...
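The comparison the abstract describes can be sketched in a few lines. Below is a hedged toy example (not the paper's experiments): sum-product loopy BP on the smallest graph with a loop, a 3-cycle of binary variables, checked against exact marginals from brute-force enumeration. The potentials are illustrative assumptions.

```python
import itertools
from math import prod

# Binary pairwise model on a 3-cycle: unary potentials phi, attractive
# pairwise potentials (2 if neighbors agree, 1 otherwise). All values assumed.
phi = {0: [1.5, 0.5], 1: [1.0, 1.0], 2: [1.0, 1.0]}
neighbors = {0: (1, 2), 1: (0, 2), 2: (0, 1)}
edges = [(0, 1), (1, 2), (2, 0)]

def pot(xi, xj):
    return 2.0 if xi == xj else 1.0

def exact_marginals():
    # Brute-force enumeration of all 2^3 joint states.
    score = {x: prod(phi[i][x[i]] for i in range(3))
                * prod(pot(x[i], x[j]) for i, j in edges)
             for x in itertools.product((0, 1), repeat=3)}
    z = sum(score.values())
    return [[sum(v for x, v in score.items() if x[i] == s) / z
             for s in (0, 1)] for i in range(3)]

def loopy_bp(iters=100):
    # m[(i, j)][xj]: normalized message from node i to node j.
    m = {(i, j): [0.5, 0.5] for i in neighbors for j in neighbors[i]}
    for _ in range(iters):
        new = {}
        for i, j in m:
            raw = [sum(phi[i][xi] * pot(xi, xj)
                       * prod(m[(k, i)][xi] for k in neighbors[i] if k != j)
                       for xi in (0, 1))
                   for xj in (0, 1)]
            s = sum(raw)
            new[(i, j)] = [r / s for r in raw]
        m = new
    beliefs = []
    for i in neighbors:
        raw = [phi[i][xi] * prod(m[(k, i)][xi] for k in neighbors[i])
               for xi in (0, 1)]
        s = sum(raw)
        beliefs.append([r / s for r in raw])
    return beliefs
```

On this weakly coupled cycle the loopy beliefs converge and land close to the exact marginals, mirroring the paper's finding for the cases where BP converges.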
Tutorial on Variational Approximation Methods
In Advanced Mean Field Methods: Theory and Practice, 2000
Abstract

Cited by 74 (1 self)
We provide an introduction to the theory and use of variational methods for inference and estimation in the context of graphical models. Variational methods become useful as efficient approximate methods when the structure of the graphical model no longer admits feasible exact probabilistic calculations. The emphasis of this tutorial is on illustrating how inference and estimation problems can be transformed into variational form, along with describing the resulting approximation algorithms and their properties insofar as these are currently known.

1 Introduction

The term variational methods refers to a large collection of optimization techniques. The classical context for these methods involves finding the extremum of an integral depending on an unknown function and its derivatives. This classical definition, however, and the accompanying calculus of variations no longer adequately characterize modern variational methods. Modern variational approaches have become indispensable tools in...
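A minimal sketch of the tutorial's "inference as optimization" theme, on an assumed toy model rather than anything from the text: coordinate-ascent mean-field fits a fully factorized q(x) = q0(x0) q1(x1) q2(x2) to a small binary pairwise model, and the resulting variational lower bound (ELBO) never exceeds the true log partition function.

```python
import itertools
from math import exp, log, prod

# Small binary pairwise model (values assumed): unary potentials phi and
# attractive pairwise potentials on a 3-cycle.
phi = {0: [1.5, 0.5], 1: [1.0, 1.0], 2: [1.0, 1.0]}
edges = [(0, 1), (1, 2), (2, 0)]
neighbors = {0: (1, 2), 1: (0, 2), 2: (0, 1)}

def pot(xi, xj):
    return 2.0 if xi == xj else 1.0

def log_z():
    # Exact log partition function by enumeration (feasible only because
    # the model is tiny -- the situation variational methods avoid).
    return log(sum(prod(phi[i][x[i]] for i in range(3))
                   * prod(pot(x[i], x[j]) for i, j in edges)
                   for x in itertools.product((0, 1), repeat=3)))

def mean_field(iters=100):
    q = [[0.6, 0.4], [0.5, 0.5], [0.5, 0.5]]  # slightly asymmetric start
    for _ in range(iters):
        for i in range(3):
            # Coordinate update: q_i(x_i) ∝ phi_i(x_i) exp(E_q[log pairwise]).
            raw = [phi[i][xi] * exp(sum(q[j][xj] * log(pot(xi, xj))
                                        for j in neighbors[i]
                                        for xj in (0, 1)))
                   for xi in (0, 1)]
            s = sum(raw)
            q[i] = [r / s for r in raw]
    return q

def elbo(q):
    # Expected log potentials plus the entropy of the factorized q.
    e = sum(q[i][xi] * log(phi[i][xi]) for i in range(3) for xi in (0, 1))
    e += sum(q[i][xi] * q[j][xj] * log(pot(xi, xj))
             for i, j in edges for xi in (0, 1) for xj in (0, 1))
    e -= sum(q[i][xi] * log(q[i][xi]) for i in range(3) for xi in (0, 1))
    return e
```

The gap log Z - ELBO equals KL(q || p), which is exactly the quantity the coordinate updates drive down.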
Finding the m most probable configurations using loopy belief propagation
In NIPS 16, 2004
Abstract

Cited by 30 (1 self)
Loopy belief propagation (BP) has been successfully used in a number of difficult graphical models to find the most probable configuration of the hidden variables. In applications ranging from protein folding to image analysis one would like to find not just the best configuration but rather the top M. While this problem has been solved using the junction tree formalism, in many real-world problems the clique size in the junction tree is prohibitively large. In this work we address the problem of finding the M best configurations when exact inference is impossible. We start by developing a new exact inference algorithm for calculating the best configurations that uses only max-marginals. For approximate inference, we replace the max-marginals with the beliefs calculated using max-product BP and generalized BP. We show empirically that the algorithm can accurately and rapidly approximate the M best configurations in graphs with hundreds of variables.
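The key primitive here can be illustrated on a toy model (an assumed example, not the paper's algorithm): the single best configuration can be read off node by node from exact max-marginals, computed below by enumeration. The paper's contribution is to extend this style of decoding to the top M configurations and to substitute max-product BP beliefs for the max-marginals when exact inference is out of reach.

```python
import itertools
from math import prod

# Small binary pairwise model (values assumed): unary potentials on node 0,
# attractive pairwise potentials on a 3-cycle.
phi = {0: [1.5, 0.5], 1: [1.0, 1.0], 2: [1.0, 1.0]}
edges = [(0, 1), (1, 2), (2, 0)]
states = list(itertools.product((0, 1), repeat=3))

def score(x):
    return (prod(phi[i][x[i]] for i in range(3))
            * prod(2.0 if x[i] == x[j] else 1.0 for i, j in edges))

def max_marginal(i, s):
    # Highest score of any configuration with variable i clamped to value s.
    return max(score(x) for x in states if x[i] == s)

# Brute-force MAP versus node-wise decoding from max-marginals.
map_config = max(states, key=score)
decoded = tuple(max((0, 1), key=lambda s: max_marginal(i, s))
                for i in range(3))
```

When the maximizing configuration is unique, `decoded` and `map_config` coincide; ties are where the paper's more careful decoding becomes necessary.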
Norm-Product Belief Propagation: Primal-Dual Message-Passing for Approximate Inference
, 2008
Abstract

Cited by 23 (7 self)
Inference problems in graphical models can be represented as a constrained optimization of a free energy function. In this paper we treat both forms of probabilistic inference, estimating marginal probabilities of the joint distribution and finding the most probable assignment, through a unified message-passing algorithm architecture. In particular we generalize the Belief Propagation (BP) sum-product and max-product algorithms and the tree-reweighted (TRW) sum- and max-product algorithms (TRBP), and introduce a new set of convergent algorithms based on a "convex free energy" and on Linear Programming (LP) relaxation as the zero-temperature limit of a convex free energy. The main idea of this work arises from taking a general perspective on the existing BP and TRBP algorithms while observing that they all are reductions from the basic optimization formula of f + ∑_i h_i
Efficient Test Selection in Active Diagnosis via Entropy Approximation
In Proceedings of UAI-05, 2005
Abstract

Cited by 18 (3 self)
We consider the problem of diagnosing faults in a system represented by a Bayesian network, where diagnosis corresponds to recovering the most likely state of unobserved nodes given the outcomes of tests (observed nodes). Finding an optimal subset of tests in this setting is intractable in general. We show that it is difficult even to compute the next most-informative test using greedy test selection, as it involves several entropy terms whose exact computation is intractable. We propose an approximate approach that utilizes the loopy belief propagation infrastructure to simultaneously compute approximations of marginal and conditional entropies on multiple subsets of nodes. We apply our method to fault diagnosis in computer networks, and show the algorithm to be very effective on realistic Internet-like topologies. We also provide theoretical justification for the greedy test selection approach, along with some performance guarantees.
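Greedy most-informative test selection can be shown on a model small enough for exact entropies (an assumed toy problem, not the paper's network): one binary fault node and two candidate tests that observe it through different noise levels. The greedy rule picks the test minimizing the expected posterior entropy of the fault; the paper's contribution is approximating these entropy terms with loopy BP when exact computation is intractable.

```python
from math import log2

# Prior over a single binary fault, and the probability each candidate
# test flips the true state (all numbers assumed for illustration).
p_fault = {0: 0.7, 1: 0.3}
flip = {"T1": 0.1, "T2": 0.4}

def entropy(ps):
    return -sum(p * log2(p) for p in ps if p > 0)

def expected_posterior_entropy(test):
    # E_t[ H(F | T = t) ]: average the posterior entropy over test outcomes.
    h = 0.0
    for t in (0, 1):
        joint = {f: p_fault[f] * ((1 - flip[test]) if t == f else flip[test])
                 for f in (0, 1)}
        p_t = sum(joint.values())
        h += p_t * entropy([joint[f] / p_t for f in (0, 1)])
    return h

# Greedy step: choose the test that most reduces uncertainty about the fault.
best = min(flip, key=expected_posterior_entropy)
```

The less noisy test wins, and both tests reduce entropy relative to the prior, which is the information-gain criterion the greedy selector maximizes.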
Understanding the Scalability of Bayesian Network Inference using Clique Tree Growth Curves
Abstract

Cited by 2 (2 self)
Bayesian networks (BNs) are used to represent and efficiently compute with multivariate probability distributions in a wide range of disciplines. One of the main approaches to perform computation in BNs is clique tree clustering and propagation. In this approach, BN computation consists of propagation in a clique tree compiled from a Bayesian network. There is a lack of understanding of how clique tree computation time, and BN computation time more generally, depends on variations in BN size and structure. On the one hand, complexity results tell us that many interesting BN queries are NP-hard or worse to answer, and it is not hard to find application BNs where the clique tree approach in practice cannot be used. On the other hand, it is well-known that tree-structured BNs can be used to answer probabilistic queries in polynomial time. In this article, we develop an approach to characterizing clique tree growth as a function of parameters that can be computed in polynomial time from BNs, specifically: (i) the ratio of the number of a BN's non-root nodes to the number of root nodes, or (ii) the expected number of moral edges in their moral graphs. Our approach is based on combining analytical and experimental results. Analytically, we partition the set of cliques in a clique tree into different sets, and introduce a growth curve for each set. For the special case of bipartite BNs, we consequently have two growth curves, a mixed clique growth curve and a root clique growth curve. In experiments, we systematically increase the degree of the root nodes in bipartite Bayesian networks, and find that root clique growth is well-approximated by Gompertz growth curves. It is believed that this research improves the understanding of the scaling behavior of clique tree clustering, provides a foundation for benchmarking and developing improved BN inference and machine learning algorithms, and presents an aid for analytical tradeoff studies of clique tree clustering using growth curves.
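The Gompertz form the abstract refers to is a standard sigmoidal growth curve; a quick sketch with assumed parameter values shows its shape: it rises from near zero and saturates at an asymptote, which is how the authors model root clique growth as root-node degree increases.

```python
from math import exp

def gompertz(t, a=100.0, b=5.0, c=0.5):
    # Gompertz growth curve g(t) = a * exp(-b * exp(-c * t)).
    # a: asymptote, b: displacement along t, c: growth rate
    # (parameter values here are illustrative assumptions).
    return a * exp(-b * exp(-c * t))

curve = [gompertz(t) for t in range(25)]
```

The curve is strictly increasing and approaches but never reaches the asymptote `a`, so a fitted `a` gives a polynomial-time-computable estimate of where clique growth levels off.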
Mean-field methods for a special class of Belief Networks
Journal of Artificial Intelligence, 2001
Abstract

Cited by 2 (0 self)
The chief aim of this paper is to propose mean-field approximations for a broad class of Belief networks, of which sigmoid and noisy-or networks can be seen as special cases. The approximations are based on a powerful mean-field theory suggested by Plefka. We show that Saul, Jaakkola, and Jordan's approach is the first-order approximation in Plefka's approach, via a variational derivation. The application of Plefka's theory to belief networks is not computationally tractable. To tackle this problem we propose new approximations based on Taylor series. Small-scale experiments show that the proposed schemes are attractive.
Fast variational inference for large-scale internet diagnosis
In Advances in Neural Information Processing Systems 20
Abstract

Cited by 2 (0 self)
Web servers on the Internet need to maintain high reliability, but the cause of intermittent failures of web transactions is non-obvious. We use approximate Bayesian inference to diagnose problems with web services. This diagnosis problem is far larger than any previously attempted: it requires inference over 10^4 possible faults from 10^5 observations. Further, such inference must be performed in less than a second. Inference can be done at this speed by combining a mean-field variational approximation with stochastic gradient descent to optimize a variational cost function. We use this fast inference to diagnose a time series of anomalous HTTP requests taken from a real web service. The inference is fast enough to analyze network logs with billions of entries in a matter of hours.
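The recipe of optimizing a variational cost by gradient descent can be reduced to a one-variable sketch (illustrative only, with the exact gradient standing in for the stochastic gradients used at the paper's scale): fit a Bernoulli mean-field factor q = sigmoid(theta) to a Bernoulli target by descending KL(q || p).

```python
from math import exp, log

p = 0.2  # target Bernoulli probability (assumed)

def sigmoid(t):
    return 1.0 / (1.0 + exp(-t))

def kl(q):
    # Variational cost: KL between the factor q and the target p.
    return q * log(q / p) + (1 - q) * log((1 - q) / (1 - p))

theta, lr = 0.0, 0.5
for _ in range(2000):
    q = sigmoid(theta)
    # Chain rule: dKL/dtheta = q (1 - q) * log( q (1 - p) / (p (1 - q)) ).
    grad = q * (1 - q) * log(q * (1 - p) / (p * (1 - q)))
    theta -= lr * grad
q_fit = sigmoid(theta)
```

Parameterizing through the sigmoid keeps the factor in (0, 1) without constraints, which is what makes plain (or stochastic) gradient steps applicable to each of the 10^4 fault factors independently.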
Information-theoretic approaches to cost-efficient diagnosis
In Proc. Information Theory and Applications Inaugural Workshop, 2006
Abstract

Cited by 1 (0 self)
Abstract — This paper provides a summary of our recent work on cost-efficient probabilistic diagnosis in Bayesian networks with applications to fault diagnosis in distributed computer systems. We focus on achieving good tradeoffs between diagnostic accuracy and the cost of testing and computational complexity of diagnosis. We present (1) theoretical results characterizing these tradeoffs, such as a lower bound on the number of probes necessary to achieve asymptotically error-free diagnosis, (2) an adaptive online approach to selecting the most informative tests, and (3) approximation techniques using "loopy" belief propagation for handling intractable inference problems involved in both diagnosis and most-informative test selection in large-scale problems. Empirical results on realistic systems demonstrating the effectiveness of our approaches can be found in [9], [16], [13], [15].
Entropy Approximation for Active Fault Diagnosis
, 2004
Cited by 1 (0 self)