Results 1 - 4 of 4
The Unified Propagation and Scaling Algorithm
In Advances in Neural Information Processing Systems, 2002
Abstract

Cited by 42 (8 self)
In this paper we will show that a restricted class of constrained minimum divergence problems, named generalized inference problems, can be solved by approximating the KL divergence with a Bethe free energy. The algorithm we derive is closely related to both loopy belief propagation and iterative scaling. This unified propagation and scaling algorithm reduces to a convergent alternative to loopy belief propagation when no constraints are present. Experiments show the viability of our algorithm.
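The iterative-scaling side of the algorithm can be pictured with a toy example. The sketch below (my own illustration, not the paper's algorithm; the joint `q` and target marginal `p_star` are made up) shows a single multiplicative scaling step, which is the minimum-KL projection of a joint distribution onto the set of distributions with a prescribed marginal:

```python
# Hypothetical illustration: one iterative-scaling step enforces a marginal
# constraint on a joint distribution q(x, y) exactly, and is the KL projection
# onto that constraint set.
import numpy as np

rng = np.random.default_rng(0)
q = rng.random((3, 4))
q /= q.sum()                          # arbitrary joint distribution q(x, y)

p_star = np.array([0.5, 0.3, 0.2])    # assumed target marginal for x

# Multiplicative update: q(x, y) <- q(x, y) * p*(x) / q(x)
q_x = q.sum(axis=1)                   # current marginal of x
q_new = q * (p_star / q_x)[:, None]

print(np.allclose(q_new.sum(axis=1), p_star))   # constraint now holds exactly
```

With several overlapping marginal constraints, such updates must be cycled, which is where the connection to message passing on a graph enters.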
Passing And Bouncing Messages For Generalized Inference
, 2001
Abstract

Cited by 2 (1 self)
In this paper we will propose an algorithm which combines BP and IPF into one message passing algorithm on a tree. (More precisely, the fixed points of loopy BP are the stationary points of the Bethe free energy; in practice it turns out that it converges to minima.) The new set of messages reduces to BP messages on the internal nodes of the tree. However, when a message reaches a constrained marginal (other than a "hard evidence" node), it will be "bounced back", while being changed in the process. When the constrained marginal is a delta function (hard observation), the returned message is independent of the incoming message in the usual way. The algorithm requires a scheduling of messages such that the information from a bounced message has reached the node where the next bounce takes place. Unlike BP on a tree (but like IPF on a tree), the combined algorithm does not converge within a finite number of iterations
Passing And Bouncing Messages For Generalized Inference
, 2001
Abstract
Inference on general loopy graphs is an NP-hard problem. Many approximate methods, such as Monte Carlo sampling and variational approximations, have become available over the last decades, each with its own advantages and disadvantages. However, when the graphical structure is a tree, there is an algorithm for doing inference that is only linear in the number
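The linear-time tree algorithm the abstract refers to is the sum-product (belief propagation) recursion. A minimal sketch on the simplest tree, a chain of three binary variables (the potentials `psi` and `phi` are invented for illustration):

```python
# Sketch of linear-time exact inference on a tree: sum-product on a chain
# x1 - x2 - x3 of binary variables. One forward pass over the edges computes
# the exact marginal of the last variable in time linear in the chain length.
import numpy as np

psi = np.array([[1.0, 0.5],
                [0.5, 1.0]])          # pairwise potential psi(x_i, x_{i+1})
phi = np.array([0.6, 0.4])            # unary potential on x1

m = phi.copy()
for _ in range(2):                    # one message per edge: (x1,x2), (x2,x3)
    m = psi.T @ m                     # sum-product message to the next node

p_x3 = m / m.sum()                    # exact marginal p(x3)
print(p_x3)
```

On a general tree the same idea applies with messages passed inward from the leaves, still one message per edge.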
On Improving the Efficiency of the Iterative Proportional Fitting Procedure
Abstract
Iterative proportional fitting (IPF) on junction trees is an important tool for learning in graphical models. We identify the propagation and IPF updates on the junction tree as fixed point equations of a single constrained entropy maximization problem. This allows a more efficient message updating protocol than the well-known effective IPF of Jiroušek and Přeučil (1995). When the junction tree has an intractably large maximum clique size we propose to maximize an approximate constrained entropy based on region graphs (Yedidia et al., 2002). To maximize the new objective we propose a "loopy" version of IPF. We show that this yields accurate estimates of the weights of undirected graphical models in a simple experiment.
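For readers unfamiliar with IPF itself, a toy run of the classic procedure (not the paper's junction-tree version; the table and target marginals are made up) shows the cycling of multiplicative updates that the paper's fixed-point view reorganizes:

```python
# Hypothetical toy example of classic IPF: rescale a 2x2 table so that its
# row and column marginals match fixed targets, cycling the two updates.
import numpy as np

table = np.array([[40.0, 30.0],
                  [35.0, 45.0]])
row_target = np.array([0.6, 0.4])
col_target = np.array([0.3, 0.7])

q = table / table.sum()
for _ in range(50):                              # IPF converges only in the limit
    q *= (row_target / q.sum(axis=1))[:, None]   # enforce row marginals
    q *= (col_target / q.sum(axis=0))[None, :]   # enforce column marginals

print(q.sum(axis=1), q.sum(axis=0))              # both now match the targets
```

Each update enforces one marginal constraint exactly while preserving the table's interaction structure, which is why IPF can be read as coordinate ascent on a constrained entropy objective.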