Results 1–10 of 51
Learning low-level vision
 International Journal of Computer Vision
, 2000
Abstract

Cited by 468 (25 self)
We show a learning-based method for low-level vision problems. We set up a Markov network of patches of the image and the underlying scene. A factorization approximation allows us to easily learn the parameters of the Markov network from synthetic examples of image/scene pairs, and to efficiently propagate image information. Monte Carlo simulations justify this approximation. We apply this to the "super-resolution" problem (estimating high-frequency details from a low-resolution image), showing good results. For the motion estimation problem, we show resolution of the aperture problem and filling-in arising from application of the same probabilistic machinery.
Loopy Belief Propagation for Approximate Inference: An Empirical Study
 In Proceedings of Uncertainty in AI
, 1999
Abstract

Cited by 466 (18 self)
Recently, researchers have demonstrated that "loopy belief propagation" (the use of Pearl's polytree algorithm in a Bayesian network with loops) can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performance of "Turbo Codes", codes whose decoding algorithm is equivalent to loopy belief propagation in a chain-structured Bayesian network. In this paper we ask: is there something special about the error-correcting code context, or does loopy propagation work as an approximate inference scheme in a more general setting? We compare the marginals computed using loopy propagation to the exact ones in four Bayesian network architectures, including two real-world networks: ALARM and QMR. We find that the loopy beliefs often converge and when they do, they give a good approximation to the correct marginals. However, on the QMR network, the loopy beliefs oscillated and had no obvious relationship ...
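To make the setting of this empirical study concrete, here is a minimal sketch of loopy sum-product message passing on a tiny pairwise Markov random field (a 3-node cycle with randomly invented potentials), alongside brute-force marginals for comparison. The network, potentials, and function names are illustrative only, not from the paper.

```python
import itertools
import numpy as np

# Pairwise MRF on a 3-node cycle: edges x0-x1, x1-x2, x2-x0, binary states.
# psi[(i, j)] is an invented pairwise potential matrix, indexed [x_i, x_j].
rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 0)]
psi = {e: rng.uniform(0.5, 2.0, size=(2, 2)) for e in edges}

def exact_marginals():
    # Brute-force enumeration of all 2^3 joint configurations.
    p = np.zeros((3, 2))
    for x in itertools.product([0, 1], repeat=3):
        w = 1.0
        for (i, j) in edges:
            w *= psi[(i, j)][x[i], x[j]]
        for i in range(3):
            p[i, x[i]] += w
    return p / p.sum(axis=1, keepdims=True)

def loopy_bp(n_iter=50):
    # m[(i, j)] is the message from node i to node j (a length-2 vector).
    m = {(i, j): np.ones(2) for (i, j) in edges}
    m.update({(j, i): np.ones(2) for (i, j) in edges})
    for _ in range(n_iter):
        new = {}
        for (i, j) in m:
            # Orient the edge potential as phi[x_i, x_j].
            phi = psi[(i, j)] if (i, j) in psi else psi[(j, i)].T
            # Product of messages into i, excluding the one from j.
            incoming = np.ones(2)
            for (k, l) in m:
                if l == i and k != j:
                    incoming *= m[(k, l)]
            msg = phi.T @ incoming          # sum over x_i
            new[(i, j)] = msg / msg.sum()   # normalize for stability
        m = new
    # Belief at each node: product of all incoming messages.
    bel = np.ones((3, 2))
    for (k, l), v in m.items():
        bel[l] *= v
    return bel / bel.sum(axis=1, keepdims=True)
```

On this single-loop network with strictly positive potentials, the loopy beliefs converge to a fixed point, but (unlike on a tree) they need not equal the exact marginals, which is the phenomenon the paper studies empirically.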
On the Optimality of Solutions of the Max-Product Belief Propagation Algorithm in Arbitrary Graphs
, 2001
Abstract

Cited by 185 (15 self)
Graphical models, such as Bayesian networks and Markov random fields, represent statistical dependencies of variables by a graph. The max-product "belief propagation" algorithm is a local message-passing algorithm on this graph that is known to converge to a unique fixed point when the graph is a tree. Furthermore, when the graph is a tree, the assignment based on the fixed point yields the most probable a posteriori (MAP) values of the unobserved variables given the observed ones. Recently, good ...
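The tree-exactness property summarized above can be illustrated with a minimal max-product (Viterbi-style) sketch on a 3-node chain, where the fixed-point assignment provably matches the brute-force MAP. The potentials and function names here are invented for illustration, not taken from the paper.

```python
import itertools
import numpy as np

# Max-product on a 3-node chain x0 - x1 - x2 (a tree, so the result is exact).
rng = np.random.default_rng(1)
psi01 = rng.uniform(0.5, 2.0, (2, 2))  # invented potential, indexed [x0, x1]
psi12 = rng.uniform(0.5, 2.0, (2, 2))  # invented potential, indexed [x1, x2]

def brute_force_map():
    # Enumerate all 2^3 assignments and keep the highest-weight one.
    best, best_w = None, -1.0
    for x in itertools.product([0, 1], repeat=3):
        w = psi01[x[0], x[1]] * psi12[x[1], x[2]]
        if w > best_w:
            best, best_w = x, w
    return best

def max_product_map():
    # Forward pass: max-messages toward x2, with argmax bookkeeping.
    m01 = psi01.max(axis=0)                    # m01[x1] = max over x0
    b01 = psi01.argmax(axis=0)
    m12 = (m01[:, None] * psi12).max(axis=0)   # m12[x2] = max over x1
    b12 = (m01[:, None] * psi12).argmax(axis=0)
    # Backward pass: decode the maximizing assignment.
    x2 = int(m12.argmax())
    x1 = int(b12[x2])
    x0 = int(b01[x1])
    return (x0, x1, x2)
```

On graphs with cycles, the same message updates can still be run, and characterizing when their fixed points remain optimal is exactly the question this paper addresses.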
Correctness of Local Probability Propagation in Graphical Models with Loops
, 2000
Abstract

Cited by 178 (9 self)
This article analyzes the behavior of local propagation rules in graphical models with a loop.
Nonparametric Belief Propagation for Self-Calibration in Sensor Networks
 In Proceedings of the Third International Symposium on Information Processing in Sensor Networks
, 2004
Abstract

Cited by 84 (7 self)
Automatic self-calibration of ad-hoc sensor networks is a critical need for their use in military or civilian applications. In general, self-calibration involves the combination of absolute location information (e.g. GPS) with relative calibration information (e.g. time delay or received signal strength between sensors) over regions of the network. Furthermore, it is generally desirable to distribute the computational burden across the network and minimize the amount of inter-sensor communication. We demonstrate that the information used for sensor calibration is fundamentally local with regard to the network topology and use this observation to reformulate the problem within a graphical model framework. We then demonstrate the utility of nonparametric belief propagation (NBP), a recent generalization of particle filtering, for both estimating sensor locations and representing location uncertainties. NBP has the advantage that it is easily implemented in a distributed fashion, admits a wide variety of statistical models, and can represent multimodal uncertainty. We illustrate the performance of NBP on several example networks while comparing to a previously published nonlinear least squares method.
Mini-buckets: A general scheme for bounded inference
 Journal of the ACM (JACM)
Abstract

Cited by 57 (20 self)
This article presents a class of approximation algorithms that extend the idea of bounded-complexity inference, inspired by successful constraint propagation algorithms, to probabilistic inference and combinatorial optimization. The idea is to bound the dimensionality of dependencies created by inference algorithms. This yields a parameterized scheme, called mini-buckets, that offers an adjustable tradeoff between accuracy and efficiency. The mini-bucket approach to optimization problems, such as finding the most probable explanation (MPE) in Bayesian networks, generates both an approximate solution and bounds on the solution quality. We present empirical results demonstrating successful performance of the proposed approximation scheme for the MPE task, both on randomly generated problems and on realistic domains such as medical diagnosis and probabilistic decoding.
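The bounding idea behind mini-buckets (split a variable's bucket of functions into pieces that each stay within the dimensionality bound, and maximize each piece separately) can be sketched on a toy MPE problem: since the max of a product never exceeds the product of maxes, the scheme yields a guaranteed upper bound. The functions and values below are invented for illustration, not from the paper.

```python
import itertools
import numpy as np

# Toy MPE problem over binary variables A, B, C with two invented factors.
rng = np.random.default_rng(2)
f1 = rng.uniform(0.1, 1.0, (2, 2))  # f1[a, b]
f2 = rng.uniform(0.1, 1.0, (2, 2))  # f2[a, c]

def exact_mpe():
    # Exact MPE value by enumerating all 2^3 assignments.
    return max(f1[a, b] * f2[a, c]
               for a, b, c in itertools.product([0, 1], repeat=3))

def minibucket_bound():
    # i-bound 1: A's bucket {f1, f2} is split into two mini-buckets,
    # each maximized over A separately. Because
    #   max_a f1(a,b) * f2(a,c) <= (max_a f1(a,b)) * (max_a f2(a,c)),
    # the resulting value upper-bounds the true MPE value.
    g1 = f1.max(axis=0)  # g1[b]
    g2 = f2.max(axis=0)  # g2[c]
    return max(g1[b] * g2[c] for b, c in itertools.product([0, 1], repeat=2))
```

Tightening the i-bound (allowing larger mini-buckets) recovers exact bucket elimination at the cost of exponentially larger intermediate functions, which is the accuracy/efficiency tradeoff the article parameterizes.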
Mini-Buckets: A General Scheme for Approximating Inference
 Journal of ACM
, 1998
Abstract

Cited by 46 (16 self)
The paper presents a class of approximation algorithms that extend the idea of bounded inference, inspired by successful constraint propagation algorithms, to probabilistic inference and combinatorial optimization. The idea is to bound the dimensionality of dependencies created by inference algorithms. This yields a parameterized scheme, called mini-buckets, that offers adjustable levels of accuracy and efficiency. The mini-bucket approach generates both an approximate solution and a bound on the solution quality. We present empirical results demonstrating successful performance of the proposed approximation scheme for probabilistic tasks, both on randomly generated problems and on realistic domains such as medical diagnosis and probabilistic decoding.
Maximum weight matching via maxproduct belief propagation
 in International Symposium of Information Theory
, 2005
Abstract

Cited by 46 (7 self)
The max-product "belief propagation" algorithm is an iterative, local, message-passing algorithm for finding the maximum a posteriori (MAP) assignment of a discrete probability distribution specified by a graphical model. Despite the spectacular success of the algorithm in many application areas such as iterative decoding and computer vision, which involve graphs with many cycles, theoretical convergence results are only known for graphs which are tree-like or have a single cycle. In this paper, we consider a weighted complete bipartite graph and define a probability distribution on it whose MAP assignment corresponds to the maximum weight matching (MWM) in that graph. We analyze the fixed points of the max-product algorithm when run on this graph and prove the surprising result that even though the underlying graph has many short cycles, the max-product assignment converges to the correct MAP assignment. We also provide a bound on the number of iterations required by the algorithm.
On the Convergence of Iterative Decoding on Graphs with a Single Cycle
 In Proc. 1998 ISIT
, 1998
Abstract

Cited by 44 (1 self)
It is now understood [7, 8] that the turbo decoding algorithm is an instance of a probability propagation algorithm (PPA) on a graph with many cycles. However, PPA-type algorithms are known to give exact results only when the underlying graph is cycle-free. Thus it is important to study the "approximate correctness" of PPAs on graphs with cycles. In this paper we make a first step by discussing the behavior of a PPA in graphs with a single cycle. This work is directly relevant to the study of iterative decoding of tail-biting codes, whose underlying graph has just one cycle [3], [12]. First, we shall show that for strictly positive local kernels, the iterations of the PPA will always converge to the same fixed point regardless of the scheduling order used. Moreover, the length of the cycle does not play a role in this convergence. Secondly, we shall generalize a result of McEliece and Rodemich [9], by showing that if the hidden variables in the cycle are binary-valued, a decision based...
Learning to estimate scenes from images
 Adv. Neural Information Processing Systems 11
, 1999
Abstract

Cited by 38 (6 self)
We seek the scene interpretation that best explains image data.