Efficient Belief Propagation for Higher Order Cliques Using Linear Constraint Nodes (2008)
Citations: 8 (2 self)
Citations
8897 | Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference
- Pearl
- 1988
Citation Context ... m^t_{i→f}(x_i) ∝ ∏_{g∈N(i)\f} m^{t−1}_{g→i}(x_i) (2), m^t_{f→i}(x_i) ∝ ∫_{x_{N(f)\i}} φ_f(x_{N(f)}) ∏_{j∈N(f)\i} m^t_{j→f}(x_j) dx (3), b^t_i(x_i) ∝ ∏_{g∈N(i)} m^t_{g→i}(x_i) (4), where f and g are factor nodes, i and j are variable nodes, and N(i) is the set of neighbors of node i (see [10,11,7] for further details on classic belief propagation). The integrand of equation 3 sums over a function whose range is of dimensionality N(i) − 1, and the integral must be evaluated for each value of x... |
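The context above quotes the classic sum-product updates (equations 2-4). Purely as an illustrative sketch (not the paper's code), the discrete factor-to-variable update of equation 3 can be computed by brute force as below; the toy potential table and message are invented, and the cost grows exponentially with the number of variables attached to the factor, which is the bottleneck the paper's linear constraint nodes address.

```python
import numpy as np

def msg_factor_to_var(phi, in_msgs, target_axis):
    """Brute-force discrete version of equation 3: multiply the factor table
    by every incoming variable-to-factor message except the target's, then
    sum out all axes other than the target variable's."""
    out = phi.astype(float).copy()
    for axis, msg in enumerate(in_msgs):
        if axis == target_axis or msg is None:
            continue
        shape = [1] * out.ndim
        shape[axis] = msg.size
        out = out * msg.reshape(shape)
    other_axes = tuple(a for a in range(out.ndim) if a != target_axis)
    m = out.sum(axis=other_axes)
    return m / m.sum()   # normalization keeps the messages well scaled

# Toy factor over two 3-state variables (hypothetical numbers).
phi = np.array([[0.9, 0.1, 0.1],
                [0.1, 0.9, 0.1],
                [0.1, 0.1, 0.9]])
msg_from_x1 = np.array([0.2, 0.5, 0.3])       # incoming message m_{x1 -> f}
print(msg_factor_to_var(phi, [None, msg_from_x1], target_axis=0))
```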
5113 | Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images
- Geman, Geman
- 1984
Citation Context ...s associated variables to be equal. Potential functions of three or more variables have similar constraints [13]. These constraints are compatible with many simple models of spatial priors for images [14]. However, many real-world applications require potential functions that violate these constraints, such as the Lambertian constraint in shape-from-shading [9], hard linear constraints of the type dis... |
3482 | Conditional random fields: Probabilistic models for segmenting and labeling sequence data
- Lafferty, McCallum, et al.
- 2001
Citation Context ...Note that this model is capable of performing denoising in a variety of other noise circumstances, such as non-Gaussian or multiplicative noise. The factor graph in figure 3 is also a conditional MRF [38], where the observed, noisy pixel values are not explicitly represented as variable nodes. Instead, the Gaussian likelihood potential functions are absorbed into the factor nodes neighboring each pixe... |
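The absorption step described above (folding the Gaussian likelihood of each observed, noisy pixel into a neighboring factor rather than adding an explicit observation node) can be sketched as follows; the noise level, the coarse 4-level intensity grid, and the smoothness potential are all invented for illustration.

```python
import numpy as np

sigma = 20.0                                   # assumed noise standard deviation
x_states = np.linspace(0.0, 255.0, 4)          # coarse grid of candidate clean intensities
smoothness = np.exp(-np.abs(x_states[:, None] - x_states[None, :]) / 30.0)

def conditioned_factor(y_observed):
    """Pairwise factor over two neighboring hidden pixels after absorbing the
    Gaussian likelihood N(y | x, sigma^2) of the first pixel's observation."""
    likelihood = np.exp(-0.5 * ((y_observed - x_states) / sigma) ** 2)
    return smoothness * likelihood[:, None]

print(conditioned_factor(y_observed=128.0).round(3))
```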
2120 | Fast approximate energy minimization via graph cuts
- Boykov, Veksler, et al.
Citation Context ...for each element of the vector ~xi. Example factor graphs are depicted in figures 2 and 3. One popular method of statistical inference that exploits factorized probability distributions is graph cuts [11]. Graph cuts are a method of estimating the MAP point estimate of a factorized distribution that is based on the max-flow min-cut theorem for graphs. For graph cuts to work, the potential functions in... |
1047 | What energy functions can be minimized via graph cuts
- Kolmogorov, Zabih
- 2004
Citation Context ... MAP point estimate of a factorized distribution that is based on the max-flow min-cut theorem for graphs. For graph cuts to work, the potential functions in equation 1 must meet a set of constraints [12]. Specifically, each potential function must be regular. For potential functions of two variables, this means that for any three variable states x1, x2, and α, we must have φ(x1, x2) + φ(α, α) ≥ φ(α, ... |
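As a quick, generic illustration of the regularity condition quoted above (stated there for potentials, so larger values are preferred; for energies the inequality flips), a brute-force check over all state triples might look like this; the Potts-style table is an invented example.

```python
import itertools
import numpy as np

def is_regular(phi, tol=1e-12):
    """Check phi(x1, x2) + phi(a, a) >= phi(a, x2) + phi(x1, a) for all states."""
    n = phi.shape[0]
    return all(phi[x1, x2] + phi[a, a] + tol >= phi[a, x2] + phi[x1, a]
               for x1, x2, a in itertools.product(range(n), repeat=3))

# A Potts-style potential that rewards equal labels satisfies the condition.
potts = np.where(np.eye(4, dtype=bool), 1.0, 0.5)
print(is_regular(potts))   # True
```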
953 | A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics
- Martin, Fowlkes, et al.
- 2001
Citation Context ...PSNR = 22.11) 26.66 26.09 27.29 27.29 27.40. Table 1: Peak signal-to-noise ratio (in decibels) for pairwise and higher-order models, averaged over the ten images from the Berkeley segmentation database [37] used in [4]. Denoising using linear constraint nodes with 2 × 2 FoEs outperforms both belief propagation on pairwise MRFs and gradient descent on identical FoEs. figure 4b). The object is to remove t... |
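For reference, the metric reported in the quoted Table 1 is peak signal-to-noise ratio; a minimal implementation is below. The toy check also suggests that σ = 20 additive Gaussian noise on an 8-bit image works out to roughly the 22.11 dB input figure that appears in the quoted row (the random image and seed here are arbitrary).

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in decibels: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(estimate, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(256, 256)).astype(float)
noisy = clean + rng.normal(0.0, 20.0, size=clean.shape)
print(round(psnr(clean, noisy), 2))   # approximately 22.1 dB
```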
676 | Loopy belief propagation for approximate inference: an empirical study”, - Murphy, Weiss, et al. - 1999 |
578 | Learning low-level vision
- Freeman, Pasztor
- 1999
Citation Context ...cases, applications of belief propagation have nearly always been restricted to pairwise connected Markov Random Fields, where each potential function in equation 1 depends on only two variable nodes [1,2]. However, pairwise connected models are often insufficient to capture the full complexity of the joint distribution of the problem. In this section, we describe methods to efficiently compute belief ... |
515 | Efficient belief propagation for early vision.
- Felzenszwalb, Huttenlocher
- 2006
Citation Context ... vi = ±1 for all i, and messages are represented as uniform-width histograms, then each integrand in equation 9 can be reduced to an O(M log M) computation using discrete Fourier transforms as in [17]. Although we describe our approach for sum-product belief propagation, the same approach is valid for max-product belief propagation. For max-product belief propagation, each maximal in equation 9 ca... |
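The O(M log M) reduction mentioned above relies on the fact that, for a linear constraint with coefficients v_i = ±1, the sum inside the message integrand becomes a convolution of uniform-width histogram messages. A generic sketch (not the paper's implementation; the constraint x0 = x1 + x2 and the random histograms are invented for the example):

```python
import numpy as np

def msg_via_fft(m1, m2):
    """Message to x0 under a hard constraint x0 = x1 + x2: summing the product
    of histogram messages over {x1 + x2 = x0} is a discrete convolution, which
    FFTs evaluate in O(M log M) instead of the O(M^2) direct sum."""
    n = m1.size + m2.size - 1
    out = np.fft.irfft(np.fft.rfft(m1, n) * np.fft.rfft(m2, n), n)
    return out / out.sum()

m1 = np.random.rand(64); m1 /= m1.sum()     # hypothetical incoming histograms
m2 = np.random.rand(64); m2 /= m2.sum()
direct = np.convolve(m1, m2); direct /= direct.sum()
print(np.allclose(msg_via_fft(m1, m2), direct))   # True: same message, lower cost
```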
512 | Image denoising using scale mixtures of Gaussians in the wavelet domain
- Portilla, Strela, et al.
- 2003
Citation Context ... range of high-dimensional potential functions. These include all axis-aligned generalized Gaussian distributions and Gaussian Scale Mixtures, which are popular for natural image models and denoising [27]. Since additional nonlinear potential functions of pairs of variables yi and yi+1 can be embedded into equation 25 at no additional computational cost, many non-axis-aligned Gaussians and other poten... |
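For concreteness, a Gaussian Scale Mixture of the kind mentioned above is simply a weighted mixture of zero-mean Gaussians of different widths along a filter response; the scales and mixture weights in this sketch are made up.

```python
import numpy as np

def gsm_density(r, scales, mix):
    """Heavy-tailed Gaussian Scale Mixture density evaluated at filter responses r."""
    mix = np.asarray(mix, float) / np.sum(mix)
    comps = [w * np.exp(-0.5 * (r / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
             for w, s in zip(mix, scales)]
    return np.sum(comps, axis=0)

r = np.linspace(-10.0, 10.0, 5)
print(gsm_density(r, scales=[0.5, 2.0, 8.0], mix=[0.6, 0.3, 0.1]))
```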
488 | Convergent tree-reweighted message passing for energy minimization
- Kolmogorov
Citation Context ...he “maximals” of max-product belief propagation. Another method for improving the performance and convergence properties of the original belief propagation equation is to use tree reweighting methods [45,46]. Tree-reweighted extensions to belief propagation apply to both sum-product belief propagation [45] and max-product belief propagation [46]. In [46], tree-reweighting methods ensure convergence for a ... |
473 | Generalized belief propagation
- Yedidia, Freeman, et al.
- 2000
Citation Context ... m^t_{i→f}(x_i) ∝ ∏_{g∈N(i)\f} m^{t−1}_{g→i}(x_i) (2), m^t_{f→i}(x_i) ∝ ∫_{x_{N(f)\i}} φ_f(x_{N(f)}) ∏_{j∈N(f)\i} m^t_{j→f}(x_j) dx (3), b^t_i(x_i) ∝ ∏_{g∈N(i)} m^t_{g→i}(x_i) (4), where f and g are factor nodes, i and j are variable nodes, and N(i) is the set of neighbors of node i (see [10,11,7] for further details on classic belief propagation). The integrand of equation 3 sums over a function whose range is of dimensionality N(i) − 1, and the integral must be evaluated for each value of x... |
350 | Stereo Matching Using Belief Propagation
- Sun, Shum, et al.
- 2002
Citation Context ...nnected models. The ability to use more accurate models of image or range image priors has the potential to significantly aid the performance of several computer vision applications, including stereo [2], photometric stereo [3], shape-from-shading [8], image-based rendering [9], segmentation, and matting [6]. 2. Belief Propagation Belief propagation is a... |
327 | The Radon Transform and Some of Its Applications
- Deans
- 1983
Citation Context ...10 becomes P̃(x) = ∫_{|v|=1} g_v(x · v) dv (13). Now consider the Radon transform R[P(x)](ρ, v) = ∫ P(x) δ(x · v − ρ) dx (14), where v is constrained to be of unit norm. The adjoint of the Radon transform [26] has the form R†[ψ(ρ, v)](x) = ∫_{|v|=1} ψ(x · v, v) dv (15). The Radon transform is invertible [26], and since the adjoint of an invertible function is itself invertible, equation 15 is also invertibl... |
295 | Correctness of belief propagation in Gaussian graphical models of arbitrary topology
- Weiss, Freeman
Citation Context ...lts in a number of fields [12,13,1–3]. Later, several theoretical results demonstrated that belief propagation could be expected to achieve high-quality approximations in a variety of circumstances [14,15]. More recently, it was shown that when sum-product belief propagation converges, the resulting marginals form a minimum of the Bethe free energy, a quantity from statistical physics which can be thoug... |
292 | Fields of experts: A framework for learning image priors
- Roth, Black
- 2005
Citation Context ... approximating multivariate, continuous probability distributions P(x). Projection pursuit density estimation [18], Minimax Entropy and FRAME [19,20], Products of Experts [21], and Fields of Experts [22] all work by approximating distributions P(x) as products of linear constraint nodes (as in equation 10). Previously, performing inference over these graphical models typically required using gradien... |
292 | Approximating probabilistic inference in Bayesian belief networks is NP-hard
- Dagum, Luby
- 1993
Citation Context ...probabilistic models can be simplified by factoring large probability distributions using graphical models. Unfortunately, statistical inference in arbitrary factorized distributions is still NP-hard [1]. Recently, the method of loopy belief propagation has been shown to produce excellent results in several real-world computer vision problems [2–7]. However, this method has some drawbacks. The most s... |
278 | Nonparametric belief propagation
- Sudderth, Ihler, et al.
- 2003
Citation Context ... parameters of a scene, such as 3D shape or surface material, which often have bimodal or multimodal messages and beliefs. The shape-from-shading application of [8] and the facial appearance model of [32] are two examples of applications of belief propagation with highly bimodal messages and marginals. Among those problems where the Gaussian approximation is effective, many can be solved more simply u... |
241 | On the optimality of solutions of the max-product belief-propagation algorithm in arbitrary graphs
- Weiss, Freeman
Citation Context ...lts in a number of fields [12,13,1–3]. Later, several theoretical results demonstrated that belief propagation could be expected to achieve high-quality approximations in a variety of circumstances [14,15]. More recently, it was shown that when sum-product belief propagation converges, the resulting marginals form a minimum of the Bethe free energy, a quantity from statistical physics which can be thoug... |
231 | Filters, random fields and maximum entropy (FRAME): Towards a unified theory for texture modeling
- Zhu, Wu, et al.
- 1996
Citation Context ... P̃(x) = ∏_{k=1}^{K} g_k(x · v_k) (10) have been very successful in approximating multivariate, continuous probability distributions P(x). Projection pursuit density estimation [18], Minimax Entropy and FRAME [19,20], Products of Experts [21], and Fields of Experts [22] all work by approximating distributions P(x) as products of linear constraint nodes (as in equation 10). Previously, performing inference over t... |
225 | A New Class of Upper Bounds on the Log Partition Function
- Wainwright, Jaakkola, et al.
- 2005
Citation Context ...he “maximals” of max-product belief propagation. Another method for improving the performance and convergence properties of the original belief propagation equation is to use tree reweighting methods [45,46]. Tree-reweighted extensions to belief propagation apply to both sum-product belief propagation [45] and max-product belief propagation [46]. In [46], tree-reweighting methods ensure convergence for a ... |
222 | Minimax entropy principle and its applications to texture modeling
- Zhu, Wu, et al.
- 1997
Citation Context ... P̃(x) = ∏_{k=1}^{K} g_k(x · v_k) (10) have been very successful in approximating multivariate, continuous probability distributions P(x). Projection pursuit density estimation [18], Minimax Entropy and FRAME [19,20], Products of Experts [21], and Fields of Experts [22] all work by approximating distributions P(x) as products of linear constraint nodes (as in equation 10). Previously, performing inference over t... |
189 | Products of Experts
- Hinton
- 1999
Citation Context ...have been very successful in approximating multivariate, continuous probability distributions P(x). Projection pursuit density estimation [18], Minimax Entropy and FRAME [19,20], Products of Experts [21], and Fields of Experts [22] all work by approximating distributions P(x) as products of linear constraint nodes (as in equation 10). Previously, performing inference over these graphical models typi... |
139 | Iterative decoding of compound codes by probability propagation in graphical models. - Kschischang, Frey - 1998 |
137 | CCCP algorithms to minimize the Bethe and Kikuchi free energies: Convergent alternatives to belief propagation.
- Raymond, Yuille
- 2002
Citation Context ...plication of belief propagation to networks with loops. Furthermore, this discovery has led to new belief propagation methods that minimize the Bethe free energy directly, and are guaranteed to converge [16,7]. The computational shortcuts we describe in section 3 are compatible with these convergent variants of belief propagation, and in section 6, we will discuss these convergent methods in greater detail... |
112 | An iterative optimization approach for unified image segmentation and matting
- Wang, Cohen
- 2005
Citation Context ...computer vision applications, including stereo [2], photometric stereo [3], shape-from-shading [8], image-based rendering [9], segmentation, and matting [6]. 2. Belief Propagation Belief propagation is a method for estimating the single-variate marginals of a multivariate probability distribution of the form: p(X) ∝ ∏_i φ_i(x_i), x_i ⊂ X (1). Such probability di... |
109 | Understanding belief propagation and its generalizations. In: Exploring Artificial Intelligence in the New Millennium
- Yedidia, Freeman, et al.
- 2003
Citation Context ... the distance between the true multivariate probability distribution and the estimated single-variate marginals [19]. The quality of this approximation improves as larger cliques are grouped together [42]. As an extreme example, consider that any probability distribution can be represented by a factor graph with a single factor node connected to each variable node. Inference using belief propagation i... |
102 | P3 & beyond: Solving energies with higher order cliques.
- Kohli, Kumar, et al.
- 2007
Citation Context ...α, α) ≥ φ(α, x2) + φ(x1, α) (2). Intuitively, regular potential functions must encourage their associated variables to be equal. Potential functions of three or more variables have similar constraints [13]. These constraints are compatible with many simple models of spatial priors for images [14]. However, many real-world applications require potential functions that violate these constraints, such as ... |
98 | Discriminative random fields
- Kumar, Hebert
- 2006
Citation Context ...s, are all highly successful in computer vision, and have contributed to algorithms for image denoising [33], shape-from-stereo [34], motion [35], texture classification [36], region classification [37], and segmentation [38]. Previous statistical inference approaches that exploited multiple-resolution methods were limited to simple Gaussian models or gradient descent optimization methods. The use o... |
97 | A Simple Approach to Bayesian Network Computations
- Zhang, Poole
- 1994
Citation Context ...ch exponential in N, the dimensionality of the clique). When P(X) can be factorized (as in equation 3), single-variate marginals can be computed efficiently using the Variable Elimination Algorithm [40]. Note that this algorithm differs from belief propagation in that, rather than computing all single-variate marginals of a distribution, the elimination algorithm finds the marginal of only one variabl... |
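The Variable Elimination Algorithm mentioned above can be illustrated on a three-variable chain: to obtain one marginal, variables are summed out one at a time against only the factors that mention them, instead of materializing the full joint table. The chain p(a, b, c) ∝ f1(a, b) f2(b, c) and the random tables are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
f1 = rng.random((4, 4))          # hypothetical potential over (a, b)
f2 = rng.random((4, 4))          # hypothetical potential over (b, c)

# Eliminate c, then b, to get the marginal of a.
g = f2.sum(axis=1)                           # g(b) = sum_c f2(b, c)
p_a = (f1 * g[None, :]).sum(axis=1)          # sum_b f1(a, b) g(b)
p_a /= p_a.sum()

# Brute-force check against the full joint table.
joint = f1[:, :, None] * f2[None, :, :]
p_a_check = joint.sum(axis=(1, 2))
p_a_check /= p_a_check.sum()
print(np.allclose(p_a, p_a_check))           # True
```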
92 | What makes a good model of natural images - Weiss, Freeman - 2007 |
86 | A general algorithm for approximate inference and its application to hybrid Bayes nets
- Koller, Lerner, et al.
- 1999
Citation Context ... m̃_{f→i}(x_i) = Σ_{m=1}^{M} w^{(m)}_{fi} B_{β_i^{(m−1)}, β_i^{(m)}}(x_i) (48), where B_{β0, β1}(x) = 1 if x ∈ [β0, β1) and 0 otherwise (49). Variable-width bin histograms have been used successfully to improve the speed and performance of the join tree algorithm [34,35]. Here we show that such a representation, when applied to belief propagation, can overcome the obstacles encountered in applying particle-based representations to Heskes’ guaranteed-convergent LBP va... |
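The variable-width bin representation of equations 48-49 quoted above amounts to a piecewise-constant message with non-uniform breakpoints. A minimal sketch of evaluating such a message (the bin edges and weights are invented; finer bins sit where the belief is peaked):

```python
import numpy as np

def eval_binned_message(x, edges, weights):
    """Evaluate a variable-width-bin message: weights[m] on [edges[m], edges[m+1]),
    zero outside the covered range."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    idx = np.searchsorted(edges, x, side='right') - 1
    inside = (idx >= 0) & (idx < len(weights))
    out = np.zeros_like(x)
    out[inside] = np.asarray(weights)[idx[inside]]
    return out

edges = np.array([-10.0, -1.0, -0.25, 0.0, 0.25, 1.0, 10.0])
weights = np.array([0.01, 0.10, 0.60, 0.60, 0.10, 0.01])
print(eval_binned_message([-0.1, 0.5, 5.0, 20.0], edges, weights))
```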
81 | Efficient Belief Propagation with learned higher-order Markov random fields
- Lan, Roth, et al.
- 2006
Citation Context ... Fields of Experts filters. Recently, an attempt was made at performing inference in Fields of Experts models using loopy belief propagation, and the approach was tested on an image denoising problem [4]. The authors showed that using three 2×2 Fields of Experts filters yields a significant improvement over pairwise models. In their approach, the authors mitigate the computational complexity of equat... |
79 | On the uniqueness of loopy belief propagation fixed points.
- Heskes
- 2004
Citation Context ...hat it is not guaranteed to converge. Convergence becomes increasingly unlikely when the factor graph contains many tight loops, or when potential functions are “high energy,” or nearly deterministic [30]. The application to higher-order spatial priors in section 8 contains a high number of very tight loops. Also, applications that use hard linear constraint nodes (such as the shape-from-shading appli... |
79 | Location-based activity recognition
- Fox
- 2007
Citation Context ...her useful application of hard linear constraint nodes is the ability to aggregate over a set of local data to compute global features, such as by summing over several variable nodes. For example, in [39], the authors seek to infer the location and activity of a person from a stream of several days’ worth of GPS coordinates. In order to place a prior over the number of times a given activity occurs in ... |
77 | Nonuniform dynamic discretization in hybrid networks
- Kozlov, Koller
- 1997
Citation Context ... m̃_{f→i}(x_i) = Σ_{m=1}^{M} w^{(m)}_{fi} B_{β_i^{(m−1)}, β_i^{(m)}}(x_i) (48), where B_{β0, β1}(x) = 1 if x ∈ [β0, β1) and 0 otherwise (49). Variable-width bin histograms have been used successfully to improve the speed and performance of the join tree algorithm [34,35]. Here we show that such a representation, when applied to belief propagation, can overcome the obstacles encountered in applying particle-based representations to Heskes’ guaranteed-convergent LBP va... |
73 | Projection pursuit density estimation.
- Friedman, Stuetzle, et al.
- 1984
Citation Context ... nodes, of the form P(x) ≈ P̃(x) = ∏_{k=1}^{K} g_k(x · v_k) (10) have been very successful in approximating multivariate, continuous probability distributions P(x). Projection pursuit density estimation [18], Minimax Entropy and FRAME [19,20], Products of Experts [21], and Fields of Experts [22] all work by approximating distributions P(x) as products of linear constraint nodes (as in equation 10). Prev... |
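Equation 10 as quoted above is a product of one-dimensional expert functions applied to linear projections of x. A small sketch of evaluating such an unnormalized density on an image patch (the two difference filters and the Student-t-style expert are invented, in the spirit of Fields-of-Experts models):

```python
import numpy as np

def product_of_experts(x, filters, g):
    """Unnormalized density of the form in equation 10: prod_k g(x . v_k).
    For simplicity every expert shares one nonlinearity g."""
    responses = filters @ x            # the K linear projections x . v_k
    return float(np.prod(g(responses)))

filters = np.array([[1.0, -1.0, 0.0, 0.0],    # horizontal difference on a 2x2 patch
                    [1.0, 0.0, -1.0, 0.0]])   # vertical difference
student_t = lambda r: (1.0 + 0.5 * r ** 2) ** -1.5   # heavy-tailed expert
patch = np.array([0.20, 0.25, 0.21, 0.24])
print(product_of_experts(patch, filters, student_t))
```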
63 | Multiscale Bayesian segmentation using a trainable context model.
- Cheng, Bouman
- 2001
Citation Context ...ssful in computer vision, and have contributed to algorithms for image denoising [33], shape-from-stereo [34], motion [35], texture classification [36], region classification [37], and segmentation [38]. Previous statistical inference approaches that exploited multiple-resolution methods were limited to simple Gaussian models or gradient descent optimization methods. The use of hard linear constrain... |
62 | Approximate inference and constrained optimization.
- Heskes, Albers, et al.
- 2003
Citation Context ...sentation of belief propagation messages that is simultaneously compatible with higher-order non-pairwise interactions and also with recent extensions to belief propagation that guarantee convergence [7]. These advancements allow us to efficiently solve inference problems that were previously unavailable to belief propagation. In section 8, we show that a prior model of natural images using 2 × 2 MRF... |
39 | Efficient belief propagation for vision using linear constraint nodes
- Potetz
- 2007
Citation Context ...lication to Higher-Order Spatial Priors Several state-of-the-art computer vision algorithms use belief propagation. A number of these, including stereo [2], photometric stereo [3], shape-from-shading [8], image-based rendering [9], segmentation and matting [6] work over a grid at the pixel level. These algorithms solve ambiguous and underconstrained problems, where having a strong prior for images or ... |
31 | Enforcing integrability for surface reconstruction algorithms using belief propagation in graphical models - Petrovic, Cohen, et al. - 2001 |
27 | A Tourist Guide Through Treewidth, Acta Cybernetica
- Bodlaender
- 1993
Citation Context ...the variable elimination algorithm is O(N M^{T+1}), where M is the number of states of each variable, and T is the treewidth of the Markov random field (MRF) underlying the factorization of P(X) (see [29] for a review of the treewidth of a graph). Unless the graph is dense, T + 1 is typically less than the dimensionality of X, and so the variable elimination algorithm represents a substantial improvem... |
27 | Wavelets for texture analysis, an overview
- Livens, Scheunders, et al.
- 1997
Citation Context ... and image-pyramid techniques, are all highly successful in computer vision, and have contributed to algorithms for image denoising [33], shape-from-stereo [34], motion [35], texture classification [36], region classification [37], and segmentation [38]. Previous statistical inference approaches that exploited multiple-resolution methods were limited to simple Gaussian models or gradient descent opt... |
26 | Hierarchical model-based motion estimation. In: ECCV
- Bergen, Anandan, et al.
- 1992
Citation Context ... as wavelet-domain processing and image-pyramid techniques, are all highly successful in computer vision, and have contributed to algorithms for image denoising [33], shape-from-stereo [34], motion [35], texture classification [36], region classification [37], and segmentation [38]. Previous statistical inference approaches that exploited multiple-resolution methods were limited to simple Gaussian m... |
18 | Dense photometric stereo using tensorial belief propagation. CVPR
- Tang, Tang, et al.
- 2005
Citation Context ...ge image priors has the potential to significantly aid the performance of several computer vision applications, including stereo [2], photometric stereo [3], shape-from-shading [8], image-based rendering [9], segmentation, and matting [6]. 2. Belief Propagation Belief propagation is a method for estimating the single-variate marginals of a multivariate pr... |
18 | Utilizing variational optimization to learn markov random fields, in: CVPR 2007 - Tappen - 2007 |
18 | Shape Matching with Belief Propagation: Using Dynamic Quantization to Accommodate Occlusion and Clutter
- Coughlan, Shen
- 2004
Citation Context ... split high-likelihood bins apart. This strategy is related to some previous works that adaptively restrict the search space of belief propagation to only those states with high predicted likelihoods [36]. Another strategy is to run a special, single iteration of belief propagation where each bin is first split into 2 or 3 bins. Following this high-resolution iteration, bins can be recombined until on... |
15 | Fields of experts for image-based rendering
- Woodford, Reid, et al.
Citation Context ...aid the performance of several computer vision applications, including stereo [2], photometric stereo [3], shape-from-shading [8], image-based rendering [9], segmentation, and matting [6]. 2. Belief Propagation Belief propagation is a method for estimating the single-variate marginals of a multivariate probability distribution of the form: p(X) ∝ ∏_i φ_i(x_i)... |
5 | Algorithms from statistical physics for generative models of images - Coughlan, Yuille - 2003 |
5 | An algorithm to minimize the Bethe free energy
- Yuille
- 2001
Citation Context ...ginals minimize a quantity from statistical physics known as the Bethe free energy [11]. This has led to the development of belief propagation algorithms that minimize the Bethe free energy directly [31,7], and do so while ensuring convergence. In the examples presented here, we use the algorithm described in [7], which modifies equations 2 and 3 by: m^t_{i→f}(x_i) = m^t_{i→f}(x_i)^{(1−n_i)/n_i} ..., m^t_{f→i}(x_i) = ... |
5 | Pampas: Real-valued graphical models for computer vision
- Isard
- 2003
Citation Context ... m̃_{f→i}(x_i) = Σ_{m=1}^{M} w^{(m)}_{fi} φ_f(x_i, µ^{(m)}_{N(f)\i,f}) (42), w^{(m)}_{fi} = ∏_{j∈N(f)\i} w^{(m)}_{jf} (43), where µ^{(m)}_{N(f)\i,f} is a vector composed of the mth particles from each message m̃_{j→f} such that j ∈ N(f) \ i [33]. If φ_f is not of a simple form, then it is helpful to perform an additional step where we define m̃_{f→i}(x_i) by sampling from equation 42, to simplify subsequent computations [32]. In this case, let µ ... |
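The particle form of equations 42-43 quoted above represents the outgoing message as a mixture of the factor evaluated at the incoming messages' particles, weighted by the product of the incoming particle weights. A toy sketch with one neighboring variable and a smoothness factor (all values invented):

```python
import numpy as np

def particle_factor_message(phi, particles, weights, x_grid):
    """Equations 42-43 in miniature: mixture weights are the product of the
    incoming particle weights (43); the message is the weighted sum of
    phi(x_i, mu_m) over particles mu_m (42), evaluated on a grid of x_i values."""
    w = np.prod(weights, axis=0)                 # one weight per particle index m
    w = w / w.sum()
    return sum(wm * phi(x_grid, mu) for wm, mu in zip(w, particles.T))

phi = lambda x, mu: np.exp(-0.5 * (x - mu) ** 2)   # hypothetical smoothness factor
particles = np.array([[-1.0, 0.0, 2.0]])           # one neighbor j, M = 3 particles
weights = np.array([[0.2, 0.5, 0.3]])              # its particle weights w_{jf}^{(m)}
x_grid = np.linspace(-3.0, 3.0, 7)
print(particle_factor_message(phi, particles, weights, x_grid))
```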
2 | Signal matching through scale space, in: Readings in computer vision: issues, problems, principles, and paradigms
- Witkin, Terzopoulis, et al.
- 1987
Citation Context ...proaches such as wavelet-domain processing and image-pyramid techniques, are all highly successful in computer vision, and have contributed to algorithms for image denoising [33], shape-from-stereo [34], motion [35], texture classification [36], region classification [37], and segmentation [38]. Previous statistical inference approaches that exploited multiple-resolution methods were limited to simp... |