Results 11–20 of 340
Efficient belief propagation with learned higher-order Markov random fields
 In ECCV, 2006
Abstract

Cited by 59 (2 self)
Belief propagation (BP) has become widely used for low-level vision problems and various inference techniques have been proposed for loopy graphs. These methods typically rely on ad hoc spatial priors such as the Potts model. In this paper we investigate the use of learned models of image structure, and demonstrate the improvements obtained over previous ad hoc models for the image denoising problem. In particular, we show how both pairwise and higher-order Markov random fields with learned clique potentials capture rich image structures that better represent the properties of natural images. These models are learned using the recently proposed Fields-of-Experts framework. For such models, however, traditional BP is computationally expensive. Consequently we propose some approximation methods that make BP with learned potentials practical. In the case of pairwise models we propose a novel approximation of robust potentials using a finite family of quadratics. In the case of higher-order MRFs, with 2 × 2 cliques, we use an adaptive state space to handle the increased complexity. Extensive experiments demonstrate the power of learned models, the benefits of higher-order MRFs and the practicality of BP for these problems with the use of simple principled approximations.
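As a concrete reminder of the machinery these abstracts build on, here is a minimal sum-product BP message update for a pairwise MRF with a Potts potential, the kind of ad hoc spatial prior this paper argues should be replaced by learned potentials. This is an illustrative sketch of our own, not the paper's implementation, and all names are ours.

```python
import numpy as np

def bp_message(psi_pair, psi_unary, incoming):
    """One sum-product BP message update:
    m_{i->j}(x_j) = sum_{x_i} psi_pair(x_i, x_j) * psi_unary(x_i) * incoming(x_i)."""
    belief = psi_unary * incoming      # combine local evidence with incoming messages
    m = psi_pair.T @ belief            # marginalize over the sender's state x_i
    return m / m.sum()                 # normalize for numerical stability

# Potts pairwise potential on 3 labels: 1 on the diagonal, exp(-beta) off-diagonal
beta = 1.0
psi = np.where(np.eye(3, dtype=bool), 1.0, np.exp(-beta))
unary = np.array([0.7, 0.2, 0.1])      # toy local evidence favoring label 0
msg = bp_message(psi, unary, np.ones(3))
```

Running the loop over all edges until the messages stop changing is ordinary loopy BP; the learned-potential setting in the abstract replaces `psi` with a trained model.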
Distributed occlusion reasoning for tracking with nonparametric belief propagation
 In NIPS, 2004
Abstract

Cited by 55 (0 self)
We describe a three-dimensional geometric hand model suitable for visual tracking applications. The kinematic constraints implied by the model’s joints have a probabilistic structure which is well described by a graphical model. Inference in this model is complicated by the hand’s many degrees of freedom, as well as multimodal likelihoods caused by ambiguous image measurements. We use nonparametric belief propagation (NBP) to develop a tracking algorithm which exploits the graph’s structure to control complexity, while avoiding costly discretization. While kinematic constraints naturally have a local structure, self-occlusions created by the imaging process lead to complex interdependencies in color- and edge-based likelihood functions. However, we show that local structure may be recovered by introducing binary hidden variables describing the occlusion state of each pixel. We augment the NBP algorithm to infer these occlusion variables in a distributed fashion, and then analytically marginalize over them to produce hand position estimates which properly account for occlusion events. We provide simulations showing that NBP may be used to refine inaccurate model initializations, as well as track hand motion through extended image sequences.
Location-based activity recognition
 In Advances in Neural Information Processing Systems (NIPS), 2005
Abstract

Cited by 54 (6 self)
Learning patterns of human behavior from sensor data is extremely important for high-level activity inference. We show how to extract and label a person’s activities and significant places from traces of GPS data. In contrast to existing techniques, our approach simultaneously detects and classifies the significant locations of a person and takes the high-level context into account. Our system uses relational Markov networks to represent the hierarchical activity model that encodes the complex relations among GPS readings, activities and significant places. We apply FFT-based message passing to perform efficient summation over large numbers of nodes in the networks. We present experiments that show significant improvements over existing techniques.
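The FFT-based message passing mentioned here rests on the fact that summing many independent contributions amounts to convolving their distributions, which the FFT computes as a product in the frequency domain. A generic sketch of that trick, not the paper's actual code:

```python
import numpy as np

def sum_distribution(p_list):
    """Distribution of a sum of independent discrete variables,
    computed by convolving their pmfs via the FFT."""
    n = sum(len(p) - 1 for p in p_list) + 1   # support size of the sum
    F = np.ones(n, dtype=complex)
    for p in p_list:
        F *= np.fft.fft(p, n)                 # convolution = pointwise product of spectra
    out = np.fft.ifft(F).real
    return np.clip(out, 0.0, None) / out.sum()

coin = np.array([0.5, 0.5])                   # Bernoulli(0.5) pmf
p = sum_distribution([coin] * 4)              # Binomial(4, 0.5) pmf
```

Four fair coins yield the Binomial(4, 0.5) pmf in O(n log n); the same product-in-frequency-domain idea lets a message over a count of active nodes be formed without enumerating all subsets.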
Approximate inference and constrained optimization
 In 19th UAI, 2003
Abstract

Cited by 50 (7 self)
Loopy and generalized belief propagation are popular algorithms for approximate inference in Markov random fields and Bayesian networks. Fixed points of these algorithms correspond to extrema of the Bethe and Kikuchi free energy (Yedidia et al., 2001). However, belief propagation does not always converge, which motivates approaches that explicitly minimize the Kikuchi/Bethe free energy, such as CCCP (Yuille, 2002) and UPS (Teh and Welling, 2002). Here we describe a class of algorithms that solves this typically nonconvex constrained minimization problem through a sequence of convex constrained minimizations of upper bounds on the Kikuchi free energy. Intuitively one would expect tighter bounds to lead to faster algorithms, which is indeed convincingly demonstrated in our simulations. Several ideas are applied to obtain tight convex bounds that yield dramatic speedups over CCCP.
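For reference, the pairwise form of the Bethe free energy whose extrema the abstract refers to (following Yedidia et al.; here $\psi_{ij}$ are pairwise potentials, $\phi_i$ local evidence, $d_i$ the degree of node $i$, and $b$ the pseudo-marginals):

```latex
F_{\mathrm{Bethe}}(b) =
  \sum_{(i,j)\in E} \sum_{x_i, x_j} b_{ij}(x_i, x_j)
    \ln \frac{b_{ij}(x_i, x_j)}{\psi_{ij}(x_i, x_j)\,\phi_i(x_i)\,\phi_j(x_j)}
  \;-\; \sum_i (d_i - 1) \sum_{x_i} b_i(x_i) \ln \frac{b_i(x_i)}{\phi_i(x_i)}
```

The constrained minimization the paper addresses is over $b$ subject to normalization and marginalization consistency, $\sum_{x_j} b_{ij}(x_i, x_j) = b_i(x_i)$.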
Nonparametric belief propagation for self-localization of sensor networks
 IEEE Journal on Selected Areas in Communications, 2005
Abstract

Cited by 49 (3 self)
Automatic self-localization is a critical need for the effective use of ad hoc sensor networks in military or civilian applications. In general, self-localization involves the combination of absolute location information (e.g. GPS) with relative calibration information (e.g. distance measurements between sensors) over regions of the network. Furthermore, it is generally desirable to distribute the computational burden across the network and minimize the amount of inter-sensor communication. We demonstrate that the information used for sensor localization is fundamentally local with regard to the network topology and use this observation to reformulate the problem within a graphical model framework. We then present and demonstrate the utility of nonparametric belief propagation (NBP), a recent generalization of particle filtering, for both estimating sensor locations and representing location uncertainties. NBP has the advantage that it is easily implemented in a distributed fashion, admits a wide variety of statistical models, and can represent multimodal uncertainty. Using simulations of small to moderately-sized sensor networks, we show that NBP may be made robust to outlier measurement errors by a simple model augmentation, and that judicious message construction can result in better estimates. Furthermore, we provide an analysis of NBP’s communications requirements, showing that typically only a few messages per sensor are required, and that even low bit-rate approximations of these messages can have little or no performance impact.
Divergence Measures and Message Passing
, 2005
Abstract

Cited by 48 (2 self)
This paper presents a unifying view of message-passing algorithms, as methods to approximate a complex Bayesian network by a simpler network with minimum information divergence. In this view, the difference between mean-field methods and belief propagation is not the amount of structure they model, but only the measure of loss they minimize (‘exclusive’ versus ‘inclusive’ Kullback-Leibler divergence). In each case, message-passing arises by minimizing a localized version of the divergence, local to each factor. By examining these divergence measures, we can intuit the types of solution they prefer (symmetry-breaking, for example) and their suitability for different tasks. Furthermore, by considering a wider variety of divergence measures (such as alpha-divergences), we can achieve different complexity and performance goals.
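The exclusive/inclusive distinction can be checked numerically: for a bimodal target, an approximation locked onto a single mode scores well under the exclusive KL (which mean-field methods minimize) but poorly under the inclusive KL (which belief propagation minimizes). A small illustrative check with made-up distributions:

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = np.array([0.49, 0.02, 0.49])   # bimodal target distribution
q = np.array([0.98, 0.01, 0.01])   # mode-seeking approximation locked onto one mode

exclusive = kl(q, p)   # KL(q || p): the loss mean-field minimizes (mode-seeking)
inclusive = kl(p, q)   # KL(p || q): the loss BP minimizes (mass-covering)
```

Here `inclusive` is much larger than `exclusive`: ignoring the second mode is cheap under the exclusive divergence but heavily penalized under the inclusive one, which is the symmetry-breaking behavior the abstract alludes to.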
A New Look at Survey Propagation and its Generalizations
Abstract

Cited by 46 (12 self)
We study the survey propagation algorithm [19, 5, 4], which is an iterative technique that appears to be very effective in solving random k-SAT problems even with densities close to threshold. We first describe how any SAT formula can be associated with a novel family of Markov random fields (MRFs), parameterized by a real number ρ. We then show that applying belief propagation—a well-known “message-passing” technique—to this family of MRFs recovers various algorithms, ranging from pure survey propagation at one extreme (ρ = 1) to standard belief propagation on the uniform distribution over SAT assignments at the other extreme (ρ = 0). Configurations in these MRFs have a natural interpretation as generalized satisfiability assignments, on which a partial order can be defined. We isolate cores as minimal elements in this partial order.
MAP Estimation, Linear Programming and Belief Propagation with Convex Free Energies
, 2007
Abstract

Cited by 45 (4 self)
Finding the most probable assignment (MAP) in a general graphical model is known to be NP-hard, but good approximations have been attained with max-product belief propagation (BP) and its variants. In particular, it is known that using BP on a single-cycle graph or tree-reweighted BP on an arbitrary graph will give the MAP solution if the beliefs have no ties. In this paper we extend the setting under which BP can be used to provably extract the MAP. We define Convex BP as BP algorithms based on a convex free energy approximation and show that this class includes ordinary BP on single cycles, tree-reweighted BP and many other BP variants. We show that when there are no ties, fixed points of convex max-product BP will provably give the MAP solution. We also show that convex sum-product BP at sufficiently small temperatures can be used to solve linear programs that arise from relaxing the MAP problem. Finally, we derive a novel condition that allows us to derive the MAP solution even if some of the convex BP beliefs have ties. In experiments, we show that our theorems allow us to find the MAP in many real-world instances of graphical models where exact inference using the junction tree is impossible.
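On a tree, max-product BP recovers the MAP exactly when the beliefs have no ties, which is the base case results like these generalize. A minimal three-node chain where the max-product decode matches brute-force enumeration; the potentials are toy values of our choosing, not from the paper:

```python
import itertools
import numpy as np

# 3-node chain with 2 labels: unaries phi[i], one pairwise potential psi per edge
phi = np.array([[0.9, 0.1], [0.4, 0.6], [0.3, 0.7]])
psi = np.array([[0.8, 0.2], [0.2, 0.8]])          # smoothness-favoring pairwise term

def map_bruteforce():
    """Exact MAP by enumerating all joint assignments."""
    best, best_score = None, -1.0
    for x in itertools.product([0, 1], repeat=3):
        s = (phi[0][x[0]] * phi[1][x[1]] * phi[2][x[2]]
             * psi[x[0], x[1]] * psi[x[1], x[2]])
        if s > best_score:
            best, best_score = x, s
    return best

def map_maxproduct():
    """Max-product BP on the chain: backward messages, then greedy decode."""
    m21 = (psi * phi[2]).max(axis=1)              # message node 3 -> node 2
    m10 = (psi * (phi[1] * m21)).max(axis=1)      # message node 2 -> node 1
    x0 = int(np.argmax(phi[0] * m10))             # decode forward along the chain
    x1 = int(np.argmax(phi[1] * m21 * psi[x0]))
    x2 = int(np.argmax(phi[2] * psi[x1]))
    return (x0, x1, x2)
```

With these potentials both routines return the same assignment; on loopy graphs that guarantee fails in general, which is exactly the gap the convex-BP conditions address.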
Infinite hidden relational models
 In Proceedings of the 22nd International Conference on Uncertainty in Artificial Intelligence (UAI), 2006
Abstract

Cited by 45 (17 self)
Relational learning analyzes the probabilistic constraints between the attributes of entities and relationships. We extend the expressiveness of relational models by introducing for each entity (or object) an infinite-dimensional latent variable as part of a Dirichlet process (DP) mixture model. We discuss inference in the model, which is based on a DP Gibbs sampler, i.e., the Chinese restaurant process. We extend the Chinese restaurant process to be applicable to relational modeling. We discuss how information is propagated in the network of latent variables, reducing the necessity for extensive structural learning. In the context of a recommendation engine our approach realizes a principled solution for recommendations based on features of items, features of users and relational information. Our approach is evaluated in three applications: a recommendation system based on the MovieLens data set, the prediction of gene function using relational information and a medical recommendation system.
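The Chinese restaurant process the abstract mentions can be sampled directly: customer i joins an existing table with probability proportional to its occupancy, or opens a new table with probability proportional to alpha. An illustrative stand-alone sampler, not the paper's relational extension:

```python
import random

def crp(n, alpha, seed=0):
    """Sample a partition of n customers from the Chinese restaurant process."""
    rng = random.Random(seed)
    tables = []                                # occupancy count per table
    assignment = []                            # table index chosen by each customer
    for i in range(n):
        weights = tables + [alpha]             # existing tables, then a new one
        r = rng.uniform(0, i + alpha)          # total weight so far is i + alpha
        acc, choice = 0.0, len(tables)
        for t, w in enumerate(weights):
            acc += w
            if r < acc:
                choice = t
                break
        if choice == len(tables):
            tables.append(1)                   # open a new table
        else:
            tables[choice] += 1                # join an existing table
        assignment.append(choice)
    return assignment, tables

assign, sizes = crp(100, alpha=2.0)
```

In the DP mixture setting each table corresponds to a latent cluster; a Gibbs sampler repeatedly reseats one customer at a time using these same conditional probabilities, weighted by the data likelihood.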
Performance vs Computational Efficiency for Optimizing Single and Dynamic MRFs: Setting the State of the Art with Primal-Dual Strategies
Abstract

Cited by 43 (18 self)
In this paper we introduce a novel method to address minimization of static and dynamic MRFs. Our approach is based on principles from linear programming and, in particular, on primal-dual strategies. It generalizes prior state-of-the-art methods such as α-expansion, while it can also be used for efficiently minimizing NP-hard problems with complex pairwise potential functions. Furthermore, it offers a substantial speedup, of about an order of magnitude, over existing techniques, due to the fact that it exploits information coming not only from the original MRF problem, but also from a dual one. The proposed technique consists of recovering a pair of solutions for the primal and the dual such that the gap between them is minimized. Therefore, it can also boost performance of dynamic MRFs, where one should expect that the new pair of primal-dual solutions is close to the previous one. Promising results in a number of applications, and theoretical, as well as numerical comparisons with the state of the art demonstrate the potential of this approach.