Results 1–10 of 34
Video epitomes
In Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2005
Cited by 35 (0 self)
Recently, “epitomes” were introduced as patch-based probability models that are learned by compiling together a large number of example patches from input images. In this paper, we describe how epitomes can be used to model video data, and we describe significant computational speedups that can be incorporated into the epitome inference and learning algorithm. In the case of video, epitomes are estimated so as to model most of the small space-time cubes from the input data. The epitome can then be used for various modeling and reconstruction tasks, of which we show results for video super-resolution, video interpolation, and object removal. Besides computational efficiency, an interesting advantage of the epitome as a representation is that it can be reliably estimated even from videos with large amounts of missing data. We illustrate this ability on the task of reconstructing dropped frames in a video broadcast using only the degraded video, and also in denoising a severely corrupted video.
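The patch-compilation idea can be illustrated with a minimal hard-assignment sketch (an illustrative toy, not the paper's algorithm: a real epitome is a probability model with per-pixel variances and soft posterior mappings over placements; all names and the plain-averaging scheme here are assumptions):

```python
import numpy as np

def learn_epitome(image, epi_size=8, patch=4, iters=3, seed=0):
    """Hard-EM sketch: map every image patch to its best-matching
    placement in a small 'epitome' of mean values, then re-estimate
    each epitome pixel as the mean of all patch pixels mapped to it."""
    rng = np.random.default_rng(seed)
    epi = rng.random((epi_size, epi_size))
    H, W = image.shape
    patches = [image[i:i + patch, j:j + patch]
               for i in range(H - patch + 1)
               for j in range(W - patch + 1)]
    for _ in range(iters):
        acc = np.zeros_like(epi)
        cnt = np.zeros_like(epi)
        for p in patches:
            # "E-step" (hard): best epitome placement by squared error
            best, pos = None, (0, 0)
            for a in range(epi_size - patch + 1):
                for b in range(epi_size - patch + 1):
                    err = np.sum((epi[a:a + patch, b:b + patch] - p) ** 2)
                    if best is None or err < best:
                        best, pos = err, (a, b)
            a, b = pos
            acc[a:a + patch, b:b + patch] += p
            cnt[a:a + patch, b:b + patch] += 1
        # "M-step": average the patches assigned to each epitome pixel
        epi = np.where(cnt > 0, acc / np.maximum(cnt, 1), epi)
    return epi
```

Each iteration maps every patch to its best placement and re-averages; the probabilistic version replaces the argmin with a posterior over placements, which is what makes estimation robust to missing data.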
Efficient MRF deformation model for non-rigid image matching
In Proc. IEEE Conf. Comput. Vis. Pattern Recog. (CVPR), 2007
Cited by 27 (0 self)
We propose a novel MRF-based model for deformable image matching. Given two images, the task is to estimate a mapping from one image to the other that maximizes the quality of the match. We consider mappings defined by a discrete deformation field constrained to preserve 2D continuity, and pose the task as finding MAP configurations of a pairwise MRF. We propose a more compact MRF representation of the problem, which leads to a weaker, though computationally more tractable, linear programming relaxation – the approximation technique we choose to apply. The number of dual LP variables grows linearly with the side of the search window, rather than quadratically as in previous approaches. To solve the relaxed problem (suboptimally), we apply the TRW-S (sequential tree-reweighted message passing) algorithm [13, 5]. Using our representation and the chosen optimization scheme, we are able to match much wider deformations than previously considered in a global optimization framework. We further elaborate on the continuity and data terms to achieve a more appropriate description of smooth deformations. The performance of our technique is demonstrated in both synthetic and real-world experiments.
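The energy such a pairwise MRF assigns can be sketched as a unary data term plus a pairwise continuity term over neighboring displacements (a toy with squared penalties; the function name and penalty forms are assumptions, and the paper's contribution, the compact LP relaxation for minimizing energies of this kind, is not reproduced here):

```python
import numpy as np

def deformation_energy(src, dst, flow, lam=1.0):
    """Toy pairwise-MRF energy of a discrete deformation field.
    data term:   squared intensity difference at displaced positions
    smooth term: squared difference of neighboring displacement vectors
    flow has shape (H, W, 2) of integer (di, dj) displacements."""
    H, W = src.shape
    data = 0.0
    for i in range(H):
        for j in range(W):
            di, dj = flow[i, j]
            ii = min(max(i + di, 0), H - 1)  # clamp to image bounds
            jj = min(max(j + dj, 0), W - 1)
            data += (src[i, j] - dst[ii, jj]) ** 2
    smooth = 0.0
    for i in range(H):
        for j in range(W):
            if i + 1 < H:  # vertical neighbor (continuity of the field)
                smooth += np.sum((flow[i, j] - flow[i + 1, j]) ** 2)
            if j + 1 < W:  # horizontal neighbor
                smooth += np.sum((flow[i, j] - flow[i, j + 1]) ** 2)
    return data + lam * smooth
```

Minimizing this over all integer flows is exactly the MAP problem the abstract poses; the paper's LP relaxation keeps the number of dual variables linear in the search-window side.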
Unsupervised Segmentation of Objects using Efficient Learning
Cited by 21 (0 self)
We describe an unsupervised method to segment objects detected in images using a novel variant of an interest point template, which is very efficient to train and evaluate. Once an object has been detected, our method segments an image using a Conditional Random Field (CRF) model. This model integrates image gradients, the location and scale of the object, the presence of object parts, and the tendency of these parts to have characteristic patterns of edges nearby. We enhance our method using multiple unsegmented images of objects to learn the parameters of the CRF in an iterative conditional maximization framework. We show quantitative results on images of real scenes that demonstrate the accuracy of segmentation.
Efficient unsupervised learning for localization and detection in object categories
In Advances in Neural Information Processing Systems 18, 2005
Cited by 18 (1 self)
We describe a novel method for learning templates for recognition and localization of objects drawn from categories. A generative model represents the configuration of multiple object parts with respect to an object coordinate system; these parts in turn generate image features. The model's complexity in the number of features is low, so our model is much more efficient to train than comparable methods. Moreover, a variational approximation is introduced that allows learning to be orders of magnitude faster than previous approaches while incorporating many more features, resulting in improvements in both accuracy and localization. Our model has been carefully tested on standard datasets; we compare with a number of recent template models and, in particular, demonstrate state-of-the-art results for detection and localization.
Loop series and Bethe variational bounds in attractive graphical models
2008
Cited by 14 (0 self)
Variational methods are frequently used to approximate or bound the partition or likelihood function of a Markov random field. Methods based on mean field theory are guaranteed to provide lower bounds, whereas certain types of convex relaxations provide upper bounds. In general, loopy belief propagation (BP) often provides accurate approximations, but not bounds. We prove that for a class of attractive binary models, the so-called Bethe approximation associated with any fixed point of loopy BP always lower-bounds the true likelihood. Empirically, this bound is much tighter than the naive mean field bound, and requires no more work than running BP. We establish these lower bounds using a loop series expansion due to Chertkov and Chernyak, which we show can be derived as a consequence of the tree-reparameterization characterization of BP fixed points.
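The naive mean field lower bound that the Bethe/BP bound is compared against can be checked numerically on a tiny attractive binary model (a sketch under assumed ±1 spins with couplings J and fields h; the BP-based Bethe bound itself is not implemented here):

```python
import itertools
import math
import numpy as np

def logZ_exact(J, h):
    """Brute-force log partition function of a binary (+-1) MRF with
    symmetric zero-diagonal couplings J and fields h."""
    n = len(h)
    vals = [0.5 * s @ J @ s + h @ s
            for s in (np.array(t) for t in
                      itertools.product([-1, 1], repeat=n))]
    m = max(vals)  # log-sum-exp for numerical stability
    return m + math.log(sum(math.exp(v - m) for v in vals))

def mean_field_bound(J, h, iters=200):
    """Naive mean field: coordinate ascent on magnetizations
    m_i = tanh(h_i + sum_j J_ij m_j), then evaluate the variational
    lower bound  E_q[-E] + H(q)  on log Z."""
    n = len(h)
    m = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            m[i] = math.tanh(h[i] + J[i] @ m)
    energy = 0.5 * m @ J @ m + h @ m
    ent = 0.0  # entropy of independent +-1 variables with means m
    for mi in m:
        for p in ((1 + mi) / 2, (1 - mi) / 2):
            if p > 0:
                ent -= p * math.log(p)
    return energy + ent
```

Because E_q[−E] + H(q) lower-bounds log Z for any product distribution q, the bound holds whether or not the coordinate ascent has fully converged; the paper's point is that the Bethe value at a BP fixed point is a much tighter bound of the same kind for attractive models.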
Program verification as probabilistic inference
In Proc. POPL, 2007
Cited by 8 (3 self)
In this paper, we propose a new algorithm for proving the validity or invalidity of a pre/postcondition pair for a program. The algorithm is motivated by the success of the algorithms for probabilistic inference developed in the machine learning community for reasoning in graphical models. The validity or invalidity proof consists of providing an invariant at each program point that can be locally verified. The algorithm works by iteratively selecting a program point at random and updating the current abstract state representation to make it more locally consistent (with respect to the abstractions at the neighboring points). We show that this simple algorithm has some interesting aspects: (a) it brings together the complementary powers of forward and backward analyses; (b) it has the ability to recover from excessive under-approximation or over-approximation that it may make (because the algorithm does not distinguish between the forward and backward information, the information can become both under-approximated and over-approximated at any step); (c) the randomness in the algorithm ensures that the correct choice of updates is eventually made, as there is no single deterministic strategy that would provably work for any interesting class of programs. In our experiments we use this algorithm to produce the proof of correctness of a small (but non-trivial) example. In addition, we empirically illustrate several important properties of the algorithm.
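The random local-update loop can be mimicked in miniature by randomized chaotic iteration of an interval analysis over a toy loop (an illustrative sketch under stated assumptions: only forward information is propagated and the domain, program, and helper names are invented, whereas the paper's algorithm combines forward and backward analyses):

```python
import random

BOT = None  # unreachable / bottom interval

def join(a, b):
    if a is BOT: return b
    if b is BOT: return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def meet_lt(a, k):  # a ∩ {x < k}
    if a is BOT or a[0] >= k: return BOT
    return (a[0], min(a[1], k - 1))

def meet_ge(a, k):  # a ∩ {x >= k}
    if a is BOT or a[1] < k: return BOT
    return (max(a[0], k), a[1])

def add1(a):  # transfer function of  x := x + 1
    return BOT if a is BOT else (a[0] + 1, a[1] + 1)

def analyze(n=10, steps=3000, seed=0):
    """Randomized chaotic iteration for:  x := 0; while x < n: x := x + 1
    st[0]: after init   st[1]: loop head   st[2]: body entry
    st[3]: after x+=1   st[4]: loop exit
    Each step picks a random program point and makes it (more)
    consistent with its predecessors."""
    rng = random.Random(seed)
    st = [BOT] * 5
    updates = {
        0: lambda: (0, 0),              # x := 0
        1: lambda: join(st[0], st[3]),  # head = init ∪ back-edge
        2: lambda: meet_lt(st[1], n),   # guard x < n holds
        3: lambda: add1(st[2]),         # body increments x
        4: lambda: meet_ge(st[1], n),   # guard fails on exit
    }
    for _ in range(steps):
        p = rng.randrange(5)
        st[p] = join(st[p], updates[p]())  # monotone, so order-insensitive
    return st
```

Here monotonicity makes any random update order converge to the same invariants; the paper's algorithm is more delicate precisely because mixing forward and backward information is not monotone, which is where aspects (b) and (c) above come in.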
Bayesian Separation of Images Modeled With MRFs Using MCMC
Cited by 6 (2 self)
We investigate the source separation problem for random fields within a Bayesian framework. The Bayesian formulation enables the incorporation of prior image models in the estimation of sources. Due to the intractability of the analytical solution, we resort to numerical methods for the joint maximization of the a posteriori distribution of the unknown variables and parameters. We construct the prior densities of pixels using Markov random fields based on a statistical model of the gradient image, and we use a fully Bayesian method with modified Gibbs sampling. We contrast our work to approximate Bayesian solutions such as Iterated Conditional Modes (ICM) and to non-Bayesian solutions of the ICA variety. The performance of the method is tested on synthetic mixtures of texture images and astrophysical images under various noise scenarios. The proposed method is shown to significantly outperform both its approximate Bayesian and non-Bayesian competitors. Index Terms—Astrophysical images, Bayesian source separation,
Efficiently Learning Random Fields for Stereo Vision with Sparse Message Passing
Cited by 4 (0 self)
As richer models for stereo vision are constructed, there is a growing interest in learning model parameters. To estimate parameters in Markov Random Field (MRF) based stereo formulations, one usually needs to perform approximate probabilistic inference. Message passing algorithms based on variational methods and belief propagation are widely used for approximate inference in MRFs. Conditional Random Fields (CRFs) are discriminative versions of traditional MRFs and have recently been applied to the problem of stereo vision. However, CRF parameter training typically requires expensive inference steps for each iteration of optimization; inference is particularly slow when there are many discrete disparity levels, due to the high cardinality of the state space. We present a novel CRF for stereo matching with an explicit occlusion model and propose sparse message passing to dramatically accelerate the approximate inference needed for parameter optimization. We show that sparse variational message passing iteratively minimizes the KL divergence between the approximating and model distributions by optimizing a lower bound on the partition function. Our experimental results show reductions in inference time of an order of magnitude with no loss in approximation quality. Learning using sparse variational message passing improves results over prior work using graph cuts.
Probabilistic image modeling with an extended chain graph for human activity recognition and image segmentation
In IEEE Transactions on Image Processing, 2011
Cited by 4 (0 self)
A chain graph (CG) is a hybrid probabilistic graphical model (PGM) capable of modeling heterogeneous relationships among random variables. So far, however, its application in image and video analysis has been very limited, due to the lack of principled learning and inference methods for a CG of general topology. To overcome this limitation, we extend the conventional chain-like CG model to a CG model with more general topology and introduce the associated methods for learning and inference in such a general CG model. Specifically, we propose techniques to systematically construct a generally structured CG, to parameterize this model, to derive its joint probability distribution, to perform joint parameter learning, and to perform probabilistic inference in this model. To demonstrate the utility of such an extended CG, we apply it to two challenging image and video analysis problems: human activity recognition and image segmentation. The experimental results show improved performance of the extended CG model over conventional directed or undirected PGMs. This study demonstrates the promise of the extended CG for effective modeling and inference in complex real-world problems. Index Terms—Activity recognition, Bayesian networks (BNs), chain graph (CG), factor graph (FG), graphical model learning
On Variational Message Passing on Factor Graphs
2007
Cited by 3 (1 self)
In this paper, it is shown how (naive and structured) variational algorithms may be derived from a factor graph by mechanically applying generic message computation rules; in this way, one can bypass error-prone variational calculus. In prior work by Bishop et al., Xing et al., and Geiger, directed and undirected graphical models have been used for this purpose. The factor graph notation yields simpler generic variational message computation rules; by means of factor graphs, variational methods can straightforwardly be compared to and combined with various other message-passing inference algorithms, e.g., Kalman filters and smoothers, iterated conditional modes, expectation maximization (EM), gradient methods, and particle filters. Some of these combinations have been explored in the literature; others appear to be new. Generic message computation rules for such combinations are formulated.