Results 1 – 10 of 68
A comparative study of energy minimization methods for Markov random fields
In ECCV, 2006
"... Abstract. One of the most exciting advances in early vision has been the development of efficient energy minimization algorithms. Many early vision tasks require labeling each pixel with some quantity such as depth or texture. While many such problems can be elegantly expressed in the language of Ma ..."
Abstract

Cited by 254 (24 self)
 Add to MetaCart
Abstract. One of the most exciting advances in early vision has been the development of efficient energy minimization algorithms. Many early vision tasks require labeling each pixel with some quantity such as depth or texture. While many such problems can be elegantly expressed in the language of Markov Random Fields (MRFs), the resulting energy minimization problems were widely viewed as intractable. Recently, algorithms such as graph cuts and loopy belief propagation (LBP) have proven to be very powerful: for example, such methods form the basis for almost all the top-performing stereo methods. Unfortunately, most papers define their own energy function, which is minimized with a specific algorithm of their choice. As a result, the tradeoffs among different energy minimization algorithms are not well understood. In this paper we describe a set of energy minimization benchmarks, which we use to compare the solution quality and running time of several common energy minimization algorithms. We investigate three promising recent methods (graph cuts, LBP, and tree-reweighted message passing) as well as the well-known older iterated conditional modes (ICM) algorithm. Our benchmark problems are drawn from published energy functions used for stereo, image stitching and interactive segmentation. We also provide a general-purpose software interface that allows vision researchers to easily switch between optimization methods with minimal overhead. We expect that the availability of our benchmarks and interface will make it significantly easier for vision researchers to adopt the best method for their specific problems. Benchmarks, code, results and images are available at
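The ICM baseline mentioned in this abstract is simple enough to sketch. Below is a minimal, illustrative implementation for a Potts-style energy (unary data costs plus a pairwise penalty for unequal neighbor labels); the function names and this specific energy are assumptions for illustration, not the paper's benchmark code:

```python
import numpy as np

def icm(unary, lam=1.0, n_iters=5):
    """Iterated conditional modes for a Potts-style MRF energy.

    unary : (H, W, L) array; unary[i, j, l] = data cost of label l at pixel (i, j).
    lam   : weight of the Potts smoothness term (penalty per disagreeing neighbor).
    """
    H, W, L = unary.shape
    labels = unary.argmin(axis=2)          # start from the unary-only solution
    for _ in range(n_iters):
        for i in range(H):
            for j in range(W):
                costs = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        # Potts penalty: lam for every neighbor with a different label
                        costs += lam * (np.arange(L) != labels[ni, nj])
                labels[i, j] = costs.argmin()   # greedy coordinate-descent step
    return labels
```

Each pass greedily re-labels every pixel to minimize its local energy given its neighbors' current labels, which is why ICM runs fast but only reaches a local minimum, the behavior the paper's benchmarks quantify.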
Filters, Random Fields and Maximum Entropy . . .
INTERNATIONAL JOURNAL OF COMPUTER VISION, 1998
"... This article presents a statistical theory for texture modeling. This theory combines filtering theory and Markov random field modeling through the maximum entropy principle, and interprets and clarifies many previous concepts and methods for texture analysis and synthesis from a unified point of vi ..."
Abstract

Cited by 199 (17 self)
 Add to MetaCart
This article presents a statistical theory for texture modeling. This theory combines filtering theory and Markov random field modeling through the maximum entropy principle, and interprets and clarifies many previous concepts and methods for texture analysis and synthesis from a unified point of view. Our theory characterizes the ensemble of images I with the same texture appearance by a probability distribution f(I) on a random field, and the objective of texture modeling is to make inference about f(I), given a set of observed texture examples. In our theory, texture modeling consists of two steps. (1) A set of filters is selected from a general filter bank to capture features of the texture; these filters are applied to observed texture images, and the histograms of the filtered images are extracted. These histograms are estimates of the marginal distributions of f(I). This step is called feature extraction. (2) The maximum entropy principle is employed to derive a distribution p(I), which is restricted to have the same marginal distributions as those in (1). This p(I) is considered as an estimate of f(I). This step is called feature fusion. A stepwise algorithm is proposed to choose filters from a general filter bank. The resulting model, called FRAME (Filters, Random fields And Maximum Entropy), is a Markov random field (MRF) model, but with a much enriched vocabulary and hence much stronger descriptive ability than the previous MRF models used for texture modeling. A Gibbs sampler is adopted to synthesize texture images by drawing typical samples from p(I); thus the model is verified by seeing whether the synthesized texture images have similar visual appearances.
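Step (1), feature extraction, can be sketched in a few lines: convolve the image with each filter and histogram the responses. The tiny filter bank below is a toy stand-in for the paper's general bank, chosen only for illustration:

```python
import numpy as np

def convolve2d_valid(img, kern):
    """Plain 'valid'-mode 2-D cross-correlation, kept dependency-free."""
    kh, kw = kern.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def frame_features(img, filters, n_bins=15):
    """Step (1) of FRAME: filter the image, then histogram each response.

    Returns one normalized histogram per filter -- an empirical estimate
    of the marginal distribution of f(I) under that filter.
    """
    feats = []
    for kern in filters:
        resp = convolve2d_valid(img, kern)
        hist, _ = np.histogram(resp, bins=n_bins)
        feats.append(hist / hist.sum())    # normalize to a probability vector
    return feats

# toy filter bank: raw intensity plus x/y finite-difference (gradient) filters
FILTERS = [np.array([[1.0]]),
           np.array([[-1.0, 1.0]]),
           np.array([[-1.0], [1.0]])]
```

Step (2) would then fit the maximum-entropy distribution whose marginals match these histograms, which is the computationally hard part the paper addresses.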
On Conditional and Intrinsic Autoregressions
1995
"... This paper discusses standard and intrinsic autoregressions and describes how the problems that arise can be alleviated using Dempster's (1972) algorithm or an appropriate modification. The approach partly represents a synthesis of standard geostatistical and Gaussian Markov random field formul ..."
Abstract

Cited by 82 (6 self)
 Add to MetaCart
This paper discusses standard and intrinsic autoregressions and describes how the problems that arise can be alleviated using Dempster's (1972) algorithm or an appropriate modification. The approach partly represents a synthesis of standard geostatistical and Gaussian Markov random field formulations. Some non-spatial applications are also mentioned. Some key words: Agricultural experiments; Bayesian image analysis; Conditional autoregressions; Dempster's algorithm; Geographical epidemiology; Geostatistics; Intrinsic autoregressions; Multiway tables; Prior distributions; Spatial statistics; Surface reconstruction; Texture analysis.
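For a concrete sense of the Gaussian Markov random field side of this synthesis, here is a small sketch (not from the paper) that builds the precision matrix tau*(D - W) of a first-order intrinsic autoregression on a line of n sites; its rows sum to zero, which reflects the impropriety of the intrinsic prior:

```python
import numpy as np

def intrinsic_car_precision(n, tau=1.0):
    """Precision matrix tau*(D - W) of a first-order intrinsic autoregression
    on a line of n sites, where W is the neighbor adjacency matrix and D the
    diagonal of neighbor counts. The matrix is singular: the intrinsic prior
    is improper and only defines a distribution on contrasts."""
    W = np.zeros((n, n))
    for i in range(n - 1):
        W[i, i + 1] = W[i + 1, i] = 1.0     # chain neighbors i ~ i+1
    D = np.diag(W.sum(axis=1))
    return tau * (D - W)
```

Replacing the chain adjacency with a lattice adjacency gives the spatial priors used in the Bayesian image-analysis applications the keywords mention.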
Pseudolikelihood estimation for social networks
Journal of the American Statistical Association, 1990
"... ..."
ML parameter estimation for Markov random fields, with applications to Bayesian tomography
IEEE Trans. on Image Processing, 1998
"... Abstract 1 Markov random fields (MRF) have been widely used to model images in Bayesian frameworks for image reconstruction and restoration. Typically, these MRF models have parameters that allow the prior model to be adjusted for best performance. However, optimal estimation of these parameters (so ..."
Abstract

Cited by 49 (18 self)
 Add to MetaCart
Markov random fields (MRFs) have been widely used to model images in Bayesian frameworks for image reconstruction and restoration. Typically, these MRF models have parameters that allow the prior model to be adjusted for best performance. However, optimal estimation of these parameters (sometimes referred to as hyperparameters) is difficult in practice for two reasons: 1) direct parameter estimation for MRFs is known to be mathematically and numerically challenging; 2) parameters cannot be directly estimated because the true image cross-section is unavailable. In this paper, we propose a computationally efficient scheme to address both these difficulties for a general class of MRF models, and we derive specific methods of parameter estimation for the MRF model known as a generalized Gaussian MRF (GGMRF). The first section of the paper derives methods of direct estimation of scale and shape parameters for a general continuously valued MRF. For the GGMRF case, we show that the ML estimate of the scale parameter, σ, has a simple closed form solution, and we present an efficient scheme for computing the ML estimate of the shape parameter, p, by an offline numerical computation of the dependence of the partition function on p.
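The closed-form scale estimate can be illustrated as follows. Under the common GGMRF scale parametrization p(x) proportional to σ^(-N) exp(-u(x)/(p σ^p)) with u(x) = Σ b|x_i - x_j|^p, setting the derivative of the log-likelihood -N log σ - u(x)/(p σ^p) to zero gives σ^p = u(x)/N. The sketch below assumes unit neighbor weights b and a 4-neighbor system for simplicity (the paper allows general coefficients):

```python
import numpy as np

def ggmrf_energy(x, p):
    """u(x) = sum over horizontal and vertical neighbor pairs of |x_i - x_j|^p
    (unit coefficients; a real GGMRF allows general weights b)."""
    dx = np.abs(np.diff(x, axis=0)) ** p
    dy = np.abs(np.diff(x, axis=1)) ** p
    return dx.sum() + dy.sum()

def ml_sigma(x, p):
    """Closed-form ML scale estimate: sigma_hat = (u(x) / N)^(1/p),
    where N is the number of pixels."""
    return (ggmrf_energy(x, p) / x.size) ** (1.0 / p)
```

A useful sanity check is the scaling property: multiplying the image by a constant c multiplies u(x) by c^p and hence the estimate by exactly c.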
Meaningful Alignments
1999
"... We propose a method for detecting geometric structures in an image, without any a priori information. Roughly speaking, we say that an observed geometric event is "meaningful" if the expectation of its occurences would be very small in a random image. We discuss the apories of this definit ..."
Abstract

Cited by 45 (9 self)
 Add to MetaCart
We propose a method for detecting geometric structures in an image, without any a priori information. Roughly speaking, we say that an observed geometric event is "meaningful" if the expectation of its occurrences would be very small in a random image. We discuss the aporias of this definition, and solve several of them by introducing "maximal meaningful events" and analyzing their structure. This methodology is applied to the detection of alignments in images.
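The "expectation of occurrences" criterion can be sketched concretely: an alignment of k roughly collinear orientations out of l points, each aligned by chance with probability p, is declared meaningful when the number of tests times the binomial tail probability of the observation falls below a threshold. The parameter names below are illustrative, not the paper's notation:

```python
from math import comb

def binom_tail(l, k, p):
    """P(B(l, p) >= k): probability of at least k successes in l trials."""
    return sum(comb(l, i) * p**i * (1 - p)**(l - i) for i in range(k, l + 1))

def is_meaningful(n_tests, l, k, p, eps=1.0):
    """Declare an event eps-meaningful when its expected number of occurrences
    in a random image, n_tests * P(B(l, p) >= k), is below eps."""
    return n_tests * binom_tail(l, k, p) < eps
```

A fully aligned segment (k = l with small p) passes easily, while a mild coincidence fails once multiplied by the number of candidate segments tested.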
Learning factor graphs in polynomial time and sample complexity
JMLR, 2006
"... We study the computational and sample complexity of parameter and structure learning in graphical models. Our main result shows that the class of factor graphs with bounded degree can be learned in polynomial time and from a polynomial number of training examples, assuming that the data is generated ..."
Abstract

Cited by 45 (0 self)
 Add to MetaCart
We study the computational and sample complexity of parameter and structure learning in graphical models. Our main result shows that the class of factor graphs with bounded degree can be learned in polynomial time and from a polynomial number of training examples, assuming that the data is generated by a network in this class. This result covers both parameter estimation for a known network structure and structure learning. It implies as a corollary that we can learn factor graphs for both Bayesian networks and Markov networks of bounded degree, in polynomial time and sample complexity. Importantly, unlike standard maximum likelihood estimation algorithms, our method does not require inference in the underlying network, and so applies to networks where inference is intractable. We also show that the error of our learned model degrades gracefully when the generating distribution is not a member of the target class of networks. In addition to our main result, we show that the sample complexity of parameter learning in graphical models has an O(1) dependence on the number of variables in the model when using the KL-divergence normalized by the number of variables as the performance criterion.
Exploring Texture Ensembles by Efficient Markov Chain Monte Carlo – Towards a "Trichromacy" Theory of Texture
1999
"... This article presents a mathematical denition of texture { the Julesz ensemble h), which is the set of all images (defined on Z²) that share identical statistics h. Then texture modeling is posed as an inverse problem: given a set of images sampled from an unknown Julesz ensemble h ), we search f ..."
Abstract

Cited by 33 (13 self)
 Add to MetaCart
This article presents a mathematical definition of texture: the Julesz ensemble Ω(h), which is the set of all images (defined on Z²) that share identical statistics h. Then texture modeling is posed as an inverse problem: given a set of images sampled from an unknown Julesz ensemble Ω(h), we search for the statistics h which define the ensemble. A Julesz ensemble Ω(h) has an associated probability distribution q(I; h), which is uniform over the images in the ensemble and has zero probability outside. In a companion paper [32], q(I; h) is shown to be the limit distribution of the FRAME (Filter, Random Field, And Minimax Entropy) model [35] as the image lattice goes to Z². This conclusion establishes the intrinsic link between the scientific definition of texture on Z² and the mathematical models of texture on finite lattices. It brings two advantages to computer vision. 1) The engineering practice of synthesizing texture images by matching statistics has been put on a mathematical fou...
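The idea of sampling from a distribution that is (approximately) uniform over images matching given statistics can be sketched with a toy Metropolis sampler. Here the statistic is a plain gray-level histogram rather than the paper's filter-response histograms, an assumption made purely to keep the sketch short:

```python
import numpy as np

def sample_julesz(h_target, shape, n_levels, n_steps=20000, temp=0.001, seed=0):
    """Toy sampler for a Julesz-style ensemble: anneal single-pixel Metropolis
    moves until the image's gray-level histogram matches h_target.
    (The paper matches filter-response histograms instead.)"""
    rng = np.random.default_rng(seed)
    img = rng.integers(0, n_levels, size=shape)

    def dist(im):
        # L1 distance between the image's histogram and the target statistics
        h = np.bincount(im.ravel(), minlength=n_levels) / im.size
        return np.abs(h - h_target).sum()

    d = dist(img)
    for _ in range(n_steps):
        i, j = rng.integers(shape[0]), rng.integers(shape[1])
        old = img[i, j]
        img[i, j] = rng.integers(n_levels)        # propose a single-pixel change
        d_new = dist(img)
        # Metropolis: accept improvements; rarely accept increases at low temp
        if d_new <= d or rng.random() < np.exp((d - d_new) / temp):
            d = d_new
        else:
            img[i, j] = old                       # reject: restore the pixel
    return img, d
```

Once the statistic distance is near zero, the chain wanders within the ensemble, producing typical (maximum-entropy) samples rather than any single optimum.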
Stochastic Relaxation on Partitions with Connected Components and Its Application to Image Segmentation
1998
"... We present a new method of segmentation in which images are segmented by partitions with connected components. For this, first we define two different types of neighborhoods on the space of partitions with connected components of a general graph; neighborhoods of the first type are simple but sma ..."
Abstract

Cited by 30 (0 self)
 Add to MetaCart
We present a new method of segmentation in which images are segmented by partitions with connected components. For this, we first define two different types of neighborhoods on the space of partitions with connected components of a general graph; neighborhoods of the first type are simple but small, while those of the second type are large but complex. Second, we give computationally inexpensive algorithms for probability simulation and simulated annealing on such spaces using these neighborhoods. In particular, Hastings algorithms and generalized Metropolis algorithms are defined to avoid heavy computations in the case of the second type of neighborhoods. To realize segmentation, we propose a hierarchical approach which at each step minimizes a cost function on the space of partitions with connected components of a graph.
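The annealing machinery is generic once the neighborhood structure is abstracted away. Below is a minimal sketch (not the paper's partition-specific algorithms) of Metropolis-style simulated annealing over any discrete space given user-supplied `neighbors` and `cost` functions, which are stand-ins for the paper's two neighborhood types and segmentation cost:

```python
import math
import random

def simulated_annealing(state, neighbors, cost, n_steps=5000,
                        t0=1.0, alpha=0.999, seed=0):
    """Generic Metropolis-style annealing over a discrete state space.
    `neighbors(s)` returns the candidate moves from s; `cost(s)` is the
    function to minimize. Returns the best state visited and its cost."""
    rng = random.Random(seed)
    c = cost(state)
    best, best_c = state, c
    t = t0
    for _ in range(n_steps):
        cand = rng.choice(neighbors(state))
        c_new = cost(cand)
        # accept improvements always; accept worsenings with Boltzmann probability
        if c_new <= c or rng.random() < math.exp((c - c_new) / t):
            state, c = cand, c_new
            if c < best_c:
                best, best_c = state, c
        t *= alpha   # geometric cooling schedule
    return best, best_c
```

The paper's contribution is precisely in designing neighborhood moves on connected-component partitions for which each `cost` evaluation and proposal stays cheap.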
Bayesian Structure From Motion
"... We formulate structure from motion as a Bayesian inference problem, and use a Markov chain Monte Carlo sampler to sample the posterior on this problem. This results in a method that can identify both small and large tracker errors, and yields reconstructions that are stable in the presence of these ..."
Abstract

Cited by 23 (1 self)
 Add to MetaCart
We formulate structure from motion as a Bayesian inference problem, and use a Markov chain Monte Carlo sampler to sample the posterior on this problem. This results in a method that can identify both small and large tracker errors, and yields reconstructions that are stable in the presence of these errors. Furthermore, the method gives detailed information on the range of ambiguities in structure given a particular dataset, and requires no special geometric formulation to cope with degenerate situations. Motion segmentation is obtained by a layer of discrete variables associating a point with an object. We demonstrate a sampler that successfully samples an approximation to the marginal on this domain, producing a relatively unambiguous segmentation.
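The MCMC engine underlying such a sampler can be sketched generically. The toy below is a random-walk Metropolis-Hastings sampler on a one-dimensional log-posterior, whereas the paper samples a far richer structure-from-motion posterior with additional discrete segmentation variables:

```python
import math
import random

def metropolis(log_post, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: propose a Gaussian perturbation and
    accept with probability min(1, posterior ratio). Returns the chain."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_samples):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_post(cand)
        if rng.random() < math.exp(min(0.0, lp_cand - lp)):
            x, lp = cand, lp_cand          # accept the proposal
        chain.append(x)                    # rejected proposals repeat the state
    return chain
```

The spread of the resulting samples is what gives the "detailed information on the range of ambiguities" the abstract describes: ambiguous reconstructions show up as wide or multimodal marginals rather than as a single point estimate.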