Results 1-10 of 181
Fast approximate energy minimization via graph cuts
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001
Cited by 1485 (54 self)
In this paper we address the problem of minimizing a large class of energy functions that occur in early vision. The major restriction is that the energy function’s smoothness term must only involve pairs of pixels. We propose two algorithms that use graph cuts to compute a local minimum even when very large moves are allowed. The first move we consider is an α-β swap: for a pair of labels α, β, this move exchanges the labels between an arbitrary set of pixels labeled α and another arbitrary set labeled β. Our first algorithm generates a labeling such that there is no swap move that decreases the energy. The second move we consider is an α-expansion: for a label α, this move assigns an arbitrary set of pixels the label α. Our second
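The move space described above can be illustrated with a toy sketch (not the paper's graph-cut construction): the energy is the standard data-plus-smoothness form over pixel pairs, and the α-β swap is found here by brute force over a handful of pixels, where the paper computes it with a single graph cut.

```python
from itertools import product

def energy(labels, data_cost, smooth_cost):
    # E(f) = sum_p D_p(f_p) + sum over neighbor pairs of V(f_p, f_q),
    # evaluated on a 1-D chain of "pixels".
    e = sum(data_cost[p][l] for p, l in enumerate(labels))
    e += sum(smooth_cost(labels[p], labels[p + 1]) for p in range(len(labels) - 1))
    return e

def potts(a, b, lam=1.0):
    # Potts smoothness term: a fixed penalty whenever neighbors disagree.
    return 0.0 if a == b else lam

def alpha_beta_swap(labels, a, b, data_cost, smooth_cost):
    # Exchange labels among pixels currently labeled a or b; brute force
    # over all reassignments (exponential -- for illustration only).
    idx = [p for p, l in enumerate(labels) if l in (a, b)]
    best = list(labels)
    for choice in product((a, b), repeat=len(idx)):
        cand = list(labels)
        for p, l in zip(idx, choice):
            cand[p] = l
        if energy(cand, data_cost, smooth_cost) < energy(best, data_cost, smooth_cost):
            best = cand
    return best

# Toy problem: data costs favor labels 0, 0, 1, 1; start from a bad labeling.
data_cost = [[0, 2], [0, 2], [2, 0], [2, 0]]
labels = alpha_beta_swap([0, 1, 0, 1], 0, 1, data_cost, potts)
```

The resulting labeling has no swap move that decreases the energy, which is exactly the guarantee the first algorithm provides.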
Non-Uniform Random Variate Generation
1986
Cited by 716 (21 self)
Abstract. This is a survey of the main methods in non-uniform random variate generation, and highlights recent research on the subject. Classical paradigms such as inversion, rejection, guide tables, and transformations are reviewed. We provide information on the expected time complexity of various algorithms, before addressing modern topics such as indirectly specified distributions, random processes, and Markov chain methods.
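Two of the classical paradigms the survey reviews, inversion and rejection, can be sketched in a few lines (function names are ours; the acceptance rule is the textbook exponential-proposal scheme for the half-normal):

```python
import math, random

def exp_by_inversion(lam, u=None):
    # Inversion: F(x) = 1 - exp(-lam * x), so F^{-1}(u) = -log(1 - u) / lam
    # maps a Uniform(0,1) draw to an Exponential(lam) draw.
    u = random.random() if u is None else u
    return -math.log(1.0 - u) / lam

def halfnormal_by_rejection(rng):
    # Rejection: target f(x) proportional to exp(-x**2 / 2) on x >= 0, with an
    # Exponential(1) proposal; the acceptance probability works out to
    # exp(-(x - 1)**2 / 2), and the expected number of trials is sqrt(2e/pi).
    while True:
        x = -math.log(1.0 - rng.random())      # Exponential(1) via inversion
        if rng.random() <= math.exp(-(x - 1.0) ** 2 / 2.0):
            return x
```

For instance, `exp_by_inversion(1.0, 0.5)` returns log 2, the median of the unit exponential.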
Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm
IEEE Transactions on Medical Imaging, 2001
Cited by 320 (13 self)
Abstract: The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogram-based model, the FM has an intrinsic limitation: no spatial information is taken into account. This causes the FM model to work only on well-defined images with low levels of noise; unfortunately, this is often not the case due to artifacts such as the partial volume effect and bias field distortion. Under these conditions, FM model-based methods produce unreliable results. In this paper, we propose a novel hidden Markov random field (HMRF) model, which is a stochastic process generated by an MRF whose state sequence cannot be observed directly but can be indirectly estimated through observations. Mathematically, it can be shown that the FM model is a degenerate version of the HMRF model. The advantage of the HMRF model derives from the way in which the spatial information is encoded through the mutual influences of neighboring sites. Although MRF modeling has been employed in MR image segmentation by other researchers, most reported methods are limited to using MRF as a general prior in an FM model-based approach. To fit the HMRF model, an EM algorithm is used. We show that by incorporating both the HMRF model and the EM algorithm into an HMRF-EM framework, an accurate and robust segmentation can be achieved. More importantly, the HMRF-EM framework can easily be combined with other techniques. As an example, we show how the bias field correction algorithm of Guillemaud and Brady (1997) can be incorporated into this framework to achieve a three-dimensional fully automated approach for brain MR image segmentation. Index Terms: bias field correction, expectation-maximization, hidden Markov random field, MRI, segmentation.
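The finite mixture baseline that the HMRF model generalizes can be sketched as a plain two-component Gaussian EM fit in one dimension (no spatial coupling between sites; initialization and names are illustrative, not the paper's):

```python
import math, random

def em_gmm_1d(xs, iters=50):
    # EM for a two-component 1-D Gaussian finite mixture: the histogram-based
    # FM baseline. The paper's HMRF-EM additionally couples neighboring sites.
    mu = [min(xs), max(xs)]              # crude but order-preserving init
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each observation.
        resp = []
        for x in xs:
            w = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = w[0] + w[1]
            resp.append([w[0] / s, w[1] / s])
        # M-step: reestimate mixing weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return mu, var, pi
```

Because each observation is treated independently given its class, this baseline shows exactly the limitation the abstract names: no spatial information enters the E-step.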
A comparative study of energy minimization methods for Markov random fields
In ECCV, 2006
Cited by 264 (26 self)
One of the most exciting advances in early vision has been the development of efficient energy minimization algorithms. Many early vision tasks require labeling each pixel with some quantity such as depth or texture. While many such problems can be elegantly expressed in the language of Markov Random Fields (MRFs), the resulting energy minimization problems were widely viewed as intractable. Recently, algorithms such as graph cuts and loopy belief propagation (LBP) have proven to be very powerful: for example, such methods form the basis for almost all the top-performing stereo methods. Unfortunately, most papers define their own energy function, which is minimized with a specific algorithm of their choice. As a result, the trade-offs among different energy minimization algorithms are not well understood. In this paper we describe a set of energy minimization benchmarks, which we use to compare the solution quality and running time of several common energy minimization algorithms. We investigate three promising recent methods (graph cuts, LBP, and tree-reweighted message passing) as well as the well-known older iterated conditional modes (ICM) algorithm. Our benchmark problems are drawn from published energy functions used for stereo, image stitching, and interactive segmentation. We also provide a general-purpose software interface that allows vision researchers to easily switch between optimization methods with minimal overhead. We expect that the availability of our benchmarks and interface will make it significantly easier for vision researchers to adopt the best method for their specific problems. Benchmarks, code, results and images are available at
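Of the methods compared, ICM is the simplest to sketch. The following toy version runs on a 1-D chain with a Potts smoothness term (a simplification of the benchmark setting, which uses 2-D grids and richer energies):

```python
def icm(labels, data_cost, lam=1.0, sweeps=10):
    # Iterated conditional modes: repeatedly set each pixel to the label that
    # minimizes its own data cost plus a Potts penalty for disagreeing with
    # its chain neighbors; stop when a full sweep changes nothing.
    n, num_labels = len(labels), len(data_cost[0])
    f = list(labels)
    for _ in range(sweeps):
        changed = False
        for p in range(n):
            def local_cost(l):
                c = data_cost[p][l]
                if p > 0:
                    c += lam * (l != f[p - 1])
                if p < n - 1:
                    c += lam * (l != f[p + 1])
                return c
            best = min(range(num_labels), key=local_cost)
            if best != f[p]:
                f[p], changed = best, True
        if not changed:
            break
    return f
```

ICM only ever moves one pixel at a time, which is why it tends to get trapped in poor local minima compared with the large-move methods in the benchmark.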
The State of Record Linkage and Current Research Problems
Statistical Research Division, U.S. Census Bureau, 1999
Cited by 238 (7 self)
This paper provides an overview of methods and systems developed for record linkage. Modern record linkage begins with the pioneering work of Newcombe and is especially based on the formal mathematical model of Fellegi and Sunter. In their seminal work, Fellegi and Sunter introduced many powerful ideas for estimating record linkage parameters, along with other ideas that still influence record linkage today. Record linkage research is characterized by its synergism of statistics, computer science, and operations research. Many difficult algorithms have been developed and put into software systems. Record linkage practice is still very limited. Some limits are due to existing software; others are due to the difficulty of automatically estimating matching parameters and error rates, with current research highlighted by the work of Larsen and Rubin. Keywords: computer matching, modeling, iterative fitting, string comparison, optimization
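The Fellegi-Sunter model scores a candidate record pair by summing a log-likelihood ratio over comparison fields; a minimal sketch, with made-up m and u probabilities:

```python
import math

def match_weight(agreements, m, u):
    # Fellegi-Sunter composite weight: for each field, add log2(m/u) when it
    # agrees and log2((1 - m)/(1 - u)) when it disagrees, where m and u are
    # the agreement probabilities among true matches and true non-matches.
    w = 0.0
    for agree, mk, uk in zip(agreements, m, u):
        w += math.log2(mk / uk) if agree else math.log2((1 - mk) / (1 - uk))
    return w

# Illustrative (made-up) parameters for three fields, e.g. surname, birth
# year, and postal code; a high weight is strong evidence for a match.
m = [0.95, 0.90, 0.85]
u = [0.01, 0.05, 0.10]
```

Pairs whose weight falls between upper and lower thresholds are sent to clerical review; estimating m, u, and the error rates automatically is the hard part the abstract attributes to Larsen and Rubin's line of research.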
Asymptotics for Lasso-type estimators
2000
Cited by 154 (3 self)
In this paper, we consider the asymptotic behaviour of regression estimators that minimize the residual sum of squares plus a penalty proportional to
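In the orthonormal-design special case, the Lasso-type estimator the abstract describes has a closed form per coordinate, the soft-thresholding operator; a minimal sketch:

```python
def soft_threshold(z, t):
    # Minimizer in b of 0.5 * (z - b)**2 + t * abs(b): shrink z toward zero
    # by t, and set it exactly to zero when abs(z) <= t.
    return z - t if z > t else z + t if z < -t else 0.0
```

The exact zeros produced by this shrink-or-kill rule are what distinguish the L1 penalty from a ridge penalty, and they are the source of the nonstandard limiting behaviour the paper studies.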
Efficient Graph-Based Energy Minimization Methods in Computer Vision
1999
Cited by 87 (5 self)
ms (we show that exact minimization is NP-hard in these cases). These algorithms produce a local minimum in interesting large move spaces. Furthermore, one of them finds a solution within a known factor of the optimum. The algorithms are iterative and compute several graph cuts at each iteration. The running time at each iteration is effectively linear due to the special graph structure. In practice it takes just a few iterations to converge. Moreover, most of the progress happens during the first iteration. For a certain piecewise constant prior we adapt the algorithms developed for the piecewise smooth prior. One of them finds a solution within a factor of two from the optimum. In addition we develop a third algorithm which finds a local minimum in yet another move space. We demonstrate the effectiveness of our approach on image restoration, stereo, and motion. For the data with ground truth, our methods significantly outperform standard methods.
On the Convergence of Monte Carlo Maximum Likelihood Calculations
Journal of the Royal Statistical Society B, 1992
Cited by 67 (5 self)
Monte Carlo maximum likelihood for normalized families of distributions (Geyer and Thompson, 1992) can be used for an extremely broad class of models. Given any family {h_θ : θ ∈ Θ} of nonnegative integrable functions, maximum likelihood estimates in the family obtained by normalizing the functions to integrate to one can be approximated by Monte Carlo, the only regularity conditions being a compactification of the parameter space such that the evaluation maps θ ↦ h_θ(x) remain continuous. Then with probability one the Monte Carlo approximant to the log likelihood hypoconverges to the exact log likelihood, its maximizer converges to the exact maximum likelihood estimate, approximations to profile likelihoods hypoconverge to the exact profile, and level sets of the approximate likelihood (support regions) converge to the exact sets (in Painlevé-Kuratowski set convergence). The same results hold when there are missing data (Thompson and Guo, 1991; Gelfand and Carlin, 19...
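The idea of approximating the unknown normalizing constant by Monte Carlo can be sketched on a family where the answer is known in closed form (our choice of h_θ is illustrative, not from the paper):

```python
import math, random

def mc_log_ratio(theta, psi, n=200000, seed=0):
    # Monte Carlo estimate of log(c(theta)/c(psi)) for the unnormalized
    # family h_theta(x) = exp(theta*x - x**2/2). Normalized, this family is
    # N(theta, 1), so the reference samples are plain Gaussian draws at psi,
    # and c(theta)/c(psi) = E_psi[h_theta(X)/h_psi(X)].
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(psi, 1.0)
        total += math.exp((theta - psi) * x)   # importance ratio h_theta/h_psi
    return math.log(total / n)

# Exact answer for this family: log c(theta) - log c(psi) = (theta**2 - psi**2) / 2,
# so the estimated log likelihood theta*x - log c(theta) can be checked directly.
```

In a real application c(θ) is intractable and the samples come from an MCMC run at a fixed reference point ψ; the convergence results in the abstract are what justify maximizing this approximant in place of the exact log likelihood.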
Hidden Markov models and disease mapping
Journal of the American Statistical Association, 2001
Cited by 61 (4 self)
We present new methodology to extend hidden Markov models to the spatial domain, and use this class of models to analyse spatial heterogeneity of count data on a rare phenomenon. This situation occurs commonly in many domains of application, particularly in disease mapping. We assume that the counts follow a Poisson model at the lowest level of the hierarchy, and introduce a finite mixture model for the Poisson rates at the next level. The novelty lies in the model for allocation to the mixture components, which follows a spatially correlated process, the Potts model, and in treating the number of components of the spatial mixture as unknown. Inference is performed in a Bayesian framework using reversible jump MCMC. The model introduced can be viewed as a Bayesian semiparametric approach to specifying a flexible spatial distribution in hierarchical models. Performance of the model and comparison with an alternative well-known Markov random field specification for the Poisson rates are demonstrated on synthetic data sets. We show that our allocation model avoids the problem of oversmoothing in cases where the underlying rates exhibit discontinuities, while giving equally good results in cases of smooth gradient-like or highly autocorrelated rates. The methodology is illustrated on an epidemiological application to data on a rare cancer in France.
Ice floe identification in satellite images using mathematical morphology and clustering about principal curves
JASA, 1992
Cited by 50 (5 self)
Identification of ice floes and their outlines in satellite images is important for understanding physical processes in the polar regions, for transportation in ice-covered seas, and for the design of offshore structures intended to survive in the presence of ice. At present this is done manually, a long and tedious process which precludes full use of the great volume of relevant images now available. We describe an automatic and accurate method for identifying ice floes and their outlines. Floe outlines are modeled as closed principal curves (Hastie and Stuetzle, 1989), a flexible class of smooth nonparametric curves. We propose a robust method of estimating closed principal curves which reduces both bias and variance. Initial estimates of floe outlines come from the erosion-propagation (EP) algorithm, which combines erosion from mathematical morphology with local propagation of information about floe edges. The edge pixels from the EP algorithm are grouped into floe outlines
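The erosion step from mathematical morphology that the EP algorithm builds on can be sketched as plain binary erosion with a 3x3 structuring element (a generic textbook version, not the authors' implementation):

```python
def erode(img):
    # Binary erosion with a 3x3 structuring element: an interior pixel
    # survives only if it and all eight neighbors are set, so each pass
    # peels one pixel off every floe boundary.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = int(all(img[i + di][j + dj]
                                for di in (-1, 0, 1) for dj in (-1, 0, 1)))
    return out
```

Repeated erosion separates touching floes; the pixels removed at each pass are candidates for the edge information that the EP algorithm propagates locally.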