Results 1–10 of 861
Normalized Cuts and Image Segmentation
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2000
"... ..."
(Show Context)
Fast approximate energy minimization via graph cuts
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2001
"... In this paper we address the problem of minimizing a large class of energy functions that occur in early vision. The major restriction is that the energy function’s smoothness term must only involve pairs of pixels. We propose two algorithms that use graph cuts to compute a local minimum even when v ..."
Abstract

Cited by 1976 (61 self)
In this paper we address the problem of minimizing a large class of energy functions that occur in early vision. The major restriction is that the energy function's smoothness term must involve only pairs of pixels. We propose two algorithms that use graph cuts to compute a local minimum even when very large moves are allowed. The first move we consider is an α-β swap: for a pair of labels α, β, this move exchanges the labels between an arbitrary set of pixels labeled α and another arbitrary set labeled β. Our first algorithm generates a labeling such that there is no swap move that decreases the energy. The second move we consider is an α-expansion: for a label α, this move assigns an arbitrary set of pixels the label α. Our second algorithm generates a labeling such that there is no expansion move that decreases the energy.
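To make the move space concrete, here is a minimal, illustrative sketch: it evaluates a pairwise (Potts) energy and accepts a random α-β swap only when the energy drops. The paper's algorithms instead find the optimal swap or expansion move exactly via a min-cut computation; the function names and the Potts smoothness term here are assumptions for the sketch, not the authors' code.

```python
import numpy as np

def energy(labels, data_cost, smooth_weight):
    """E(f) = sum_p D_p(f_p) + sum_{p,q} V(f_p, f_q), with a Potts V."""
    h, w = labels.shape
    e = data_cost[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    # Potts smoothness over 4-connected neighbors: penalty where labels differ.
    e += smooth_weight * (labels[:, 1:] != labels[:, :-1]).sum()
    e += smooth_weight * (labels[1:, :] != labels[:-1, :]).sum()
    return e

def try_swap(labels, alpha, beta, data_cost, smooth_weight, rng):
    """Toy alpha-beta swap: flip a random subset of {alpha, beta} pixels and
    keep the result only if the energy decreases. (The paper finds the best
    swap exactly with a graph cut; this greedy variant just shows the move.)"""
    candidate = labels.copy()
    mask = ((labels == alpha) | (labels == beta)) & (rng.random(labels.shape) < 0.5)
    candidate[mask & (labels == alpha)] = beta
    candidate[mask & (labels == beta)] = alpha
    if energy(candidate, data_cost, smooth_weight) < energy(labels, data_cost, smooth_weight):
        return candidate
    return labels

# Usage sketch: labels of shape (h, w), data_cost of shape (h, w, n_labels),
# rng = np.random.default_rng(0).
```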
Scalespace and edge detection using anisotropic diffusion
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1990
"... AbstractThe scalespace technique introduced by Witkin involves generating coarser resolution images by convolving the original image with a Gaussian kernel. This approach has a major drawback: it is difficult to obtain accurately the locations of the “semantically meaningful ” edges at coarse sca ..."
Abstract

Cited by 1727 (1 self)
The scale-space technique introduced by Witkin involves generating coarser resolution images by convolving the original image with a Gaussian kernel. This approach has a major drawback: it is difficult to obtain accurately the locations of the "semantically meaningful" edges at coarse scales. In this paper we suggest a new definition of scale-space, and introduce a class of algorithms that realize it using a diffusion process. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intra-region smoothing in preference to inter-region smoothing. It is shown that the "no new maxima should be generated at coarse scales" property of conventional scale space is preserved. As the region boundaries in our approach remain sharp, we obtain a high quality edge detector which successfully exploits global information. Experimental results are shown on a number of images. The algorithm involves elementary, local operations replicated over the image, making parallel hardware implementations feasible. Index Terms: Adaptive filtering, analog VLSI, edge detection, edge enhancement, nonlinear diffusion, nonlinear filtering, parallel algorithms.
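As a concrete illustration, here is a minimal NumPy sketch of one explicit Perona-Malik diffusion step, using the exponential conductance g(d) = exp(-(d/K)^2) described in the paper; the step size dt and edge threshold K below are assumed values, not taken from the paper.

```python
import numpy as np

def perona_malik_step(img, K=15.0, dt=0.2):
    """One explicit anisotropic-diffusion step.
    The conductance g = exp(-(|grad|/K)^2) suppresses smoothing across
    strong edges, so intra-region smoothing dominates inter-region smoothing."""
    # Nearest-neighbor differences in the four compass directions.
    dN = np.roll(img, -1, axis=0) - img
    dS = np.roll(img,  1, axis=0) - img
    dE = np.roll(img, -1, axis=1) - img
    dW = np.roll(img,  1, axis=1) - img
    g = lambda d: np.exp(-(d / K) ** 2)
    # Sum of edge-damped fluxes; dt <= 0.25 keeps the explicit scheme stable.
    return img + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
```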
Geodesic Active Contours
, 1997
"... A novel scheme for the detection of object boundaries is presented. The technique is based on active contours evolving in time according to intrinsic geometric measures of the image. The evolving contours naturally split and merge, allowing the simultaneous detection of several objects and both in ..."
Abstract

Cited by 1331 (47 self)
A novel scheme for the detection of object boundaries is presented. The technique is based on active contours evolving in time according to intrinsic geometric measures of the image. The evolving contours naturally split and merge, allowing the simultaneous detection of several objects and both interior and exterior boundaries. The proposed approach is based on the relation between active contours and the computation of geodesics, or minimal distance curves. The minimal distance curve lies in a Riemannian space whose metric is defined by the image content. This geodesic approach to object segmentation allows one to connect classical "snakes" based on energy minimization and geometric active contours based on the theory of curve evolution. Previous models of geometric active contours are improved, allowing stable boundary detection when their gradients suffer from large variations, including gaps. Formal results concerning existence, uniqueness, stability, and correctness of the evolution are presented as well. The scheme was implemented using an efficient algorithm for curve evolution. Experimental results of applying the scheme to real images, including objects with holes and medical imagery, demonstrate its power. The results may be extended to 3D object segmentation as well.
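For reference, the level-set form of the geodesic flow this connection leads to is commonly written as below (a standard statement of the model with my notation, not quoted from the paper; the model also admits a constant balloon-type term, omitted here):

```latex
% u: level-set function embedding the contour; g: edge-stopping function of image I
\frac{\partial u}{\partial t}
  = g(I)\,\lvert\nabla u\rvert\,
    \operatorname{div}\!\left(\frac{\nabla u}{\lvert\nabla u\rvert}\right)
  + \nabla g(I)\cdot\nabla u,
\qquad
g(I) = \frac{1}{1 + \lvert\nabla (G_\sigma * I)\rvert^{2}}
```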
Shape modeling with front propagation: A level set approach
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1995
"... Abstract Shape modeling is an important constituent of computer vision as well as computer graphics research. Shape models aid the tasks of object representation and recognition. This paper presents a new approach to shape modeling which retains some of the attractive features of existing methods ..."
Abstract

Cited by 759 (19 self)
Shape modeling is an important constituent of computer vision as well as computer graphics research. Shape models aid the tasks of object representation and recognition. This paper presents a new approach to shape modeling which retains some of the attractive features of existing methods and overcomes some of their limitations. Our techniques can be applied to model arbitrarily complex shapes, which include shapes with significant protrusions, and to situations where no a priori assumption about the object's topology is made. A single instance of our model, when presented with an image having more than one object of interest, has the ability to split freely to represent each object. This method is based on the ideas developed by Osher and Sethian to model propagating solid/liquid interfaces with curvature-dependent speeds. The interface (front) is a closed, nonintersecting hypersurface flowing along its gradient field with constant speed or a speed that depends on the curvature. It is moved by solving a "Hamilton-Jacobi" type equation written for a function in which the interface is a particular level set. A speed term synthesized from the image is used to stop the interface in the vicinity of object boundaries. The resulting equation of motion is solved by employing entropy-satisfying upwind finite difference schemes. We present a variety of ways of computing the evolving front, including narrow bands, reinitializations, and different stopping criteria. The efficacy of the scheme is demonstrated with numerical experiments on some synthesized images and some low contrast medical images. Index Terms: Shape modeling, shape recovery, interface motion, level sets, hyperbolic conservation laws, Hamilton-Jacobi.
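A minimal sketch of one such step, assuming a non-negative speed F of the form 1/(1 + |grad(G*I)|) so the front stalls at edges, with the Godunov upwind approximation of the gradient magnitude; the function name and parameters are mine, and curvature dependence is omitted for brevity.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def level_set_step(psi, image, dt=0.1, sigma=2.0):
    """One upwind step of  psi_t + F |grad psi| = 0  with F >= 0.
    The image-based speed damps to ~0 at strong edges, stopping the front."""
    F = 1.0 / (1.0 + gaussian_gradient_magnitude(image, sigma))
    # Godunov upwind gradient magnitude for an outward-moving front (F >= 0).
    dxm = psi - np.roll(psi,  1, axis=1)   # backward x-difference
    dxp = np.roll(psi, -1, axis=1) - psi   # forward  x-difference
    dym = psi - np.roll(psi,  1, axis=0)
    dyp = np.roll(psi, -1, axis=0) - psi
    grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2
                 + np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
    return psi - dt * F * grad
```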
Region Competition: Unifying Snakes, Region Growing, and Bayes/MDL for Multiband Image Segmentation
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1996
"... We present a novel statistical and variational approach to image segmentation based on a new algorithm named region competition. This algorithm is derived by minimizing a generalized Bayes/MDL criterion using the variational principle. The algorithm is guaranteed to converge to a local minimum and c ..."
Abstract

Cited by 742 (20 self)
We present a novel statistical and variational approach to image segmentation based on a new algorithm named region competition. This algorithm is derived by minimizing a generalized Bayes/MDL criterion using the variational principle. The algorithm is guaranteed to converge to a local minimum and combines aspects of snakes/balloons and region growing. Indeed, the classic snakes/balloons and region growing algorithms can be directly derived from our approach. We provide theoretical analysis of region competition including accuracy of boundary location, criteria for initial conditions, and the relationship to edge detection using filters. It is straightforward to generalize the algorithm to multiband segmentation, and we demonstrate it on grey level images, color images and texture images. The novel color model allows us to eliminate intensity gradients and shadows, thereby obtaining segmentation based on the albedos of objects. It also helps detect highlight regions.
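One common statement of the generalized Bayes/MDL criterion that region competition descends on is sketched below; the notation is mine, not quoted from the paper.

```latex
% R_i: the M regions; alpha_i: parameters of region i's intensity model;
% mu weighs boundary length; lambda is the per-region (MDL) code cost.
E\bigl[\Gamma,\{\alpha_i\}\bigr]
  = \sum_{i=1}^{M}\left[
      \frac{\mu}{2}\oint_{\partial R_i} ds
      \;-\; \iint_{R_i} \log P\bigl(I(x,y)\mid \alpha_i\bigr)\,dx\,dy
      \;+\; \lambda
    \right]
```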
On active contour models and balloons
 CVGIP: Image Understanding
, 1991
"... The use.of energyminimizing curves, known as “snakes, ” to extract features of interest in images has been introduced by Kass, Witkhr & Terzopoulos (Znt. J. Comput. Vision 1, 1987,321331). We present a model of deformation which solves some of the problems encountered with the original method. ..."
Abstract

Cited by 556 (43 self)
The use of energy-minimizing curves, known as "snakes," to extract features of interest in images was introduced by Kass, Witkin & Terzopoulos (Int. J. Comput. Vision 1, 1987, 321–331). We present a model of deformation which solves some of the problems encountered with the original method. The external forces that push the curve to the edges are modified to give more stable results. The original snake, when it is not close enough to contours, is not attracted by them and straightens to a line. Our model makes the curve behave like a balloon which is inflated by an additional force. The initial curve need no longer be close to the solution to converge. The curve passes over weak edges and is stopped only if the edge is strong. We give examples of extracting a ventricle in medical images. We have also made a first step toward 3D object reconstruction, by tracking the extracted contour on a series of successive cross sections. © 1991 Academic Press, Inc.
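A minimal sketch of the balloon idea: a semi-implicit snake iteration with an extra inflation force along the contour normal, so the curve grows toward edges rather than collapsing. The parameter names and the `image_force` callable are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def snake_step(pts, image_force, gamma=1.0, alpha=0.1, beta=0.1, k_balloon=0.5):
    """One semi-implicit snake iteration with an added balloon force.
    pts: (n, 2) closed contour; image_force: callable mapping (n, 2) -> (n, 2)."""
    n = len(pts)
    # Internal energy: alpha * 2nd difference - beta * 4th difference (cyclic).
    d2 = np.roll(np.eye(n), -1, 0) - 2 * np.eye(n) + np.roll(np.eye(n), 1, 0)
    A = alpha * d2 - beta * (d2 @ d2)
    # Normals from the tangent of the closed polygon; orientation depends on
    # the contour's winding, so flip the sign of k_balloon to deflate instead.
    tang = np.roll(pts, -1, 0) - np.roll(pts, 1, 0)
    normals = np.stack([tang[:, 1], -tang[:, 0]], axis=1)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12
    # Balloon force inflates the curve, so it need not start near the boundary.
    forces = image_force(pts) + k_balloon * normals
    # Implicit treatment of the internal terms: (gamma*I - A) x_new = gamma*x + F.
    return np.linalg.solve(gamma * np.eye(n) - A, gamma * pts + forces)
```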
Markov Random Field Models in Computer Vision
, 1994
"... . A variety of computer vision problems can be optimally posed as Bayesian labeling in which the solution of a problem is defined as the maximum a posteriori (MAP) probability estimate of the true labeling. The posterior probability is usually derived from a prior model and a likelihood model. The l ..."
Abstract

Cited by 482 (18 self)
A variety of computer vision problems can be optimally posed as Bayesian labeling, in which the solution of a problem is defined as the maximum a posteriori (MAP) probability estimate of the true labeling. The posterior probability is usually derived from a prior model and a likelihood model. The latter relates to how data is observed and is problem domain dependent. The former depends on how various prior constraints are expressed. Markov random field (MRF) theory is a tool to encode contextual constraints into the prior probability. This paper presents a unified approach for MRF modeling in low and high level computer vision. The unification is made possible due to a recent advance in MRF modeling for high level object recognition. Such unification provides a systematic approach for vision modeling based on sound mathematical principles.
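As a small worked example of MAP-MRF labeling (a classic baseline, not this paper's specific method), here is iterated conditional modes with a Potts prior; the array layout and `beta` are assumed.

```python
import numpy as np

def icm_map(unary, beta=1.0, iters=5):
    """Iterated conditional modes for MAP labeling under a Potts MRF prior.
    unary: (h, w, L) negative log-likelihoods; beta: smoothness strength."""
    labels = unary.argmin(axis=2)          # start from the max-likelihood labeling
    h, w, L = unary.shape
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                cost = unary[y, x].copy()
                # Potts prior: penalize disagreement with each 4-neighbor.
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < h and 0 <= nx < w:
                        cost += beta * (np.arange(L) != labels[ny, nx])
                labels[y, x] = cost.argmin()   # greedy conditional update
    return labels
```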
A comparative study of energy minimization methods for Markov random fields
 IN ECCV
, 2006
"... One of the most exciting advances in early vision has been the development of efficient energy minimization algorithms. Many early vision tasks require labeling each pixel with some quantity such as depth or texture. While many such problems can be elegantly expressed in the language of Markov Ran ..."
Abstract

Cited by 384 (36 self)
One of the most exciting advances in early vision has been the development of efficient energy minimization algorithms. Many early vision tasks require labeling each pixel with some quantity such as depth or texture. While many such problems can be elegantly expressed in the language of Markov Random Fields (MRFs), the resulting energy minimization problems were widely viewed as intractable. Recently, algorithms such as graph cuts and loopy belief propagation (LBP) have proven to be very powerful: for example, such methods form the basis for almost all the top-performing stereo methods. Unfortunately, most papers define their own energy function, which is minimized with a specific algorithm of their choice. As a result, the tradeoffs among different energy minimization algorithms are not well understood. In this paper we describe a set of energy minimization benchmarks, which we use to compare the solution quality and running time of several common energy minimization algorithms. We investigate three promising recent methods (graph cuts, LBP, and tree-reweighted message passing) as well as the well-known older iterated conditional modes (ICM) algorithm. Our benchmark problems are drawn from published energy functions used for stereo, image stitching and interactive segmentation. We also provide a general-purpose software interface that allows vision researchers to easily switch between optimization methods with minimal overhead. We expect that the availability of our benchmarks and interface will make it significantly easier for vision researchers to adopt the best method for their specific problems. Benchmarks, code, results and images are available at …
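A shared interface of the kind the paper advocates might look like the hypothetical Python protocol below; everything here (names, signatures) is illustrative, and the actual benchmark software is a separate C++ library.

```python
from typing import Iterable, Protocol, Tuple
import numpy as np

class EnergyFunction(Protocol):
    """Hypothetical common interface so optimizers are interchangeable."""
    def data_cost(self, p: int, label: int) -> float: ...
    def smoothness_cost(self, p: int, q: int, lp: int, lq: int) -> float: ...

def total_energy(f: np.ndarray, e: EnergyFunction,
                 edges: Iterable[Tuple[int, int]]) -> float:
    """Evaluate E(f) = sum_p D_p(f_p) + sum_{(p,q)} V_pq(f_p, f_q),
    the form shared by all the methods compared in the paper."""
    E = sum(e.data_cost(p, int(f[p])) for p in range(len(f)))
    E += sum(e.smoothness_cost(p, q, int(f[p]), int(f[q])) for p, q in edges)
    return E
```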
A database and evaluation methodology for optical flow
 In Proceedings of the IEEE International Conference on Computer Vision
, 2007
"... The quantitative evaluation of optical flow algorithms by Barron et al. (1994) led to significant advances in performance. The challenges for optical flow algorithms today go beyond the datasets and evaluation methods proposed in that paper. Instead, they center on problems associated with complex n ..."
Abstract

Cited by 373 (23 self)
The quantitative evaluation of optical flow algorithms by Barron et al. (1994) led to significant advances in performance. The challenges for optical flow algorithms today go beyond the datasets and evaluation methods proposed in that paper. Instead, they center on problems associated with complex natural scenes, including nonrigid motion, real sensor noise, and motion discontinuities. We propose a new set of benchmarks and evaluation methods for the next generation of optical flow algorithms. To that end, we contribute four types of data to test different aspects of optical flow algorithms: (1) sequences with nonrigid motion where the ground-truth flow is determined by tracking hidden fluorescent texture, (2) realistic synthetic sequences, (3) high frame-rate video used to study interpolation error, and (4) modified stereo sequences of static scenes. In addition to the average angular error used by Barron et al., we compute the absolute flow endpoint error, measures for frame interpolation error, improved statistics, and results at motion discontinuities and in textureless regions. In October 2007, we published the performance of several well-known methods on a preliminary version of our data to establish the current state of the art. We also made the data freely available on the web at …
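The two headline per-pixel measures mentioned above are easy to state concretely; the sketch below follows the standard definitions (angular error between the space-time direction vectors (u, v, 1), and Euclidean endpoint error), with array layout assumed.

```python
import numpy as np

def flow_errors(uv, uv_gt):
    """Per-pixel angular error (degrees) and endpoint error.
    uv, uv_gt: (..., 2) estimated and ground-truth flow fields."""
    u, v = uv[..., 0], uv[..., 1]
    ug, vg = uv_gt[..., 0], uv_gt[..., 1]
    # Angular error between the 3D direction vectors (u, v, 1) and (ug, vg, 1).
    num = u * ug + v * vg + 1.0
    den = np.sqrt(u**2 + v**2 + 1.0) * np.sqrt(ug**2 + vg**2 + 1.0)
    ae = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    # Endpoint error: Euclidean distance between the flow vectors.
    epe = np.hypot(u - ug, v - vg)
    return ae, epe
```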