Results 1–10 of 40
Fast approximate energy minimization via graph cuts
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2001
Cited by 1384 (52 self)
In this paper we address the problem of minimizing a large class of energy functions that occur in early vision. The major restriction is that the energy function’s smoothness term must involve only pairs of pixels. We propose two algorithms that use graph cuts to compute a local minimum even when very large moves are allowed. The first move we consider is an αβ-swap: for a pair of labels α, β, this move exchanges the labels between an arbitrary set of pixels labeled α and another arbitrary set labeled β. Our first algorithm generates a labeling such that there is no swap move that decreases the energy. The second move we consider is an α-expansion: for a label α, this move assigns an arbitrary set of pixels the label α. Our second ...
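The αβ-swap move described above can be made concrete on a toy instance. The sketch below (the data costs, pixel count, and Potts weight are invented for illustration) brute-forces the best swap move instead of computing it with a max-flow/min-cut as the paper does, which is feasible only for tiny problems:

```python
from itertools import product

# Invented toy instance: 5 pixels in a 1-D chain, 3 labels,
# Potts smoothness term of weight 1 between neighboring pixels.
D = [[0, 2, 2], [0, 2, 2], [2, 2, 0], [2, 2, 0], [2, 0, 2]]  # D[p][l]: data cost

def energy(lab):
    data = sum(D[p][lab[p]] for p in range(len(lab)))
    smooth = sum(lab[i] != lab[i + 1] for i in range(len(lab) - 1))
    return data + smooth

def best_swap(lab, a, b):
    # An alpha-beta swap relabels the pixels currently labeled a or b
    # with any mix of a and b; here we brute-force all such relabelings.
    idx = [p for p, l in enumerate(lab) if l in (a, b)]
    best = list(lab)
    for choice in product((a, b), repeat=len(idx)):
        cand = list(lab)
        for p, l in zip(idx, choice):
            cand[p] = l
        if energy(cand) < energy(best):
            best = cand
    return best

def swap_minimize(lab):
    # Iterate until no swap move decreases the energy -- the stopping
    # criterion the abstract states for the first algorithm.
    changed = True
    while changed:
        changed = False
        for a in range(3):
            for b in range(a + 1, 3):
                cand = best_swap(lab, a, b)
                if energy(cand) < energy(lab):
                    lab, changed = cand, True
    return lab
```

On this made-up instance the search descends from the all-zero labeling (energy 6) to energy 2, after which no swap move helps; in the paper each best swap is found exactly by a single graph cut rather than by enumeration.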
Robust multiresolution estimation of parametric motion models
 Journal of Visual Communication and Image Representation
, 1995
Cited by 274 (48 self)
This paper describes a method to estimate parametric motion models. Motivations for the use of such models are, on one hand, their efficiency, which has been demonstrated in numerous contexts such as estimation, segmentation, tracking, and interpretation of motion, and, on the other hand, their low computational cost compared to optical flow estimation. However, it is important to have the best accuracy for the estimated parameters and to take into account the problem of multiple motions. We have therefore developed two robust estimators in a multiresolution framework. Numerical results support this approach, as validated by the use of these algorithms on complex sequences.
Constructing Simple Stable Descriptions for Image Partitioning
, 1994
Cited by 223 (5 self)
A new formulation of the image partitioning problem is presented: construct a complete and stable description of an image, in terms of a specified descriptive language, that is simplest in the sense of being shortest. We show that a descriptive language limited to a low-order polynomial description of the intensity variation within each region and a chain-code-like description of the region boundaries yields intuitively satisfying partitions for a wide class of images. The advantage of this formulation is that it can be extended to deal with subsequent steps of the image-understanding problem (or to deal with other image attributes, such as texture) in a natural way by augmenting the descriptive language. Experiments performed on a variety of both real and synthetic images demonstrate the superior performance of this approach over partitioning techniques based on clustering vectors of local image attributes and standard edge-detection techniques.
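The "simplest in the sense of being shortest" criterion can be illustrated in one dimension with a loose, invented analogue: piecewise-constant regions instead of low-order polynomials, and a crude bit count instead of the paper's descriptive language. The score below charges bits for each boundary position, for each region parameter, and for the residual fit error:

```python
import math

def description_length(signal, boundaries, bits_per_param=8.0, eps=0.1):
    # Crude MDL-style score (invented for illustration): log2(n) bits per
    # boundary position, a fixed budget per region mean, plus a residual
    # term that grows with the within-region fit error (eps ~ quantization).
    n = len(signal)
    total = (len(boundaries) * math.log2(n)
             + (len(boundaries) + 1) * bits_per_param)
    edges = [0] + list(boundaries) + [n]
    for s, e in zip(edges, edges[1:]):
        seg = signal[s:e]
        mean = sum(seg) / len(seg)
        sse = sum((v - mean) ** 2 for v in seg)
        total += 0.5 * len(seg) * math.log2(1.0 + sse / (len(seg) * eps ** 2))
    return total

signal = [1.0, 1.1, 0.9, 1.0, 5.0, 5.2, 4.9]
```

On this made-up signal, the two-region description (one boundary after sample 4) scores lower than both the single-region description and one with a boundary at every sample, mirroring the trade-off between model complexity and fidelity that the formulation optimizes.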
ML parameter estimation for Markov random fields, with applications to Bayesian tomography
 IEEE Trans. on Image Processing
, 1998
Cited by 49 (18 self)
Markov random fields (MRFs) have been widely used to model images in Bayesian frameworks for image reconstruction and restoration. Typically, these MRF models have parameters that allow the prior model to be adjusted for best performance. However, optimal estimation of these parameters (sometimes referred to as hyperparameters) is difficult in practice for two reasons: 1) direct parameter estimation for MRFs is known to be mathematically and numerically challenging; 2) parameters cannot be directly estimated because the true image cross-section is unavailable. In this paper, we propose a computationally efficient scheme to address both these difficulties for a general class of MRF models, and we derive specific methods of parameter estimation for the MRF model known as a generalized Gaussian MRF (GGMRF). The first section of the paper derives methods of direct estimation of scale and shape parameters for a general continuously valued MRF. For the GGMRF case, we show that the ML estimate of the scale parameter, σ, has a simple closed-form solution, and we present an efficient scheme for computing the ML estimate of the shape parameter, p, by an off-line numerical computation of the dependence of the partition function on p.
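For intuition about the closed-form scale estimate, consider the special case of a 1-D chain GGMRF with unit clique weights (the normalization convention below is an assumption, not the paper's exact statement): with a density proportional to σ^(-N) exp(-u(x/σ)/p) and u(x) = Σ |x_i − x_{i+1}|^p, setting the derivative of the log-likelihood with respect to σ to zero gives σ^p = u(x)/N.

```python
import numpy as np

def ggmrf_sigma_ml(x, p=1.2):
    # 1-D chain GGMRF, unit clique weights (simplified convention):
    # the ML scale estimate has the closed form sigma^p = u(x)/N,
    # where u(x) = sum_i |x_i - x_{i+1}|^p and N is the number of samples.
    x = np.asarray(x, dtype=float)
    u = np.sum(np.abs(np.diff(x)) ** p)
    return (u / x.size) ** (1.0 / p)
```

A quick sanity check on the form: because u is homogeneous of degree p, the estimate is scale-equivariant, i.e. ggmrf_sigma_ml(c * x) = c * ggmrf_sigma_ml(x), as a scale parameter should be.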
An experimental comparison of stereo algorithms
 Vision Algorithms: Theory and Practice, number 1883 in LNCS
, 1999
Cited by 46 (10 self)
While many algorithms for computing stereo correspondence have been proposed, there has been very little work on experimentally evaluating algorithm performance, especially using real (rather than synthetic) imagery. In this paper we propose an experimental comparison of several different stereo algorithms. We use real imagery and explore two different methodologies, with different strengths and weaknesses. Our first methodology is based upon manual computation of dense ground truth. Here we make use of two stereo pairs: one of these, from the University of Tsukuba, contains mostly fronto-parallel surfaces, while the other, which we built, is a simple scene with a slanted surface. Our second methodology uses the notion of prediction error, which is the ability of a disparity map to predict an (unseen) third image, taken from a known camera position with respect to the input pair. We present results for both correlation-style stereo algorithms and techniques based on global methods such as energy minimization. Our experiments suggest that the two methodologies give qualitatively consistent results. Source images and additional materials, such as the implementations of various algorithms, are available on the web from ...
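The prediction-error idea can be sketched with a minimal forward warp. The function below is an invented simplification (nearest-pixel warping, no occlusion reasoning or resampling model): it shifts each reference pixel by its scaled disparity and measures RMS disagreement with the third image wherever the warp lands in-frame.

```python
import numpy as np

def prediction_error(ref, disparity, third, baseline_ratio=1.0):
    # Forward-warp the reference image by the (scaled) disparity and
    # compare with the third image; RMS error over in-frame pixels only.
    h, w = ref.shape
    pred = np.full((h, w), np.nan)
    for y in range(h):
        for x in range(w):
            xt = x + int(round(baseline_ratio * disparity[y, x]))
            if 0 <= xt < w:
                pred[y, xt] = ref[y, x]
    valid = ~np.isnan(pred)
    return float(np.sqrt(np.mean((pred[valid] - third[valid]) ** 2)))
```

A correct disparity map for a purely translated scene yields zero error on the overlap region, so the score evaluates disparity quality without requiring hand-labeled dense ground truth.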
Edge-preserving tomographic reconstruction with nonlocal regularization
 In Proc. IEEE Intl. Conf. on Image Processing
, 2002
Cited by 28 (7 self)
Tomographic image reconstruction using statistical methods can provide more accurate system modeling, statistical models, and physical constraints than the conventional filtered backprojection (FBP) method. Because of the ill-posedness of the reconstruction problem, a roughness penalty is often imposed on the solution to control noise. To avoid smoothing of edges, which are important image attributes, various edge-preserving regularization methods have been proposed. Most of these schemes rely on information from local neighborhoods to determine the presence of edges. In this paper, we propose a cost function that incorporates nonlocal boundary information into the regularization method. We use an alternating minimization algorithm with deterministic annealing to minimize the proposed cost function, jointly estimating region boundaries and object pixel values. We apply variational techniques implemented using level-set methods to update the boundary estimates; then, using the most recent boundary estimate, we minimize a space-variant quadratic cost function to update the image estimate. For the PET transmission reconstruction application, we compare the bias-variance tradeoff of this method with that of a “conventional” penalized-likelihood algorithm with a local Huber roughness penalty.
Inversion of Large-Support Ill-Posed Linear . . .
 IEEE TRANSACTIONS ON IMAGE PROCESSING
, 1998
Cited by 20 (12 self)
We propose a method for the reconstruction of signals and images observed partially through a linear operator with a large support (e.g., a Fourier transform on a sparse set). This inverse problem is ill-posed and we resolve it by incorporating the prior information that the reconstructed objects are composed of smooth regions separated by sharp transitions. This feature is modeled by a piecewise Gaussian (PG) Markov random field (MRF), also known as the weak string in one dimension and the weak membrane in two dimensions. The reconstruction is defined as the maximum a posteriori estimate. The prerequisite ...
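The weak-string model mentioned above can be written as a data term plus a truncated-quadratic smoothness term. The sketch below (parameter names and values are invented) shows why it preserves sharp transitions: a large jump pays a fixed "break" cost alpha instead of an unbounded quadratic penalty.

```python
def weak_string_energy(x, y, lam=1.0, alpha=0.4):
    # Data fidelity plus weak-string smoothness: each neighboring pair
    # either pays lam*(x[i+1]-x[i])^2 or "breaks the string" at cost alpha.
    data = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    smooth = sum(min(lam * (x[i + 1] - x[i]) ** 2, alpha)
                 for i in range(len(x) - 1))
    return data + smooth
```

For the step observation y = [0, 0, 1, 1], reproducing the step exactly costs just the break cost 0.4, while the blurred reconstruction [0, 0.25, 0.75, 1] costs 0.5: the sharp edge is the cheaper explanation, which is the discontinuity-preserving behavior the prior is designed for.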
Phase unwrapping via graph cuts
 IEEE Transactions on Image Processing
, 2007
Cited by 18 (6 self)
Phase unwrapping is the inference of absolute phase from modulo-2π phase. This paper introduces a new energy minimization framework for phase unwrapping. The considered objective functions are first-order Markov random fields. We provide an exact energy minimization algorithm whenever the corresponding clique potentials are convex, namely for the classical phase unwrapping L^p norm with p ≥ 1. Its complexity is K T(n, 3n), where K is the length of the absolute phase domain measured in 2π units and T(n, m) is the complexity of a max-flow computation in a graph with n nodes and m edges. For nonconvex clique potentials, often used owing to their discontinuity-preserving ability, we face an NP-hard problem for which we devise an approximate solution. Both algorithms solve integer optimization problems by computing a sequence of binary optimizations, each one solved by graph cut techniques. Accordingly, we name the two algorithms PUMA, for phase unwrapping max-flow/min-cut. A set of experimental results illustrates the effectiveness of the proposed approach and its competitiveness in comparison with state-of-the-art phase unwrapping algorithms. Index Terms — Phase unwrapping, energy minimization, integer optimization, submodularity, graph cuts, image ...
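The integer optimization at the heart of this formulation can be shown on a toy 1-D example: choose one integer k per sample so that φ = ψ + 2πk minimizes the L^p clique energy. The brute force below (the search range and problem size are mine) enumerates the k vectors directly, whereas PUMA solves problems of this kind through a sequence of binary graph cuts.

```python
import itertools
import math

def unwrap_bruteforce(psi, p=2, kmax=2):
    # phi = psi + 2*pi*k with one integer k per sample; minimize
    # sum_i |phi[i+1] - phi[i]|**p by enumerating all k vectors.
    two_pi = 2.0 * math.pi
    best, best_e = list(psi), float("inf")
    for k in itertools.product(range(-kmax, kmax + 1), repeat=len(psi)):
        phi = [s + two_pi * ki for s, ki in zip(psi, k)]
        e = sum(abs(phi[i + 1] - phi[i]) ** p for i in range(len(phi) - 1))
        if e < best_e:
            best, best_e = phi, e
    return best
```

Wrapping the linear ramp 0, 2, 4, 6, 8 into (−π, π] and unwrapping recovers constant successive differences of 2 (up to a global 2π shift, which leaves the energy unchanged). The enumeration is exponential in the number of samples; avoiding exactly this blow-up is the point of the paper's algorithms.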
A New Algorithm for Energy Minimization with Discontinuities
 In International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition
, 1999
Cited by 16 (1 self)
Many tasks in computer vision involve assigning a label (such as disparity) to every pixel. These tasks can be formulated as energy minimization problems. In this paper, we consider a natural class of energy functions that permits discontinuities. Computing the exact minimum is NP-hard. We have developed a new approximation algorithm based on graph cuts. The solution it generates is guaranteed to be within a factor of 2 of the energy function's global minimum. Our method produces a local minimum with respect to a certain move space. In this move space, a single move is allowed to switch an arbitrary subset of pixels to one common label. If this common label is α then such a move expands the domain of α in the image. At each iteration our algorithm efficiently chooses the expansion move that gives the largest decrease in the energy. We apply our method to the stereo matching problem, and obtain promising experimental results. Empirically, the new technique outperforms our previous algorithm [6] both in terms of running time and output quality.
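The expansion-move local search can be mimicked by brute force on a tiny invented instance (the paper computes each optimal expansion with a single graph cut; enumerating subsets, as below, is viable only for a handful of pixels):

```python
from itertools import product

# Invented toy instance: 5 pixels in a 1-D chain, 3 labels,
# Potts smoothness term of weight 1 between neighboring pixels.
D = [[0, 2, 2], [0, 2, 2], [2, 2, 0], [2, 2, 0], [2, 0, 2]]  # D[p][l]: data cost

def energy(lab):
    return (sum(D[p][lab[p]] for p in range(len(lab)))
            + sum(lab[i] != lab[i + 1] for i in range(len(lab) - 1)))

def best_expansion(lab, a):
    # An alpha-expansion lets an arbitrary subset of pixels switch to
    # the common label a; brute force over all 2^n subsets.
    best = list(lab)
    for mask in product((False, True), repeat=len(lab)):
        cand = [a if m else l for m, l in zip(mask, lab)]
        if energy(cand) < energy(best):
            best = cand
    return best

def expansion_minimize(lab):
    # Sweep the labels, applying any expansion that decreases the
    # energy; stop when no expansion move helps.
    changed = True
    while changed:
        changed = False
        for a in range(3):
            cand = best_expansion(lab, a)
            if energy(cand) < energy(lab):
                lab, changed = cand, True
    return lab
```

Starting from the all-ones labeling (energy 8), this toy instance descends to energy 2, where no expansion move improves: a local minimum in the expansion move space, which the paper proves is within a factor of 2 of the global minimum.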