Results 1-10 of 151
Fast approximate energy minimization via graph cuts
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001
Cited by 1377 (51 self)
Abstract:
In this paper we address the problem of minimizing a large class of energy functions that occur in early vision. The major restriction is that the energy function’s smoothness term must only involve pairs of pixels. We propose two algorithms that use graph cuts to compute a local minimum even when very large moves are allowed. The first move we consider is an α-β swap: for a pair of labels α, β, this move exchanges the labels between an arbitrary set of pixels labeled α and another arbitrary set labeled β. Our first algorithm generates a labeling such that there is no swap move that decreases the energy. The second move we consider is an α-expansion: for a label α, this move assigns an arbitrary set of pixels the label α. Our second ...
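The move space the abstract describes can be illustrated on a toy problem. The sketch below brute-forces the best α-expansion on a 4-pixel chain with a Potts smoothness term; the paper instead finds each optimal expansion move with a single min-cut computation, so treat the exhaustive move enumeration (and the DATA/LAMBDA values) as illustrative assumptions only.

```python
import itertools

# Toy energy: unary data costs plus a Potts smoothness term on a 1-D chain.
DATA = [  # DATA[p][label]: data cost for 4 pixels, 3 labels
    [0, 5, 5],
    [5, 0, 5],
    [4, 1, 5],
    [5, 5, 0],
]
LAMBDA = 2  # Potts smoothness weight

def energy(labels):
    e = sum(DATA[p][l] for p, l in enumerate(labels))
    e += sum(LAMBDA for a, b in zip(labels, labels[1:]) if a != b)
    return e

def best_alpha_expansion(labels, alpha):
    # An alpha-expansion relabels an arbitrary subset of pixels to alpha;
    # on a toy problem we can enumerate all 2^n subsets directly.
    best = list(labels)
    for mask in itertools.product([False, True], repeat=len(labels)):
        cand = [alpha if m else l for m, l in zip(mask, labels)]
        if energy(cand) < energy(best):
            best = cand
    return best

def expansion_minimize(labels, num_labels=3):
    # Cycle over labels until no expansion move decreases the energy.
    changed = True
    while changed:
        changed = False
        for alpha in range(num_labels):
            new = best_alpha_expansion(labels, alpha)
            if new != labels:
                labels, changed = new, True
    return labels

result = expansion_minimize([0, 0, 0, 0])
```

The guarantee mirrored here is the one in the abstract: the loop only stops when no expansion move lowers the energy.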
A taxonomy and evaluation of dense two-frame stereo correspondence algorithms
International Journal of Computer Vision, 2002
Cited by 1023 (20 self)
Abstract:
Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods. Our taxonomy is designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can easily be extended to include new algorithms. We have also produced several new multi-frame stereo data sets with ground truth and are making both the code and data sets available on the Web. Finally, we include a comparative evaluation of a large set of today’s best-performing stereo algorithms.
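As a minimal instance of the local methods such a taxonomy covers, here is a hedged sketch of winner-take-all block matching with a sum-of-absolute-differences cost. The window size, disparity range, and aggregation scheme are arbitrary choices for illustration, not the paper's benchmark settings.

```python
import numpy as np

def block_match_sad(left, right, max_disp, win=1):
    """Winner-take-all stereo: for each left-image pixel, pick the disparity
    minimizing the sum of absolute differences over a (2*win+1)^2 window."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    pad = win
    L = np.pad(left.astype(float), pad, mode='edge')
    R = np.pad(right.astype(float), pad, mode='edge')
    for y in range(h):
        for x in range(w):
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                lw = L[y:y + 2 * pad + 1, x:x + 2 * pad + 1]
                rw = R[y:y + 2 * pad + 1, x - d:x - d + 2 * pad + 1]
                cost = np.abs(lw - rw).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

In the taxonomy's terms this fixes one choice at each stage: absolute-difference matching cost, square-window aggregation, and local winner-take-all optimization with no refinement.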
Bayesian Interpolation
Neural Computation, 1991
Cited by 519 (18 self)
Abstract:
Although Bayesian analysis has been in use since Laplace, the Bayesian method of model comparison has only recently been developed in depth. In this paper, the Bayesian approach to regularisation and model comparison is demonstrated by studying the inference problem of interpolating noisy data. The concepts and methods described are quite general and can be applied to many other problems. Regularising constants are set by examining their posterior probability distribution. Alternative regularisers (priors) and alternative basis sets are objectively compared by evaluating the evidence for them. 'Occam's razor' is automatically embodied by this framework. The way in which Bayes infers the values of regularising constants and noise levels has an elegant interpretation in terms of the effective number of parameters determined by the data set. This framework is due to Gull and Skilling.
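The evidence framework the abstract refers to can be sketched for a linear-in-the-parameters model with Gaussian prior and noise, where the marginal likelihood has a closed form. The function and the polynomial-basis demo below are illustrative assumptions (the variable names, basis, and α grid are mine, not the paper's).

```python
import numpy as np

def log_evidence(y, Phi, alpha, beta):
    """Log marginal likelihood of y under weights ~ N(0, alpha^-1 I) and
    observation noise ~ N(0, beta^-1): marginally, y ~ N(0, C) with
    C = I/beta + Phi Phi^T / alpha."""
    N = len(y)
    C = np.eye(N) / beta + Phi @ Phi.T / alpha
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (N * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))

# Demo: rank candidate regularising constants for a cubic-basis interpolant
# by their evidence, instead of by cross-validation.
x = np.linspace(-1, 1, 25)
rng = np.random.default_rng(1)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(25)
Phi = np.vander(x, 4)  # cubic polynomial basis
alphas = [1e-3, 1e-1, 1e1, 1e3]
best_alpha = max(alphas, key=lambda a: log_evidence(y, Phi, a, beta=100.0))
```

Maximizing this quantity over α is the objective comparison of regularisers the abstract describes; comparing it across basis sets compares models.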
Iterative point matching for registration of free-form curves and surfaces
1994
Cited by 479 (6 self)
Abstract:
A heuristic method has been developed for registering two sets of 3D curves obtained by using an edge-based stereo system, or two dense 3D maps obtained by using a correlation-based stereo system. Geometric matching in general is a difficult unsolved problem in computer vision. Fortunately, in many practical applications, some a priori knowledge exists which considerably simplifies the problem. In visual navigation, for example, the motion between successive positions is usually approximately known. From this initial estimate, our algorithm computes observer motion with very good precision, which is required for environment modeling (e.g., building a Digital Elevation Map). Objects are represented by a set of 3D points, which are considered as the samples of a surface. No constraint is imposed on the form of the objects. The proposed algorithm is based on iteratively matching points in one set to the closest points in the other. A statistical method based on the distance distribution is used to deal with outliers, occlusion, appearance and disappearance, which allows us to do subset-subset matching. A least-squares technique is used to estimate 3D motion from the point correspondences, which reduces the average distance between points in the two sets. Both synthetic and real data have been used to test the algorithm, and the results show that it is efficient and robust, and yields an accurate motion estimate.
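The closest-point iteration the abstract describes can be sketched in a few lines, omitting the statistical outlier handling. This is a minimal sketch under stated assumptions: brute-force nearest neighbors, the SVD (Kabsch) solution for the least-squares rigid fit, and a fixed iteration count instead of a convergence test.

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rotation and translation mapping point set P onto Q
    (Kabsch/SVD solution), with a reflection guard."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0] * (len(H) - 1) + [np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=20):
    """Iteratively match each point in P to its closest point in Q, then
    re-estimate the rigid motion (no outlier rejection in this sketch)."""
    R_total, t_total = np.eye(P.shape[1]), np.zeros(P.shape[1])
    X = P.copy()
    for _ in range(iters):
        # closest-point correspondences (brute force)
        d2 = ((X[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        matches = Q[d2.argmin(1)]
        R, t = rigid_fit(X, matches)
        X = X @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

As in the abstract, the iteration only works from a rough initial estimate: the closest-point matches are reliable when the motion between the two sets is small relative to point spacing.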
Learning low-level vision
International Journal of Computer Vision, 2000
Cited by 466 (25 self)
Abstract:
We show a learning-based method for low-level vision problems. We set up a Markov network of patches of the image and the underlying scene. A factorization approximation allows us to easily learn the parameters of the Markov network from synthetic examples of image/scene pairs, and to efficiently propagate image information. Monte Carlo simulations justify this approximation. We apply this to the "super-resolution" problem (estimating high-frequency details from a low-resolution image), showing good results. For the motion estimation problem, we show resolution of the aperture problem and filling-in arising from application of the same probabilistic machinery.
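Propagating information through such a Markov network is typically done with belief propagation, which is exact on a chain. The sketch below runs the max-product (min-sum) pass on a 1-D chain with quadratic data costs and a Potts smoothness cost; the discrete states, the costs, and the denoising demo are my illustrative assumptions, not the patch-based networks or learned compatibilities of the paper.

```python
import numpy as np

def map_chain(unary, pairwise):
    """Exact MAP labeling of a chain MRF by max-product (min-sum) message
    passing; unary[i, s] and pairwise[s, s'] are negative log-potentials."""
    n, k = unary.shape
    msg = np.zeros((n, k))            # msg[i, s]: best prefix cost ending in s
    back = np.zeros((n, k), dtype=int)
    for i in range(1, n):
        # cost of the best predecessor state for each current state
        total = msg[i - 1][:, None] + unary[i - 1][:, None] + pairwise
        back[i] = total.argmin(0)
        msg[i] = total.min(0)
    labels = np.empty(n, dtype=int)
    labels[-1] = (msg[-1] + unary[-1]).argmin()
    for i in range(n - 1, 0, -1):     # backtrack the optimal labeling
        labels[i - 1] = back[i, labels[i]]
    return labels

# Demo: denoise a 1-D signal with 3 states and a Potts smoothness prior.
obs = np.array([0.0, 0.0, 2.0, 0.0, 0.0])
states = np.arange(3)
unary = (obs[:, None] - states[None, :]) ** 2
smooth = lambda lam: lam * (states[:, None] != states[None, :])
```

A strong smoothness weight suppresses the isolated spike; a weak one keeps it, matching the usual data-versus-prior trade-off.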
Deformable models in medical image analysis: A survey
Medical Image Analysis, 1996
Cited by 452 (7 self)
Abstract:
This article surveys deformable models, a promising and vigorously researched computer-assisted medical image analysis technique. Among model-based techniques, deformable models offer a unique and powerful approach to image analysis that combines geometry, physics, and approximation theory. They have proven to be effective in segmenting, matching, and tracking anatomic structures by exploiting (bottom-up) constraints derived from the image data together with (top-down) a priori knowledge about the location, size, and shape of these structures. Deformable models are capable of accommodating the significant variability of biological structures over time and across different individuals. Furthermore, they support highly intuitive interaction mechanisms that, when necessary, allow medical scientists and practitioners to bring their expertise to bear on the model-based image interpretation task. This article reviews the rapidly expanding body of work on the development and application of deformable models to problems of fundamental importance in medical image analysis, including segmentation, shape representation, matching, and motion tracking.
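A minimal sketch of the physics-based contour evolution the survey covers, assuming only the internal elasticity term and no image force: the contour is relaxed with a semi-implicit Euler step, the standard numerical scheme for snakes. Parameter names and values are illustrative; a real model adds a rigidity term and an image-derived external force.

```python
import numpy as np

def shrink_snake(pts, alpha=1.0, tau=0.1, steps=50):
    """Semi-implicit gradient descent on the membrane (elasticity) energy of
    a closed contour: each step solves (I - tau*alpha*D2) x_new = x_old,
    where D2 is the circulant second-difference operator. With no image
    force the contour smoothly contracts toward its centroid."""
    n = pts.shape[0]
    D2 = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    D2[0, -1] = D2[-1, 0] = 1  # closed contour: wrap around
    A = np.eye(n) - tau * alpha * D2
    x = pts.copy()
    for _ in range(steps):
        x = np.linalg.solve(A, x)  # implicit step: stable for large tau
    return x
```

The implicit solve is what makes large time steps stable, which is why this discretization is preferred over explicit descent for contour models.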
Markov Random Field Models in Computer Vision
1994
Cited by 386 (18 self)
Abstract:
A variety of computer vision problems can be optimally posed as Bayesian labeling in which the solution of a problem is defined as the maximum a posteriori (MAP) probability estimate of the true labeling. The posterior probability is usually derived from a prior model and a likelihood model. The latter relates to how data is observed and is problem domain dependent. The former depends on how various prior constraints are expressed. Markov random field (MRF) theory is a tool to encode contextual constraints into the prior probability. This paper presents a unified approach for MRF modeling in low- and high-level computer vision. The unification is made possible due to a recent advance in MRF modeling for high-level object recognition. Such unification provides a systematic approach for vision modeling based on sound mathematical principles.
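A common baseline for computing the MAP labeling under such an MRF prior is iterated conditional modes (ICM), which greedily minimizes each pixel's local posterior energy in turn. The quadratic data term, Potts prior, and parameter values below are illustrative assumptions, not this paper's specific models.

```python
import numpy as np

def icm_denoise(obs, num_labels, lam=1.0, sweeps=5):
    """Iterated conditional modes: raster-scan the image, setting each pixel
    to the label minimizing a quadratic data term plus a Potts prior over
    its 4-neighborhood. This greedy scheme reaches a local minimum of the
    posterior energy (MAP requires global optimization in general)."""
    labels = obs.copy()
    h, w = obs.shape
    for _ in range(sweeps):
        for y in range(h):
            for x in range(w):
                nbrs = [labels[yy, xx]
                        for yy, xx in ((y - 1, x), (y + 1, x),
                                       (y, x - 1), (y, x + 1))
                        if 0 <= yy < h and 0 <= xx < w]
                costs = [(lab - obs[y, x]) ** 2
                         + lam * sum(lab != n for n in nbrs)
                         for lab in range(num_labels)]
                labels[y, x] = int(np.argmin(costs))
    return labels
```

The Potts neighborhood sum is exactly a contextual constraint encoded in the prior: a pixel pays for disagreeing with its neighbors.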
Kalman Filter-based Algorithms for Estimating Depth from Image Sequences
1989
Cited by 213 (25 self)
Abstract:
Using known camera motion to estimate depth from image sequences is an important problem in robot vision. Many applications of depth-from-motion, including navigation and manipulation, require algorithms that can estimate depth in an online, incremental fashion. This requires a representation that records the uncertainty in depth estimates and a mechanism that integrates new measurements with existing depth estimates to reduce the uncertainty over time. Kalman filtering provides this mechanism. Previous applications of Kalman filtering to depth-from-motion have been limited to estimating depth at the location of a sparse set of features. In this paper, we introduce a new, pixel-based (iconic) algorithm that estimates depth and depth uncertainty at each pixel and incrementally refines these estimates over time. We describe the algorithm and contrast its formulation and performance to that of a feature-based Kalman filtering algorithm. We compare the performance of the two approaches by analyzing their theoretical convergence rates, by conducting quantitative experiments with images of a flat poster, and by conducting qualitative experiments with images of a realistic outdoor-scene model. The results show that the new method is an effective way to extract depth from lateral camera translations. This approach can be extended to incorporate general motion and to integrate other sources of information, such as stereo. The algorithms we have developed, which combine Kalman filtering with iconic descriptions of depth, therefore can serve as a useful and general framework for low-level dynamic vision.
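The per-pixel core of such an iconic filter is the scalar Kalman measurement update: fuse each new depth measurement with the running estimate, weighting by variances, so uncertainty shrinks over time. Names and the constant measurement variance below are illustrative assumptions; the paper's filter also includes a prediction step driven by the known camera motion.

```python
def kalman_update(est, var, meas, meas_var):
    """Scalar Kalman measurement update: blend the prior estimate with a
    new measurement in inverse proportion to their variances."""
    gain = var / (var + meas_var)
    return est + gain * (meas - est), (1 - gain) * var

# Fuse a stream of noisy measurements of a true depth of 2.0,
# starting from a near-uninformative prior.
est, var = 0.0, 1e6
for meas in [2.1, 1.9, 2.05, 1.95]:
    est, var = kalman_update(est, var, meas, meas_var=0.01)
```

This is the "mechanism that integrates new measurements with existing depth estimates" from the abstract, reduced to one pixel.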
Probability Distributions of Optical Flow
Proc. Conf. Computer Vision and Pattern Recognition, 1991
Cited by 176 (16 self)
Abstract:
Gradient methods are widely used in the computation of optical flow. We discuss extensions of these methods which compute probability distributions of optical flow. The use of distributions allows representation of the uncertainties inherent in the optical flow computation, facilitating the combination with information from other sources. We compute distributed optical flow for a synthetic image sequence and demonstrate that the probabilistic model accounts for the errors in the flow estimates. We also compute distributed optical flow for a real image sequence.
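A gradient-method flow estimate and its uncertainty can be sketched as follows: solve the windowed brightness-constancy normal equations, then report a Gaussian uncertainty cov = sigma^2 M^-1 from the structure tensor M. That Gaussian error model is a common choice assumed here for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def flow_with_covariance(Ix, Iy, It, sigma=1.0):
    """Estimate flow for one window from image gradients by solving the
    normal equations M u = -b, and attach a Gaussian uncertainty: flow in
    the gradient direction is well constrained, flow along an edge is not
    (the aperture problem shows up as a large covariance eigenvalue)."""
    M = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = np.array([np.sum(Ix * It), np.sum(Iy * It)])
    u = -np.linalg.solve(M, b)          # least-squares flow (u, v)
    cov = sigma ** 2 * np.linalg.inv(M)  # assumed Gaussian error model
    return u, cov
```

Carrying (u, cov) instead of u alone is what makes downstream fusion with other cues straightforward, as the abstract argues.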
Spline-based image registration
In Proc. IEEE Conference on Computer Vision and Pattern Recognition, 1994