Results 1–10 of 19
Projection Based M-Estimators
, 2009
Abstract

Cited by 11 (3 self)
Random Sample Consensus (RANSAC) is the most widely used robust regression algorithm in computer vision. However, RANSAC has a few drawbacks which make it difficult to use for practical applications. Some of these problems have been addressed through improved sampling algorithms or better cost functions, but an important difficulty still remains. The algorithm is not user independent, and requires knowledge of the scale of the inlier noise. We propose a new robust regression algorithm, the projection based M-estimator (pbM). The pbM algorithm is derived by building a connection to the theory of kernel density estimation, and this leads to an improved cost function which gives better performance. Furthermore, pbM is user independent and does not require any knowledge of the scale of noise corrupting the inliers. We propose a general framework for the pbM algorithm which can handle heteroscedastic data and multiple linear constraints on each data point through the use of Grassmann manifold theory. The performance of pbM is compared with RANSAC and M-Estimator Sample Consensus (MSAC) on various real problems. It is shown that pbM gives better results than RANSAC and MSAC in spite of being user independent.
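The user-supplied noise scale that pbM eliminates is visible in even the simplest RANSAC implementation: the `threshold` argument below is exactly the inlier-scale knob the abstract criticizes. A minimal line-fitting sketch (illustrative only; the names and structure are mine, not from the paper):

```python
import random

def fit_line(p, q):
    # Line through two points as (a, b, c) with a*x + b*y + c = 0, unit normal.
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    n = (a * a + b * b) ** 0.5
    return a / n, b / n, -(a * x1 + b * y1) / n

def ransac_line(points, threshold, iters=500, seed=0):
    # Classic RANSAC: the user MUST choose `threshold`, i.e. the inlier noise scale.
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        p, q = rng.sample(points, 2)
        if p == q:  # guard against duplicate coordinates
            continue
        a, b, c = fit_line(p, q)
        inliers = [(x, y) for x, y in points if abs(a * x + b * y + c) <= threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b, c), inliers
    return best_model, best_inliers
```

Without a good guess for `threshold`, the inlier count, and hence the winning model, becomes unreliable; pbM's kernel-density-based cost removes that parameter entirely.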
Balanced local and global search for non-degenerate two-view epipolar geometry
, 2009
Photo Sequencing
Abstract

Cited by 4 (0 self)
Capturing the highlights of a dynamic event; analyzing/visualizing the dynamic content using still images. ECCV’12
Efficient image retrieval for 3D structures
, 2010
Abstract

Cited by 3 (0 self)
Large scale image retrieval systems for specific objects generally employ visual words together with a ranking based on a geometric relation between the query and target images. Previous work has used planar homographies for this geometric relation. Here we replace the planar transformation by epipolar geometry in order to improve the retrieval performance for 3D structures. To this end, we introduce a new minimal solution for computing the affine fundamental matrix. The solution requires only two corresponding elliptical regions. Unlike previous approaches it does not require the rotation of the image patches, and ensures that the necessary epipolar tangency constraints are satisfied. The solution is well suited for real-time re-ranking in large scale image retrieval, since (i) elliptical correspondences are readily available from the affine region detections, and (ii) the use of only two region correspondences is very efficient in a RANSAC framework, where the number of samples required grows exponentially with sample size. We demonstrate a gain in computational efficiency (over other methods of solution) without a loss in quality of the estimated epipolar geometry. We present a quantitative performance evaluation on the Oxford and Paris image retrieval benchmarks, and demonstrate that retrieval of 3D structures is indeed improved.
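Point (ii) follows from the standard RANSAC trial count: to draw at least one all-inlier sample with confidence p, given inlier ratio w and a solver needing s correspondences, one needs N = log(1 − p) / log(1 − w^s) samples. A small sketch (the specific figures are illustrative, not from the paper):

```python
import math

def ransac_trials(inlier_ratio, sample_size, confidence=0.99):
    """Number of RANSAC samples needed to draw at least one all-inlier
    sample with the given confidence: N = log(1 - p) / log(1 - w**s)."""
    return math.ceil(math.log(1 - confidence) /
                     math.log(1 - inlier_ratio ** sample_size))

# A 2-region solver needs far fewer trials than the classical 7-point solver.
print(ransac_trials(0.5, 2))  # two-region affine-F minimal solution
print(ransac_trials(0.5, 7))  # classical seven-point solver
```

At a 50% inlier ratio and 99% confidence this gives 17 trials for a 2-point solver versus 588 for the 7-point one, which is why a two-correspondence minimal solution suits real-time re-ranking.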
SWIGS: A Swift Guided Sampling Method
 In Proc. IEEE Computer Vision and Pattern Recognition
Abstract

Cited by 3 (1 self)
We present SWIGS, a Swift and efficient Guided Sampling method for robust model estimation from image feature correspondences. Our method leverages the accuracy of our new confidence measure (MR-Rayleigh), which assigns a correctness confidence to a putative correspondence in an online fashion. MR-Rayleigh is inspired by Meta-Recognition (MR), an algorithm that aims to predict when a classifier’s outcome is correct. We demonstrate that by using a Rayleigh distribution, the prediction accuracy of MR can be improved considerably. Our experiments show that MR-Rayleigh tends to predict better than the often-used Lowe’s ratio, Brown’s ratio, and the standard MR under a range of imaging conditions. Furthermore, our homography estimation experiment demonstrates that SWIGS performs similarly or better than other guided sampling methods while requiring fewer iterations, leading to fast and accurate model estimates.
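Lowe’s ratio, one of the baselines the confidence measure is compared against, is simple to state: accept a putative match only when the best descriptor distance is clearly smaller than the runner-up. A sketch (the 0.8 threshold is a commonly used default, not a value taken from the paper):

```python
def lowe_ratio_confidence(distances, ratio=0.8):
    """Accept a putative match when the nearest descriptor distance is
    sufficiently smaller than the second nearest (Lowe's ratio test).
    `distances` holds distances from one query descriptor to all candidates."""
    d = sorted(distances)
    return len(d) >= 2 and d[0] < ratio * d[1]
```

MR-Rayleigh replaces this fixed-ratio rule with an online, per-correspondence confidence.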
EVSAC: Accelerating hypotheses generation by modeling matching scores with extreme value theory
 In: IEEE ICCV
, 2013
Abstract

Cited by 2 (0 self)
Algorithms based on RANSAC that estimate models using feature correspondences between images can slow down tremendously when the percentage of correct correspondences (inliers) is small. In this paper, we present a probabilistic parametric model that allows us to assign confidence values to each matching correspondence and therefore accelerates the generation of hypothesis models for RANSAC under these conditions. Our framework leverages Extreme Value Theory to accurately model the statistics of matching scores produced by a nearest-neighbor feature matcher. Using a new algorithm based on this model, we are able to estimate accurate hypotheses with RANSAC at low inlier ratios significantly faster than previous state-of-the-art approaches, while still performing comparably when the number of inliers is large. We present results of homography and fundamental matrix estimation experiments for both SIFT and SURF matches that demonstrate that our method leads to accurate and fast model estimations.
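As a flavor of extreme-value modeling — an illustrative stand-in, not EVSAC’s actual parametric model — a Gumbel (Type-I extreme value) distribution can be fit to matching scores by the method of moments, and its CDF read off as a confidence value:

```python
import math
import statistics

def fit_gumbel(samples):
    # Method-of-moments Gumbel fit:
    #   beta = s * sqrt(6) / pi,   mu = mean - gamma * beta
    # where gamma is the Euler-Mascheroni constant.
    gamma = 0.5772156649
    s = statistics.stdev(samples)
    beta = s * math.sqrt(6) / math.pi
    mu = statistics.mean(samples) - gamma * beta
    return mu, beta

def gumbel_cdf(x, mu, beta):
    # Probability that a Gumbel-distributed score falls below x.
    return math.exp(-math.exp(-(x - mu) / beta))
```

In the same spirit, a score far in the upper tail of the fitted distribution would be treated as more likely to be a correct match.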
Image Matching Using Photometric Information
Abstract

Cited by 1 (0 self)
Image matching is an essential task in many computer vision applications. It is obvious that thorough utilization of all available information is critical for the success of matching algorithms. However, most popular matching methods do not effectively incorporate photometric data. Some algorithms are based on geometric, color-invariant features, thus completely neglecting available photometric information. Others assume that color does not differ significantly between the two images; that assumption may be wrong when the images are not taken at the same time, for example when a recently taken image is compared with a database. This paper introduces a method for using color information in image matching tasks. Initially the images are segmented using an off-the-shelf segmentation process (EDISON). No assumptions are made on the quality of the segmentation. Then the algorithm employs a model for natural illumination change to define the probability that two segments originate from the same surface. When additional information is supplied (for example, suspected corresponding point features in both images), the probabilities are updated. We show that the probabilities can easily be utilized in any existing image matching system. We propose a technique to make use of them in a SIFT-based algorithm. The technique’s capabilities are demonstrated on real images, where it yields a significant improvement over the original SIFT results in the percentage of correct matches found.
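One common way to model a natural illumination change — shown here as a hedged illustration, not the paper’s exact model — is a diagonal transform: each color channel is scaled by its own gain. Gains estimated from one pair of corresponding segments can then score whether another segment pair is consistent with the same change:

```python
def channel_gains(mean_rgb_a, mean_rgb_b):
    # Diagonal illumination model: each channel is scaled by an independent gain.
    return tuple(b / a for a, b in zip(mean_rgb_a, mean_rgb_b))

def same_surface_score(mean_a, mean_b, gains):
    # Crude consistency score in [0, 1]: 1.0 means mean_b is exactly
    # the per-channel gains applied to mean_a.
    err = sum(abs(a * g - b) for a, g, b in zip(mean_a, gains, mean_b))
    return 1.0 / (1.0 + err)
```

A real system would of course use full segment statistics and a probabilistic update rather than mean colors and an ad-hoc score.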
Space-time tradeoffs in photo sequencing
 In: ICCV, 2013
Abstract

Cited by 1 (0 self)
Photo-sequencing is the problem of recovering the temporal order of a set of still images of a dynamic event, taken asynchronously by a set of uncalibrated cameras. Solving this problem is a first, crucial step for analyzing (or visualizing) the dynamic content of the scene captured by a large number of freely moving spectators. We propose a geometry-based solution to the photo-sequencing problem, followed by rank aggregation. Our algorithm trades spatial certainty for temporal certainty. Whereas the previous solution proposed by [4] relies on two images taken from the same static camera to eliminate uncertainty in space, we drop the static-camera assumption and replace it with temporal information available from images taken from the same (moving) camera. Our method thus overcomes the limitation of the static-camera assumption, and scales much better with the duration of the event and the spread of cameras in space. We present successful results on challenging real data sets and large scale synthetic data (250 images).
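Borda count is one standard rank-aggregation scheme (shown purely as an illustration of the aggregation step; the paper does not necessarily use this particular rule): each partial ordering votes, and items are sorted by total points:

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate several partial orderings of images into one global order.
    Each ranking lists items from earliest to latest; an item earns points
    equal to the number of items ranked after it in that ranking."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] += n - 1 - pos
    return sorted(scores, key=lambda item: -scores[item])
```

Aggregating many noisy pairwise or partial orders in this way is what lets per-camera temporal cues combine into a single global sequence.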
Keywords:
Abstract
In this paper we present a new method to group self-similar SIFT features in images. The aim is to automatically build groups of all SIFT features with the same semantics in an image. To achieve this, a new distance between SIFT feature vectors taking into account their orientation and scale is introduced. The methods are presented in the context of recognition of buildings. A first evaluation shows promising results.
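One plausible form for such a distance — the weights and field names below are my assumptions, not the paper’s definition — combines descriptor distance with a wrapped orientation difference and a log-scale difference:

```python
import math

def augmented_sift_distance(f1, f2, w_desc=1.0, w_ori=0.5, w_scale=0.5):
    """Distance between two SIFT features that also penalizes differences in
    keypoint orientation and scale. Weights are illustrative placeholders."""
    desc = math.dist(f1["desc"], f2["desc"])
    # Wrap the orientation difference into [-pi, pi] before taking its magnitude.
    d_ori = abs((f1["ori"] - f2["ori"] + math.pi) % (2 * math.pi) - math.pi)
    # Log ratio makes the scale penalty symmetric in f1 and f2.
    d_scale = abs(math.log(f1["scale"] / f2["scale"]))
    return w_desc * desc + w_ori * d_ori + w_scale * d_scale
```

Features within a group would then be those whose pairwise augmented distance falls below a clustering threshold.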
Singular Vector Methods for Fundamental Matrix Computation (author manuscript, published in PSIVT, Guanajuato, Mexico, 2013)
, 2013
"... Abstract. The normalized eightpoint algorithm is broadly used for the computation of the fundamental matrix between two images given a set of correspondences. However, it performs poorly for lowsize datasets due to the way in which the ranktwo constraint is imposed on the fundamental matrix. We p ..."
Abstract
The normalized eight-point algorithm is broadly used for the computation of the fundamental matrix between two images given a set of correspondences. However, it performs poorly for low-size datasets due to the way in which the rank-two constraint is imposed on the fundamental matrix. We propose two new algorithms to enforce the rank-two constraint on the fundamental matrix in closed form. The first one restricts the projection on the manifold of fundamental matrices along the most favorable direction with respect to algebraic error. Its complexity is akin to the classical seven-point algorithm. The second algorithm relaxes the search to the best plane with respect to the algebraic error. The minimization of this error amounts to finding the intersection of two bivariate cubic polynomial curves. These methods are based on the minimization of the algebraic error and perform equally well for large datasets. However, we show through synthetic and real experiments that the proposed algorithms compare favorably with the normalized eight-point algorithm for low-size datasets.
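For reference, the classical rank-two projection used by the normalized eight-point algorithm — the step the two proposed algorithms replace — simply zeroes the smallest singular value (a sketch assuming numpy is available):

```python
import numpy as np

def enforce_rank_two(F):
    """Classical rank-two projection: replace F by the nearest (in Frobenius
    norm) rank-two matrix, obtained by zeroing its smallest singular value.
    The paper's methods instead choose the projection direction to reduce
    the algebraic error."""
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0
    return U @ np.diag(s) @ Vt
```

The resulting matrix is singular, as a fundamental matrix must be, but the Frobenius-nearest choice ignores the algebraic error, which is exactly the weakness on small datasets that the paper targets.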