Results 1–10 of 71
Overall view regarding fundamental matrix estimation
Image and Vision Computing, 2003
Cited by 33 (3 self)
Epipolar geometry is a key concept in computer vision, and estimating the fundamental matrix is the only way to compute it. This article takes a fresh look at the subject, surveying both classic and recently presented methods of fundamental matrix estimation, which are classified into linear, iterative, and robust methods. All of these methods have been implemented and their accuracy analyzed on synthetic and real images. A summary including experimental results and algorithmic details is given, and the complete code is available on the Internet.
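As a concrete illustration of the linear class of methods the survey covers, here is a minimal sketch of the classical eight-point estimator (our own simplification with NumPy, not the article's released code; it omits the normalization and rank-2 enforcement a practical implementation would add):

```python
import numpy as np

def eight_point(x1, x2):
    """Linear (eight-point) estimate of the fundamental matrix.

    x1, x2: (N, 2) arrays of corresponding image points, N >= 8.
    Each correspondence contributes one row of the homogeneous system
    A f = 0 derived from the epipolar constraint x2^T F x1 = 0.
    """
    u1, v1 = x1[:, 0], x1[:, 1]
    u2, v2 = x2[:, 0], x2[:, 1]
    A = np.column_stack([u2 * u1, u2 * v1, u2,
                         v2 * u1, v2 * v1, v2,
                         u1, v1, np.ones(len(x1))])
    # The solution is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)
```

On noise-free correspondences this recovers F up to scale; with noise, it is exactly the initial estimate the iterative and robust methods in the survey refine.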
Estimation of nonlinear errors-in-variables models for computer vision applications
IEEE Trans. Patt. Anal. Mach. Intell., 2006
Cited by 31 (5 self)
In an errors-in-variables (EIV) model, all the measurements are corrupted by noise. The class of EIV models with constraints separable into the product of two nonlinear functions, one solely in the variables and one solely in the parameters, is general enough to represent most computer vision problems. We show that the estimation of such nonlinear EIV models can be reduced to iteratively estimating a linear model having a point-dependent, i.e., heteroscedastic, noise process. Particular cases of the proposed heteroscedastic errors-in-variables (HEIV) estimator are related to other techniques described in the vision literature: the Sampson method, renormalization, and the fundamental numerical scheme. In a wide variety of tasks, the HEIV estimator exhibits the same or superior performance as these techniques and has a weaker dependence on the quality of the initial solution than the Levenberg-Marquardt method, the standard approach to estimating nonlinear models. Index Terms: nonlinear least squares, heteroscedastic regression, camera calibration, 3D rigid motion, uncalibrated vision.
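The Sampson method named among the related techniques is easy to state concretely. A minimal sketch (our own illustration with NumPy, not the paper's code) of the first-order Sampson approximation to the geometric error of a fundamental matrix:

```python
import numpy as np

def sampson_error(F, x1, x2):
    """First-order (Sampson) approximation to the geometric error of a
    fundamental matrix, per correspondence.

    F: (3, 3) fundamental matrix; x1, x2: (N, 2) corresponding points.
    Returns an (N,) array of squared Sampson distances.
    """
    h1 = np.column_stack([x1, np.ones(len(x1))])
    h2 = np.column_stack([x2, np.ones(len(x2))])
    Fx1 = h1 @ F.T        # epipolar lines F x1 in image 2
    Ftx2 = h2 @ F         # epipolar lines F^T x2 in image 1
    num = np.sum(h2 * Fx1, axis=1) ** 2          # (x2^T F x1)^2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den
```

The gradient normalization in the denominator is what makes the cost heteroscedastic: each correspondence gets its own point-dependent weight, which is the structure the HEIV estimator iterates on.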
From FNS to HEIV: A Link between Two Vision Parameter Estimation Methods
IEEE Trans. Pattern Anal. Mach. Intell., 2004
Cited by 19 (3 self)
Problems requiring accurate determination of parameters from image-based quantities arise often in computer vision. Two recent, independently developed frameworks for estimating such parameters are the FNS and HEIV schemes. Here, it is shown that FNS and a core version of HEIV are essentially equivalent, solving a common underlying equation via different means. The analysis is driven by the search for a non-degenerate form of a certain generalized eigenvalue problem, and effectively leads to a new derivation of the relevant case of the HEIV algorithm. This work may be seen as an extension of previous efforts to rationalize and interrelate a spectrum of estimators, including the renormalization method of Kanatani and the normalized eight-point method of Hartley. Index Terms: statistical methods, maximum likelihood, (un)constrained minimization, fundamental matrix, epipolar equation.
A New Constrained Parameter Estimator For Computer Vision Applications
Cited by 16 (3 self)
Previous work of the authors developed a theoretically well-founded scheme (FNS) for finding the minimiser of a class of cost functions. Various problems in video analysis, stereo vision, ellipse fitting, etc., may be expressed in terms of finding such a minimiser. However, in common with many other approaches, it is necessary to correct the minimiser as a post-process if an ancillary constraint is also to be satisfied. In this paper we develop the first integrated scheme (CFNS) for simultaneously minimising the cost function and satisfying the constraint. Preliminary experiments in the domain of fundamental-matrix estimation show that CFNS generates rank-2 estimates with smaller cost function values than rank-2 corrected FNS estimates. Furthermore, when compared with the Hartley-Zisserman Gold Standard method, CFNS is seen to generate results of comparable quality in a fraction of the time.
Revisiting Hartley's Normalized Eight-Point Algorithm
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003
Cited by 16 (4 self)
Hartley’s eight-point algorithm has maintained an important place in computer vision, notably as a means of providing an initial value of the fundamental matrix for use in iterative estimation methods. In this paper, a novel explanation is given for the improvement in performance of the eight-point algorithm that results from using normalized data. It is first established that the normalized algorithm acts to minimize a specific cost function. It is then shown that this cost function is statistically better founded than the cost function associated with the non-normalized algorithm. This augments the original argument that improved performance is due to the better conditioning of a pivotal matrix. Experimental results are given that support the adopted approach. This work continues a wider effort to place a variety of estimation techniques within a coherent framework. Index Terms: epipolar equation, fundamental matrix, eight-point algorithm, data normalization.
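The data normalization at the heart of the normalized algorithm is short enough to sketch. A minimal NumPy illustration (our own, assuming Hartley's standard isotropic scheme: translate the centroid to the origin, then scale so the mean distance from the origin is √2):

```python
import numpy as np

def hartley_normalize(pts):
    """Isotropic normalization of (N, 2) image points.

    Returns the normalized points and the 3x3 similarity transform T
    that maps homogeneous original points to the normalized ones.
    """
    centroid = pts.mean(axis=0)
    mean_dist = np.linalg.norm(pts - centroid, axis=1).mean()
    s = np.sqrt(2) / mean_dist
    T = np.array([[s, 0.0, -s * centroid[0]],
                  [0.0, s, -s * centroid[1]],
                  [0.0, 0.0, 1.0]])
    h = np.column_stack([pts, np.ones(len(pts))])
    return (h @ T.T)[:, :2], T
```

Applying this to each image before the linear solve, then denormalizing via F = T2ᵀ F̂ T1, is what turns the raw eight-point algorithm into its normalized variant.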
Performance evaluation of iterative geometric fitting algorithms
Comput. Stat. Data Anal., 2007
Cited by 14 (11 self)
The convergence performance of typical numerical schemes for geometric fitting for computer vision applications is compared. First, the problem and the associated KCR lower bound are stated. Then, three well-known fitting algorithms are described: FNS, HEIV, and renormalization. To these, we add a special variant of Gauss-Newton iterations. For initialization of iterations, random choice, least squares, and Taubin’s method are tested. Simulation is conducted for fundamental matrix computation and ellipse fitting, which reveals different characteristics of each method.
Statistical optimization for geometric fitting: Theoretical accuracy analysis and high order error analysis
Int. J. Comput. Vis., 2008
Cited by 14 (8 self)
A rigorous accuracy analysis is given to various techniques for estimating parameters of geometric models from noisy data for computer vision applications. First, it is pointed out that parameter estimation for vision applications is very different in nature from traditional statistical analysis, and hence a different mathematical framework is necessary in such a domain. After general theories on estimation and accuracy are given, typical existing techniques are selected, and their accuracy is evaluated up to higher order terms. This leads to a “hyper-accurate” method that outperforms existing methods.
High accuracy fundamental matrix computation and its performance evaluation
Proc. 17th British Machine Vision Conf. (BMVC 2006), vol. 1, 2006
Cited by 14 (10 self)
We compare the convergence performance of different numerical schemes for computing the fundamental matrix from point correspondences over two images. First, we state the problem and the associated KCR lower bound. Then, we describe the algorithms of three well-known methods: FNS, HEIV, and renormalization, to which we add Gauss-Newton iterations. For initial values, we test random choice, least squares, and Taubin’s method. Experiments using simulated and real images reveal different characteristics of each method. Overall, FNS exhibits the best convergence performance.
Automatic Detection Of Circular Objects By Ellipse Growing
International Journal of Image and Graphics, 2004
Cited by 14 (4 self)
We present a new method for the automatic detection of circular objects in images: we detect an osculating circle to an elliptic arc using a Hough transform, iteratively deforming it into an ellipse, removing outlier pixels, and searching for a separate edge. The voting space for the Hough transform is restricted to one and two dimensions for efficiency, and special weighting schemes are introduced to enhance the accuracy. We demonstrate the effectiveness of our method using real images. Finally, we apply our method to the calibration of a turntable for 3D object shape reconstruction.
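Pipelines like this typically start from a plain algebraic conic fit before any growing or weighting is applied. A minimal sketch of that baseline step (our own illustration with NumPy, not the paper's weighted Hough scheme):

```python
import numpy as np

def fit_conic(pts):
    """Algebraic least-squares fit of a conic
    a x^2 + b xy + c y^2 + d x + e y + f = 0 to (N, 2) points.

    Returns the coefficient vector (a, b, c, d, e, f), unit-norm,
    taken as the right singular vector of the design matrix with
    the smallest singular value.
    """
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones(len(pts))])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]
```

On noisy edge pixels this algebraic residual is biased, which is precisely why the paper adds weighting and outlier removal on top of it.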
High accuracy computation of rank-constrained fundamental matrix by efficient search
Proc. 10th Meeting Image Recog. Understand. (MIRU2007), 2007
Cited by 13 (7 self)
A new method is presented for computing the fundamental matrix from point correspondences: its singular value decomposition (SVD) is optimized by the Levenberg-Marquardt (LM) method. The search is initialized by optimal correction of the unconstrained ML estimate. There is no need for tentative 3D reconstruction. The accuracy achieves the theoretical bound (the KCR lower bound).
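For context, the baseline that rank-constrained searches like this improve on is the a-posteriori SVD projection of an unconstrained estimate onto the rank-2 manifold. A minimal NumPy sketch of that standard projection (our own illustration, not the paper's method):

```python
import numpy as np

def enforce_rank2(F):
    """Project a 3x3 estimate onto the rank-2 manifold by zeroing the
    smallest singular value. This yields the nearest rank-2 matrix in
    Frobenius norm, at distance equal to that singular value.
    """
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0
    return U @ np.diag(s) @ Vt
```

This projection is geometrically suboptimal because it ignores the noise statistics of the correspondences; optimizing directly in the SVD parameterization, as the paper does, keeps the constraint satisfied throughout the search instead.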