Results 1–10 of 45
Overall view regarding fundamental matrix estimation
Image and Vision Computing, 2003
Cited by 29 (3 self)
Abstract
Epipolar geometry is a key concept in computer vision, and fundamental matrix estimation is the only way to compute it. This article takes a fresh look at the subject, reviewing classic and recently presented methods of fundamental matrix estimation, classified into linear methods, iterative methods, and robust methods. All of these methods have been implemented and their accuracy analyzed on synthetic and real images. A summary including experimental results and algorithmic details is given, and the complete code is available on the Internet.
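The linear methods this survey classifies all reduce to solving the epipolar constraint x′ᵀFx = 0 in least squares over the nine entries of F. As an illustrative sketch (not the authors' published code), a minimal unnormalized eight-point solve in NumPy:

```python
import numpy as np

def eight_point(x1, x2):
    """Linear (unnormalized) eight-point estimate of the fundamental matrix.

    x1, x2: (N, 2) arrays of corresponding points, N >= 8.
    Solves the epipolar constraint x2^T F x1 = 0 in least squares.
    """
    n = x1.shape[0]
    A = np.zeros((n, 9))
    for i in range(n):
        u, v = x1[i]
        up, vp = x2[i]
        # Each row encodes x2^T F x1 = 0 with F flattened row-major.
        A[i] = [up * u, up * v, up, vp * u, vp * v, vp, u, v, 1.0]
    # The smallest right singular vector of A minimizes ||A f|| s.t. ||f|| = 1.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)
```

In practice this linear estimate is used as an initializer for the iterative and robust methods the article goes on to review.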
Estimation of nonlinear errors-in-variables models for computer vision applications
IEEE Trans. Patt. Anal. Mach. Intell., 2006
Cited by 23 (4 self)
Abstract
In an errors-in-variables (EIV) model, all the measurements are corrupted by noise. The class of EIV models with constraints separable into the product of two nonlinear functions, one solely in the variables and one solely in the parameters, is general enough to represent most computer vision problems. We show that the estimation of such nonlinear EIV models can be reduced to iteratively estimating a linear model having a point-dependent, i.e., heteroscedastic, noise process. Particular cases of the proposed heteroscedastic errors-in-variables (HEIV) estimator are related to other techniques described in the vision literature: the Sampson method, renormalization, and the fundamental numerical scheme. In a wide variety of tasks, the HEIV estimator exhibits the same, or superior, performance as these techniques and has a weaker dependence on the quality of the initial solution than the Levenberg-Marquardt method, the standard approach toward estimating nonlinear models. Index Terms—Nonlinear least squares, heteroscedastic regression, camera calibration, 3D rigid motion, uncalibrated vision.
From FNS to HEIV: A Link between Two Vision Parameter Estimation Methods
IEEE Trans. Pattern Anal. Mach. Intell., 2004
Cited by 15 (3 self)
Abstract
Problems requiring accurate determination of parameters from image-based quantities arise often in computer vision. Two recent, independently developed frameworks for estimating such parameters are the FNS and HEIV schemes. Here, it is shown that FNS and a core version of HEIV are essentially equivalent, solving a common underlying equation via different means. The analysis is driven by the search for a non-degenerate form of a certain generalized eigenvalue problem, and effectively leads to a new derivation of the relevant case of the HEIV algorithm. This work may be seen as an extension of previous efforts to rationalize and interrelate a spectrum of estimators, including the renormalization method of Kanatani and the normalized eight-point method of Hartley. Index Terms—Statistical methods, maximum likelihood, (un)constrained minimization, fundamental matrix, epipolar equation
Statistical optimization for geometric fitting: Theoretical accuracy analysis and high order error analysis
Int. J. Comput. Vis., 2008
Cited by 13 (8 self)
Abstract
A rigorous accuracy analysis is given of various techniques for estimating parameters of geometric models from noisy data for computer vision applications. First, it is pointed out that parameter estimation for vision applications is very different in nature from traditional statistical analysis, and hence a different mathematical framework is necessary in such a domain. After general theories on estimation and accuracy are given, typical existing techniques are selected, and their accuracy is evaluated up to higher-order terms. This leads to a “hyper-accurate” method that outperforms existing methods.
A New Constrained Parameter Estimator For Computer Vision Applications
Cited by 13 (3 self)
Abstract
Previous work of the authors developed a theoretically well-founded scheme (FNS) for finding the minimiser of a class of cost functions. Various problems in video analysis, stereo vision, ellipse fitting, etc., may be expressed in terms of finding such a minimiser. However, in common with many other approaches, it is necessary to correct the minimiser as a post-process if an ancillary constraint is also to be satisfied. In this paper we develop the first integrated scheme (CFNS) for simultaneously minimising the cost function and satisfying the constraint. Preliminary experiments in the domain of fundamental-matrix estimation show that CFNS generates rank-2 estimates with smaller cost-function values than rank-2-corrected FNS estimates. Furthermore, when compared with the Hartley–Zisserman Gold Standard method, CFNS is seen to generate results of comparable quality in a fraction of the time.
High accuracy fundamental matrix computation and its performance evaluation
Proc. 17th British Machine Vision Conf. (BMVC 2006), vol. 1, 2006
Cited by 12 (8 self)
Abstract
We compare the convergence performance of different numerical schemes for computing the fundamental matrix from point correspondences over two images. First, we state the problem and the associated KCR lower bound. Then, we describe the algorithms of three well-known methods: FNS, HEIV, and renormalization, to which we add Gauss-Newton iterations. For initial values, we test random choice, least squares, and Taubin’s method. Experiments using simulated and real images reveal different characteristics of each method. Overall, FNS exhibits the best convergence performance.
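The iterative schemes compared here (FNS, HEIV, renormalization, Gauss-Newton) all target a statistically motivated cost rather than the raw algebraic residual; a common such cost is the Sampson (first-order geometric) distance. A minimal illustrative implementation, not taken from the paper:

```python
import numpy as np

def sampson_distance(F, x1, x2):
    """First-order (Sampson) approximation to the geometric reprojection
    error of a point pair under a fundamental matrix F.

    F: (3, 3) matrix; x1, x2: length-2 image points.
    """
    p1 = np.append(x1, 1.0)
    p2 = np.append(x2, 1.0)
    r = p2 @ F @ p1                      # algebraic epipolar residual
    l1 = F @ p1                          # epipolar line in image 2
    l2 = F.T @ p2                        # epipolar line in image 1
    denom = l1[0]**2 + l1[1]**2 + l2[0]**2 + l2[1]**2
    return r * r / denom
```

Summing this distance over all correspondences gives the cost whose minimization the compared schemes approximate in different ways.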
What value covariance information in estimating vision parameters?
In Eighth IEEE International Conference on Computer Vision (ICCV 2001), 2001
Cited by 11 (3 self)
Abstract
Many parameter estimation methods used in computer vision are able to utilise covariance information describing the uncertainty of data measurements. This paper considers the value of this information to the estimation process when applied to measured image point locations. Covariance matrices are first described, and a procedure is then outlined whereby covariances may be associated with image features located via a measurement process. An empirical study is made of the conditions under which covariance information enables generation of improved parameter estimates. Also explored is the extent to which the noise should be anisotropic and inhomogeneous if improvements are to be obtained over covariance-free methods. Critical in this is the devising of synthetic experiments under which noise conditions can be precisely controlled. Given that covariance information is, in itself, subject to estimation error, tests are also undertaken to determine the impact of imprecise covariance information upon the quality of parameter estimates. Finally, an experiment is carried out to assess the value of covariances in estimating the fundamental matrix from real images.
Automatic detection of circular objects by ellipse growing
 Int. J. Image Graphics
Cited by 11 (2 self)
Abstract
We present a new method for automatically detecting circular objects in images: we detect an osculating circle to an elliptic arc using a Hough transform, iteratively deforming it into an ellipse, removing outlier pixels, and searching for a separate edge. The voting space for the Hough transform is restricted to one and two dimensions for efficiency, and special weighting schemes are introduced to enhance the accuracy. We demonstrate the effectiveness of our method using real images. Finally, we apply our method to the calibration of a turntable for 3D object shape reconstruction.
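The method seeds ellipse growing with an osculating circle found by Hough voting. As a simplified illustrative stand-in for that seeding step (the paper's Hough procedure is more elaborate), an algebraic Kåsa least-squares circle fit to edge pixels:

```python
import numpy as np

def fit_circle_kasa(pts):
    """Algebraic (Kasa) least-squares circle fit.

    pts: (N, 2) edge-pixel coordinates. Returns (center, radius).
    Fits x^2 + y^2 + D x + E y + F = 0 in least squares; the center is
    (-D/2, -E/2) and the radius sqrt(D^2/4 + E^2/4 - F).
    """
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    center = np.array([-D / 2.0, -E / 2.0])
    radius = np.sqrt(center @ center - F)
    return center, radius
```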
Revisiting Hartley's Normalized Eight-Point Algorithm
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003
Cited by 11 (4 self)
Abstract
Hartley’s eight-point algorithm has maintained an important place in computer vision, notably as a means of providing an initial value of the fundamental matrix for use in iterative estimation methods. In this paper, a novel explanation is given for the improvement in performance of the eight-point algorithm that results from using normalized data. It is first established that the normalized algorithm acts to minimize a specific cost function. It is then shown that this cost function is statistically better founded than the cost function associated with the non-normalized algorithm. This augments the original argument that improved performance is due to the better conditioning of a pivotal matrix. Experimental results are given that support the adopted approach. This work continues a wider effort to place a variety of estimation techniques within a coherent framework. Index Terms—Epipolar equation, fundamental matrix, eight-point algorithm, data normalization
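The normalization step the paper analyzes is a similarity transform per image: translate the points so their centroid is the origin, then scale so their mean distance from it is √2. An illustrative sketch:

```python
import numpy as np

def hartley_normalization(pts):
    """Similarity transform T mapping points so their centroid is the
    origin and their mean distance from it is sqrt(2).

    pts: (N, 2). Returns (T, pts_normalized), where T is a 3x3 matrix
    acting on homogeneous coordinates.
    """
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1).mean()
    s = np.sqrt(2.0) / d
    T = np.array([[s, 0.0, -s * centroid[0]],
                  [0.0, s, -s * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_n = (pts - centroid) * s
    return T, pts_n
```

In the normalized eight-point algorithm one estimates F̂ from the transformed correspondences and then denormalizes via F = T₂ᵀ F̂ T₁.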
High accuracy computation of rank-constrained fundamental matrix by efficient search
Proc. 10th Meeting Image Recog. Understand. (MIRU 2007), 2007
Cited by 8 (5 self)
Abstract
A new method is presented for computing the fundamental matrix from point correspondences: its singular value decomposition (SVD) is optimized by the Levenberg-Marquardt (LM) method. The search is initialized by optimal correction of the unconstrained ML estimate. There is no need for tentative 3D reconstruction. The accuracy achieves the theoretical bound (the KCR lower bound).
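A standard building block in rank-constrained computation (though the paper's efficient search and optimal correction go beyond it) is the Frobenius-optimal rank-2 projection: by the Eckart-Young theorem, zeroing the smallest singular value gives the closest rank-2 matrix. An illustrative sketch:

```python
import numpy as np

def nearest_rank2(F):
    """Closest rank-2 matrix to F in Frobenius norm (Eckart-Young):
    zero the smallest singular value and recompose."""
    U, s, Vt = np.linalg.svd(F)
    s[-1] = 0.0
    return U @ np.diag(s) @ Vt
```

This enforces the det F = 0 constraint required of a fundamental matrix, at the cost of ignoring the statistically preferred metric that the paper's search optimizes.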