## The modified pbM-estimator method and a runtime analysis technique for the RANSAC family (2005)

Venue: Proc. IEEE Conf. on Computer Vision and Pattern Recognition

Citations: 9 (2 self)

### BibTeX

@INPROCEEDINGS{Rozenfeld05themodified,
  author    = {Stas Rozenfeld and Ilan Shimshoni},
  title     = {The modified pbM-estimator method and a runtime analysis technique for the RANSAC family},
  booktitle = {Proc. IEEE Conf. on Computer Vision and Pattern Recognition},
  year      = {2005},
  pages     = {1113--1120}
}

### Abstract

Robust regression techniques are used today in many computer vision algorithms. Chen and Meer recently presented a new robust regression technique named the projection based M-estimator. Unlike other methods in the RANSAC family of techniques, where performance depends on a user-supplied scale parameter, in the pbM-estimator technique this scale parameter is estimated automatically from the data using kernel smoothing density estimation. In this work we improve the performance of the pbM-estimator by changing its cost function. Replacing the cost function of the pbM-estimator with the changed one yields the modified pbM-estimator. The cost function of the modified pbM-estimator is more stable relative to the scale parameter and is also a better classifier. Thus we get a more robust and effective technique. A new general method to estimate the runtime of robust regression algorithms is proposed. Using it we show that the modified pbM-estimator runs 2-3 times faster than the pbM-estimator. Experimental results of fundamental matrix estimation are presented, demonstrating the correctness of the proposed analysis method and the advantages of the modified pbM-estimator.

### Citations

2740 | Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography
- Fischler, Bolles
- 1981

Citation Context: ...nd within the provided distance (scale) from the recovered model; in the others (MLESAC, etc.) we maximize a likelihood function. The robust regression techniques most popular in computer vision, RANSAC [4], MSAC [12] and MLESAC [11], use parameters provided by the user to solve the problem. The scale parameter is used in the cost function and the proportion of inliers is used to define the stopping crit...
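As context for the contrast the snippet draws, here is a minimal RANSAC loop for 2-D line fitting, built around the user-supplied scale threshold that the pbM-estimator replaces with an automatic, data-driven estimate. All names and data are illustrative, not from the paper:

```python
import random


def ransac_line(points, scale, trials=200, seed=0):
    """Minimal RANSAC for a 2-D line y = a*x + b.

    `scale` is the user-supplied inlier threshold that the
    pbM-estimator estimates automatically from the data instead.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(trials):
        # elemental subset: the two points that define a line
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical line; skipped in this sketch
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # score = number of points within `scale` of the candidate line
        inliers = sum(1 for x, y in points if abs(y - (a * x + b)) <= scale)
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

With data mostly on y = 2x + 1 plus a couple of gross outliers, the loop recovers the true line as long as `scale` is chosen sensibly; a badly chosen `scale` is exactly the failure mode the pbM-estimator is designed to avoid.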

1336 | Robust Regression and Outlier Detection
- Rousseeuw, Leroy
- 1987

Citation Context: ...ate [1]. The quality of the candidate is then evaluated using all the data points, and the final estimate is found by taking the model which obtained the maximal score over a number of such candidates [9]. In real applications, however, the outliers dominate the data and this number becomes unfeasibly large. A possible solution is guided sampling, in which additional information about the probability o...
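The snippet's point about the number of candidates becoming unfeasibly large follows from the standard stopping-criterion formula: to see at least one outlier-free elemental subset with probability p, one needs about log(1-p)/log(1-w^m) samples, where w is the inlier ratio and m the subset size. A generic sketch of this computation (not code from the paper):

```python
import math


def required_samples(inlier_ratio, subset_size, confidence=0.99):
    """Number of elemental subsets needed so that, with probability
    `confidence`, at least one sampled subset is outlier-free."""
    p_clean = inlier_ratio ** subset_size  # P(one subset is all inliers)
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_clean))
```

For the eight-point elemental subsets used in fundamental matrix estimation, even a 50% inlier ratio already requires over a thousand samples, and the count explodes as the outliers come to dominate.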

474 | Kernel Smoothing
- Wand, Jones
- 1995

Citation Context: ...e values are a sample of some unknown distribution. The density of this distribution $d_{corr}(x)$ can be estimated using the kernel smoothing technique with the bandwidth defined by (7) (for details see [13]). Using the kernel smoothing technique we define $d_{corr}(x)$ by its values on the set $\{x^c_0, \ldots, x^c_{I_c}\}$, where $x^c_i - x^c_{i-1} = \delta_{x^c}$, $x^c_0 = 0$, $d_{corr}(x)$ is negligible for $x > x^c_{I_c}$, and $\delta_{x^c}$ is small. Ty...
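The grid construction described in the snippet can be sketched as follows, assuming an Epanechnikov kernel; the function and parameter names are ours, not the paper's:

```python
def tabulate_density(samples, h, step, tol=1e-6):
    """Tabulate a kernel-smoothed density on the uniform grid
    {0, step, 2*step, ...}, stopping once the values become negligible."""

    def kde(x):
        # Epanechnikov kernel estimate at x with bandwidth h
        total = 0.0
        for s in samples:
            u = (x - s) / h
            if abs(u) < 1.0:
                total += 0.75 * (1.0 - u * u)
        return total / (len(samples) * h)

    table, i = [], 0
    while True:
        v = kde(i * step)
        table.append(v)
        # stop once we are past the data and the density is negligible
        if v < tol and i * step > max(samples):
            break
        i += 1
    return table
```

The resulting table plays the role of the tabulated $d_{corr}$ values: it is computed once and then queried cheaply, with the grid spacing trading accuracy against memory.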

268 | MLESAC: A new robust estimator with application to estimating image geometry. Computer Vision and Image Understanding 78
- Torr, Zisserman
- 2000

Citation Context: ...the provided distance (scale) from the recovered model; in the others (MLESAC, etc.) we maximize a likelihood function. The robust regression techniques most popular in computer vision, RANSAC [4], MSAC [12] and MLESAC [11], use parameters provided by the user to solve the problem. The scale parameter is used in the cost function and the proportion of inliers is used to define the stopping criterion...

230 | The Development and Comparison of Robust Methods for Estimating the Fundamental Matrix. IJCV
- Torr, Murray
- 1997

Citation Context: ...tance (scale) from the recovered model; in the others (MLESAC, etc.) we maximize a likelihood function. The robust regression techniques most popular in computer vision, RANSAC [4], MSAC [12] and MLESAC [11], use parameters provided by the user to solve the problem. The scale parameter is used in the cost function and the proportion of inliers is used to define the stopping criterion...

126 | In defense of the eight-point algorithm
- Hartley
- 1997

Citation Context: ...s, each point is replaced with the closest point that lies on the found hyperplane. Note that sometimes the values of the data points $y_i$ are functions of measurements (fundamental matrix estimation [5], ellipse fitting [6], etc.). In these cases the linear EIV model is only an estimate of the correct model. All the major robust regression techniques used in computer vision can be expressed as M-est...

84 | Heteroscedastic regression in computer vision: Problems with bilinear constraint
- Leedan, Meer

Citation Context: ...aced with the closest point that lies on the found hyperplane. Note that sometimes the values of the data points $y_i$ are functions of measurements (fundamental matrix estimation [5], ellipse fitting [6], etc.). In these cases the linear EIV model is only an estimate of the correct model. All the major robust regression techniques used in computer vision can be expressed as M-estimators [7], [2], and ...

70 | Locally optimized RANSAC
- Chum, Matas, et al.
- 2003

Citation Context: ...in which additional information about the probability of a data point being an inlier is integrated into the sampling process [10]. There are additional techniques that speed up the computations [8], [3]. The idea of these techniques is simple: there are two tests for a model parameter candidate, a computationally cheap one and a computationally expensive one. If the candidate passes the first te...

57 | Guided sampling and consensus for motion estimation
- Tordoff, Murray
- 2002

Citation Context: ...number becomes unfeasibly large. A possible solution is guided sampling, in which additional information about the probability of a data point being an inlier is integrated into the sampling process [10]. There are additional techniques that speed up the computations [8], [3]. The idea of these techniques is simple: there are two tests for a model parameter candidate, a computationally cheap one an...
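The guided-sampling idea of [10] can be sketched as weighted sampling of elemental subsets without replacement; the weighting scheme below is a simple illustration, not the one used in the cited paper:

```python
import random


def guided_sample(weights, subset_size, rng):
    """Draw an elemental subset without replacement, favouring data
    points that are believed more likely to be inliers."""
    pool = list(range(len(weights)))  # indices of the data points
    w = list(weights)
    subset = []
    for _ in range(subset_size):
        i = rng.choices(range(len(pool)), weights=w)[0]
        subset.append(pool.pop(i))
        w.pop(i)
    return subset
```

Points with high weights are drawn much more often, so outlier-free subsets appear far earlier than under uniform sampling, which is the whole point of guiding the sampler.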

49 | Randomized RANSAC with Td,d test
- Matas, Chum
- 2004

Citation Context: ...ing, in which additional information about the probability of a data point being an inlier is integrated into the sampling process [10]. There are additional techniques that speed up the computations [8], [3]. The idea of these techniques is simple: there are two tests for a model parameter candidate, a computationally cheap one and a computationally expensive one. If the candidate passes the fir...
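The cheap-test/expensive-test scheme of [8], [3] can be sketched as a pretest on a few randomly chosen points, run before the full inlier count. This is a simplified illustration; the actual Td,d test and its parameter choices differ:

```python
import random


def score_with_pretest(residuals, scale, d=1, rng=None):
    """Evaluate a model candidate with a cheap pretest first: `d`
    randomly chosen residuals must all be inliers before the expensive
    full inlier count is run. Returns None when the candidate is
    rejected early."""
    rng = rng or random.Random(0)
    for r in rng.sample(residuals, d):  # cheap test on d points
        if abs(r) > scale:
            return None
    # expensive test: inlier count over all the data
    return sum(1 for r in residuals if abs(r) <= scale)
```

Because most candidates come from contaminated subsets, the pretest rejects the bulk of them after looking at only `d` points, and the expensive full evaluation is spent on the few promising ones.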

40 | Robust regression
- Li
- 1985

Citation Context: ...e fitting [6], etc.). In these cases the linear EIV model is only an estimate of the correct model. All the major robust regression techniques used in computer vision can be expressed as M-estimators [7], [2], and then reformulated in [1] in the following way:

$$[\hat{\alpha}, \hat{\theta}] = \arg\max_{\alpha,\theta} \frac{1}{n} \sum_{i=1}^{n} k\!\left(\frac{y_i^T \theta - \alpha}{s}\right), \qquad (2)$$

where $k$ is the M-kernel function and

$$k(u) = \begin{cases} c\,(1-u^2)^3, & \text{if } |u| \le 1 \\ 0, & \text{if } |u| \ge 1 \end{cases}$$

is ...
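The M-kernel appearing in Eq. (2) of the snippet is easy to state directly in code; the constant `c` and the function names here are illustrative:

```python
def m_kernel(u, c=1.0):
    """Redescending M-kernel: k(u) = c*(1 - u^2)^3 for |u| <= 1, else 0."""
    return c * (1.0 - u * u) ** 3 if abs(u) <= 1.0 else 0.0


def m_score(projections, alpha, s, c=1.0):
    """M-estimator objective (1/n) * sum_i k((y_i^T theta - alpha) / s);
    `projections` stands for the already-computed values y_i^T theta."""
    n = len(projections)
    return sum(m_kernel((p - alpha) / s, c) for p in projections) / n
```

Because the kernel redescends to zero, points farther than one scale unit from the candidate hyperplane contribute nothing to the score, which is what makes the estimator robust to gross outliers.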

34 | Robust Regression with Projection Based M-estimators. ICCV
- Chen, Meer
- 2003

Citation Context: ...the optimization criterion of the robust regression. Each time we sample an elemental subset, a subset that contains the smallest number of data points which uniquely defines a model parameter candidate [1]. The quality of the candidate is then evaluated using all the data points, and the final estimate is found by taking the model which obtained the maximal score over a number of such candidates [9]. In...

21 | Robust regression for data with multiple structures
- Chen, Meer, et al.
- 2001