## Efficient Randomized Algorithms for Robust Estimation of Circular Arcs and Aligned Ellipses (2001)

### Download Links

- [www.cs.umd.edu]
- [ftp.cs.umd.edu]
- DBLP

### Other Repositories/Bibliography

Citations: 2 (1 self)

### BibTeX

@MISC{Mount01efficientrandomized,
  author = {David M. Mount and Nathan S. Netanyahu},
  title = {Efficient Randomized Algorithms for Robust Estimation of Circular Arcs and Aligned Ellipses},
  year = {2001}
}

### Abstract

Fitting two-dimensional conic sections (e.g., circular and elliptical arcs) to a finite collection of points in the plane is an important problem in statistical estimation and has significant industrial applications. Recently there has been a great deal of interest in robust estimators, because of their lack of sensitivity to outlying data points.

### Citations

9069 | Introduction to Algorithms
- Cormen, Leiserson, et al.
- 1991
Citation Context: ...of respective ranks k_lo = max(1, ⌊mk/n − 3√m/2⌋) and k_hi = min(m, ⌈mk/n + 3√m/2⌉). This can be done in O(m) time using any fast (possibly randomized) selection algorithm (see, e.g., [20, 19, 11]). The kth smallest element is less than x_lo if and only if fewer than k_lo sampled elements are less than the kth smallest element. Since the probability that a given element is less than or equal t...
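
The O(m) selection step quoted in this excerpt can be sketched with randomized quickselect, a standard Las Vegas routine (the helper below is ours for illustration, not code from the paper): pick a random pivot, partition, and recurse into the side containing rank k.

```python
import random

def quickselect(items, k):
    """Return the kth smallest element (1-indexed) of `items`.

    Las Vegas randomized selection: always correct, expected O(m) time
    over the random pivot choices."""
    assert 1 <= k <= len(items)
    lst = list(items)
    while True:
        pivot = random.choice(lst)
        less = [x for x in lst if x < pivot]
        equal_count = sum(1 for x in lst if x == pivot)
        if k <= len(less):
            lst = less                      # answer lies strictly below the pivot
        elif k <= len(less) + equal_count:
            return pivot                    # the pivot itself has rank k
        else:
            k -= len(less) + equal_count    # discard the pivot and everything below it
            lst = [x for x in lst if x > pivot]
```

For example, `quickselect([5, 1, 4, 2, 3], 2)` returns `2`.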

2116 |
Robust Statistics
- Huber
- 1981
Citation Context: ...a small number of outlying points can perturb the function of fit by an arbitrarily large amount. For this reason, there has been a growing interest in a class of estimators, called robust estimators [25, 22, 38], which do not suffer from this deficiency. Define the breakdown point of an estimator to be the fraction of outlying data points (up to 50%) that may cause the estimator to take on an arbitrarily lar...

1712 |
An Introduction to Probability Theory and
- Feller
- 1970
Citation Context: ...x_k is essentially the probability that this random variable is at least three standard deviations below its mean value. By applying Chernoff's bounds (see, e.g., [9, 17]) and Chebyshev's inequality [18], it follows that this probability is 1/Ω(√m). See Lemmas 3.1 and 3.2 in [12] for complete details. A similar argument applies for k_hi. The probability that the kth smallest element does...

1309 |
Robust Regression and Outlier Detection
- Rousseeuw, Leroy
- 1987
Citation Context: ...a small number of outlying points can perturb the function of fit by an arbitrarily large amount. For this reason, there has been a growing interest in a class of estimators, called robust estimators [25, 22, 38], which do not suffer from this deficiency. Define the breakdown point of an estimator to be the fraction of outlying data points (up to 50%) that may cause the estimator to take on an arbitrarily lar...

769 |
A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations, Ann
- Chernoff
- 1952
Citation Context: ...k_lo sampled elements are less than x_k is essentially the probability that this random variable is at least three standard deviations below its mean value. By applying Chernoff's bounds (see, e.g., [9, 17]) and Chebyshev's inequality [18], it follows that this probability is 1/Ω(√m). See Lemmas 3.1 and 3.2 in [12] for complete details. A similar argument applies for k_hi. The probability t...

697 |
Robust Statistics: The Approach Based on Influence Functions
- Hampel, Ronchetti, et al.
- 1986
Citation Context: ...a small number of outlying points can perturb the function of fit by an arbitrarily large amount. For this reason, there has been a growing interest in a class of estimators, called robust estimators [25, 22, 38], which do not suffer from this deficiency. Define the breakdown point of an estimator to be the fraction of outlying data points (up to 50%) that may cause the estimator to take on an arbitrarily lar...

677 |
Mathematica: A System for Doing Mathematics by Computer
- Wolfram
- 1988
Citation Context: ...remaining pairs. We first define A_{i,j} = med_{k≠i,j} med_{l≠i,j,k} A_{i,j,k,l}.² Some of the longer derivations presented here have been verified with the help of the Mathematica software system [50]. Copies of the Mathematica scripts containing these derivations are available from the authors. A_{i,j} can be interpreted as a horizontal radius estimator for two fixed points, p_i, p_j. (It plays, ...

607 |
Use of the Hough Transformation to Detect Lines and Curves in Pictures
- Duda, Hart
- 1972
Citation Context: ...regards to noise. Thus, they are likely to be sensitive to outlying data. Amir [2] has introduced an alternative technique, the "cord (sic.)" method, which is presumably more robust. His "Hough-like" [24, 14] technique is applicable, primarily, to edge data from an image. In general, however, the method is likely to be sensitive to quantization effects due to discretization of the parameter space. More re...

417 |
Method and means for recognizing complex patterns
- Hough
- 1962
Citation Context: ...regards to noise. Thus, they are likely to be sensitive to outlying data. Amir [2] has introduced an alternative technique, the "cord (sic.)" method, which is presumably more robust. His "Hough-like" [24, 14] technique is applicable, primarily, to edge data from an image. In general, however, the method is likely to be sensitive to quantization effects due to discretization of the parameter space. More re...

392 | Expected Time Bounds for Selection
- Floyd, Rivest
- 1975
Citation Context: ...of respective ranks k_lo = max(1, ⌊mk/n − 3√m/2⌋) and k_hi = min(m, ⌈mk/n + 3√m/2⌉). This can be done in O(m) time using any fast (possibly randomized) selection algorithm (see, e.g., [20, 19, 11]). The kth smallest element is less than x_lo if and only if fewer than k_lo sampled elements are less than the kth smallest element. Since the probability that a given element is less than or equal t...

260 | ε-nets and simplex range queries
- Haussler, Welzl
- 1987
Citation Context: ...et. Recall that we need to solve this problem for a given set of l ≤ (n − 1)^β m bisectors (where the value β < 1 was left unspecified). Using recent results from the theory of range searching [49, 16, 23, 8, 30], we will show that there exists γ < 1 (depending on the complexity of the range searching algorithms), such that, after O(n log n) preprocessing (which will be common to all of the bisectors), the m...

236 |
A dichromatic framework for balanced trees
- Guibas, Sedgewick
- 1978
Citation Context: ...ave counted all of the segment's intersections. The stack can be implemented by a simple modification of virtually any type of balanced binary search tree, for example, a red-black tree (see, e.g., [21] or [11] (Chapters 14, 15)). The tree is modified for the purposes of counting in the following manner. The segments that are currently on the stack are stored in the leaves of the tree. They are orde...

137 |
Fitting conic sections to scattered data
- Bookstein
Citation Context: ...that there is a simple method of reducing the problem of fitting algebraic curves to the linear problem of fitting hyperplanes in higher dimensions, through a process called linearization (see, e.g., [3]). For example, fitting a circle of the form (x − a)² + (y − b)² = r² to a set of data points (x_i, y_i) in the plane can be reduced to the problem of fitting a plane in 3-D space. Thi...
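
The linearization in this excerpt can be made concrete with a short sketch (ours, and deliberately plain, non-robust least squares, shown only to illustrate the reduction): substituting z = x² + y² turns (x − a)² + (y − b)² = r² into the linear model z = c₁x + c₂y + c₃ with c₁ = 2a, c₂ = 2b, c₃ = r² − a² − b².

```python
import math

def fit_circle_linearized(pts):
    """Fit (x - a)^2 + (y - b)^2 = r^2 by linearization: solve the 3x3
    least-squares normal equations for c = (2a, 2b, r^2 - a^2 - b^2)
    in the linear model x^2 + y^2 = c1*x + c2*y + c3."""
    # Accumulate the normal equations A^T A c = A^T z, rows (x, y, 1), z = x^2 + y^2.
    M = [[0.0] * 4 for _ in range(3)]  # augmented 3x4 system
    for x, y in pts:
        row = (x, y, 1.0)
        z = x * x + y * y
        for i in range(3):
            for j in range(3):
                M[i][j] += row[i] * row[j]
            M[i][3] += row[i] * z
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                for j in range(col, 4):
                    M[r][j] -= f * M[col][j]
    c = [M[i][3] / M[i][i] for i in range(3)]
    a, b = c[0] / 2.0, c[1] / 2.0
    r = math.sqrt(c[2] + a * a + b * b)
    return a, b, r
```

Because this is an ordinary least-squares fit in the linearized variables, a single gross outlier can still move the recovered circle arbitrarily, which is exactly the deficiency the robust estimators discussed in this paper address.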

85 |
Estimates of the Regression Coefficient based on Kendall's Tau
- Sen
- 1968
Citation Context: ...ollowing. Theil-Sen estimator: The slope of the line of fit is taken to be the median¹ of the set of (n choose 2) slopes that result by passing a line through each pair of distinct points in the data set [47, 40]. (The intercept is defined analogously, in terms of line intercepts.) In the plane, the Theil-Sen estimator has a breakdown point of ≈ 29.3%. This problem has been studied under the name of slope-se...
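
The Theil-Sen definition quoted here translates directly into a brute-force O(n²) sketch (ours; the algorithms this paper builds on compute the same median slope in O(n log n) without enumerating all pairs). Note that Python's `statistics.median` averages the two middle ranks for even counts, whereas the paper fixes the median at rank ⌈m/2⌉.

```python
from itertools import combinations
from statistics import median

def theil_sen_slope(pts):
    """Theil-Sen slope: the median of the C(n, 2) pairwise slopes,
    computed by brute-force enumeration of all point pairs."""
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in combinations(pts, 2)
              if x2 != x1]  # skip vertical pairs
    return median(slopes)
```

On five points, four of which lie on y = 2x + 1 with one gross outlier, the majority of pairwise slopes equal 2, so the estimate is unaffected by the outlier.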

84 |
Introduction to Algebraic Geometry
- Semple, Roth
- 1985
Citation Context: ...tween E[ρ] and either E[ρ′] or E[ρ″] (in addition to the four common points). However, all of the ellipses in this family are distinct polynomial curves of order 2. Hence by Bézout's Theorem [39], there can be no more than four intersection points between any two of them. □ 4.2 The Dual Arrangement of Hyperbolas. Recall that in the circular case, we considered a dual transformation in which e...

80 | On range searching with semialgebraic sets
- Agarwal, Matoušek
- 1994
Citation Context: ...t pass through p_i and p_j to a range counting query over the set of data points, where the range can be described in terms of a constant number of Boolean operations on circles. Agarwal and Matoušek [1] have shown that there exists γ < 1 such that these queries can be solved in O(n^γ) time and O(n) space after O(n log n) preprocessing. Because the result of the range query can be identified as ...

77 | Efficient partition trees
- Matoušek
- 1992
Citation Context: ...et. Recall that we need to solve this problem for a given set of l ≤ (n − 1)^β m bisectors (where the value β < 1 was left unspecified). Using recent results from the theory of range searching [49, 16, 23, 8, 30], we will show that there exists γ < 1 (depending on the complexity of the range searching algorithms), such that, after O(n log n) preprocessing (which will be common to all of the bisectors), the m...

57 |
Proof of a program: Find
- Hoare
- 1971
Citation Context: ...of respective ranks k_lo = max(1, ⌊mk/n − 3√m/2⌋) and k_hi = min(m, ⌈mk/n + 3√m/2⌉). This can be done in O(m) time using any fast (possibly randomized) selection algorithm (see, e.g., [20, 19, 11]). The kth smallest element is less than x_lo if and only if fewer than k_lo sampled elements are less than the kth smallest element. Since the probability that a given element is less than or equal t...

54 |
Quasi-optimal range searching in spaces of finite VC-dimension
- Chazelle, Welzl
- 1989
Citation Context: ...et. Recall that we need to solve this problem for a given set of l ≤ (n − 1)^β m bisectors (where the value β < 1 was left unspecified). Using recent results from the theory of range searching [49, 16, 23, 8, 30], we will show that there exists γ < 1 (depending on the complexity of the range searching algorithms), such that, after O(n log n) preprocessing (which will be common to all of the bisectors), the m...

52 |
An optimal-time algorithm for slope selection
- Cole, Salowe, et al.
- 1989
Citation Context: ...slope of any given rank. There exist asymptotically optimal algorithms for this problem, which run in O(n log n) time and O(n) space. These include algorithms by Cole, Salowe, Steiger, and Szemerédi [10], Katz and Sharir [27], and Brönnimann and Chazelle [4]. It should be noted that all of the above algorithms rely on fairly complicated techniques. There are simpler, practical randomized algorithms b...

49 |
Robust Regression using Repeated Medians
- Siegel
- 1982
Citation Context: ...n averaged over the random choices made in the algorithm, is O(n log n). All the randomized algorithms presented here will be of this same type.) RM estimator: Siegel's repeated median (RM) estimator [42] of a set of n distinct points in the plane {p_1, p_2, ..., p_n} is defined as follows. For each point p_i, let θ_i denote the median of the n − 1 slopes of the lines passing through p_i a...
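
Siegel's repeated median definition quoted here, as a brute-force O(n²) sketch (ours; the Matoušek, Mount, and Netanyahu algorithm cited in the surrounding text achieves O(n log n) expected time without enumerating all slopes):

```python
from statistics import median

def repeated_median_slope(pts):
    """Siegel's repeated median slope: for each point p_i take the
    median of the slopes to all other points, then take the median
    of those n per-point medians."""
    per_point = []
    for i, (xi, yi) in enumerate(pts):
        slopes = [(yj - yi) / (xj - xi)
                  for j, (xj, yj) in enumerate(pts)
                  if j != i and xj != xi]  # skip self and vertical pairs
        per_point.append(median(slopes))
    return median(per_point)
```

The inner median shields each per-point estimate from outliers among its partners, and the outer median shields the final estimate from corrupted points themselves, which is what gives the estimator its 50% breakdown point.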

46 | The Notion of Breakdown Point, in "A Festschrift for Erich Lehmann" - Donoho, Huber - 1983

45 |
A rank invariant method of linear and polynomial regression analysis
- Theil
- 1950
Citation Context: ...ollowing. Theil-Sen estimator: The slope of the line of fit is taken to be the median¹ of the set of (n choose 2) slopes that result by passing a line through each pair of distinct points in the data set [47, 40]. (The intercept is defined analogously, in terms of line intercepts.) In the plane, the Theil-Sen estimator has a breakdown point of ≈ 29.3%. This problem has been studied under the name of slope-se...

42 |
Polygon retrieval
- Willard
- 1982


30 |
Randomized optimal algorithm for slope selection
- Matousek
- 1991
Citation Context: ...Sharir [27], and Brönnimann and Chazelle [4]. It should be noted that all of the above algorithms rely on fairly complicated techniques. There are simpler, practical randomized algorithms by Matoušek [29] and Dillencourt, Mount, and Netanyahu [12], and Shafer and Steiger [41]. (These are Las Vegas randomized algorithms, meaning that they always produce correct results, and on any input, the expected r...

29 |
A randomized algorithm for slope selection
- Dillencourt, Mount, Netanyahu
- 1992
Citation Context: ...It should be noted that all of the above algorithms rely on fairly complicated techniques. There are simpler, practical randomized algorithms by Matoušek [29] and Dillencourt, Mount, and Netanyahu [12], and Shafer and Steiger [41]. (These are Las Vegas randomized algorithms, meaning that they always produce correct results, and on any input, the expected running time, when averaged over the random ...

25 | Optimal Slope Selection via Expanders
- Katz, Sharir
- 1993
Citation Context: ...k. There exist asymptotically optimal algorithms for this problem, which run in O(n log n) time and O(n) space. These include algorithms by Cole, Salowe, Steiger, and Szemerédi [10], Katz and Sharir [27], and Brönnimann and Chazelle [4]. It should be noted that all of the above algorithms rely on fairly complicated techniques. There are simpler, practical randomized algorithms by Matoušek [29] and Di...

25 |
A simple approach for the estimation of circular arc center and its radius, Comput
- Thomas, Chan
- 1989
Citation Context: ...e plane has been studied extensively in the fields of pattern recognition and computer vision. Several representative examples include the papers by Landau [28], Takiyama and Ono [46], Thomas and Chan [48], Chaudhuri [6], Chaudhuri and Kundu [7], Joseph [26], Yi et al. [53], Wu et al. [51], and Yuen and Feng [52]. Unfortunately, most of these methods, even if posed in (geometric) terms that lead to ...

22 | Optimal slope selection via cuttings
- Brönnimann, Chazelle
- 1998
Citation Context: ...mal algorithms for this problem, which run in O(n log n) time and O(n) space. These include algorithms by Cole, Salowe, Steiger, and Szemerédi [10], Katz and Sharir [27], and Brönnimann and Chazelle [4]. It should be noted that all of the above algorithms rely on fairly complicated techniques. There are simpler, practical randomized algorithms by Matoušek [29] and Dillencourt, Mount, and Netanyahu [...

22 |
Halfplanar range search in linear space and O(n^.695) query time
- Edelsbrunner, Welzl
- 1986

18 |
Estimation of a circular arc center and its radius
- Landau
- 1987
Citation Context: ...circular arcs to a given set of points in the plane has been studied extensively in the fields of pattern recognition and computer vision. Several representative examples include the papers by Landau [28], Takiyama and Ono [46], Thomas and Chan [48], Chaudhuri [6], Chaudhuri and Kundu [7], Joseph [26], Yi et al. [53], Wu et al. [51], and Yuen and Feng [52]. Unfortunately, most of these methods, eve...


17 | Unbiased estimation of ellipses by bootstrapping
- Cabrera, Meer
- 1996
Citation Context: ...e same as those that would result from the above definitions. In fact, it was noted by many that estimators based on linearization often yield biased results. See, e.g., Joseph [26], Cabrera and Meer [5], and Netanyahu et al. [35]. Also, Rosin [36] observed empirically that applying linearization to a robust ellipse estimator of a similar type to the ones considered in this paper leads to inaccurate ...

17 | A practical approximation algorithm for the lms line estimator
- Mount, Netanyahu, et al.
- 1997
Citation Context: ...quared residuals. LMS has a breakdown point of 50%. The best algorithms known for LMS, due to Souvaine and Steele [44] and Edelsbrunner and Souvaine [15], run in O(n²) time. (Recently, Mount et al. [34] have presented a Las Vegas approximation algorithm that runs in O(n log n) time.) Mount and Netanyahu [33] showed that it is possible to extend the algorithmic results for computing Theil-Sen and RM ...

16 |
Ellipse fitting by accumulating five-point fits
- Rosin
- 1993
Citation Context: ...bove definitions. In fact, it was noted by many that estimators based on linearization often yield biased results. See, e.g., Joseph [26], Cabrera and Meer [5], and Netanyahu et al. [35]. Also, Rosin [36] observed empirically that applying linearization to a robust ellipse estimator of a similar type to the ones considered in this paper leads to inaccurate results. Thus, it seems desirable that fittin...

16 |
Least median-of-squares regression
- Rousseeuw
- 1984
Citation Context: ...log n) expected time. ¹For the purposes of this paper we define the median of an m-element multiset to be an element of rank ⌈m/2⌉. LMS estimator: Rousseeuw's least median of squares (LMS) estimator [37] is defined to be the line that minimizes the median of the squared residuals. LMS has a breakdown point of 50%. The best algorithms known for LMS, due to Souvaine and Steele [44] and Edelsbrunner and...
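
As a concrete (and crude) illustration of the LMS criterion in this excerpt, the sketch below scores only candidate lines through pairs of data points by their median squared vertical residual. This pair-enumeration search is a common heuristic approximation of our own, not the exact O(n²) algorithms of Souvaine and Steele or Edelsbrunner and Souvaine cited above.

```python
from itertools import combinations
from statistics import median

def lms_line_over_pairs(pts):
    """Approximate Rousseeuw's LMS line: among lines through each pair
    of data points, return the one minimizing the median of squared
    vertical residuals (a heuristic, not an exact LMS algorithm)."""
    best = None
    for (x1, y1), (x2, y2) in combinations(pts, 2):
        if x1 == x2:
            continue  # skip vertical candidate lines
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        score = median((y - (m * x + b)) ** 2 for x, y in pts)
        if best is None or score < best[0]:
            best = (score, m, b)
    return best[1], best[2]
```

With a majority of points on one line, the median squared residual of that line is zero no matter how wild the minority residuals are, which reflects the 50% breakdown point noted in the excerpt.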

13 |
Robust statistics in shape fitting
- Stein, Werman
- 1992
Citation Context: ...d RM circular arc estimators (versus a least squares fit) obtained for a data set having 20% outliers. Figure 1(b) shows the same estimators obtained with 40% outlying data. Recently Stein and Werman [45] have independently introduced similar robust estimators for fitting general 2-D conic sections. Their estimators have the nice property of being rotationally equivariant, meaning that rotating the po...

12 |
Computing median-of-squares regression lines and guided topological sweep
- Edelsbrunner, Souvaine
- 1990
Citation Context: ...d to be the line that minimizes the median of the squared residuals. LMS has a breakdown point of 50%. The best algorithms known for LMS, due to Souvaine and Steele [44] and Edelsbrunner and Souvaine [15], run in O(n²) time. (Recently, Mount et al. [34] have presented a Las Vegas approximation algorithm that runs in O(n log n) time.) Mount and Netanyahu [33] showed that it is possible to extend the ...

12 |
Efficient randomized algorithms for the repeated median line estimator
- Matoušek, Mount, et al.
- 1993
Citation Context: ...The RM intercept is defined analogously, in terms of line intercepts. The RM estimator has a breakdown point of 50%, and the best known algorithm for its computation, due to Matoušek, Mount, and Netanyahu [31], is randomized and runs in O(n log n) expected time. ¹For the purposes of this paper we define the median of an m-element multiset to be an element of rank ⌈m/2⌉. LMS estimator: Rousseeuw's least me...

12 |
Randomized optimal geometric algorithms
- Shafer, Steiger
- 1993
Citation Context: ...l of the above algorithms rely on fairly complicated techniques. There are simpler, practical randomized algorithms by Matoušek [29] and Dillencourt, Mount, and Netanyahu [12], and Shafer and Steiger [41]. (These are Las Vegas randomized algorithms, meaning that they always produce correct results, and on any input, the expected running time, when averaged over the random choices made in the algorithm...

7 |
Unbiased Least-Squares Fitting Of Circular Arcs, Graph
- Joseph
- 1994
Citation Context: ...(b) 40% outliers. be the same as those that would result from the above definitions. In fact, it was noted by many that estimators based on linearization often yield biased results. See, e.g., Joseph [26], Cabrera and Meer [5], and Netanyahu et al. [35]. Also, Rosin [36] observed empirically that applying linearization to a robust ellipse estimator of a similar type to the ones considered in this pape...

5 |
Error propagation
- Yi, Haralick, et al.
- 1993
Citation Context: ...tion and computer vision. Several representative examples include the papers by Landau [28], Takiyama and Ono [46], Thomas and Chan [48], Chaudhuri [6], Chaudhuri and Kundu [7], Joseph [26], Yi et al. [53], Wu et al. [51], and Yuen and Feng [52]. Unfortunately, most of these methods, even if posed in (geometric) terms that lead to less biased results, are either based on a least squares approach ...

4 |
Algorithms for finding the center of circular fiducials
- Amir
- 1990
Citation Context: ...ms that lead to less biased results, are either based on a least squares approach or make simplistic assumptions with regards to noise. Thus, they are likely to be sensitive to outlying data. Amir [2] has introduced an alternative technique, the "cord (sic.)" method, which is presumably more robust. His "Hough-like" [24, 14] technique is applicable, primarily, to edge data from an image. In genera...

3 |
Computationally efficient algorithms for high-dimensional robust estimators
- Mount, Netanyahu
- 1994
Citation Context: ...Steele [44] and Edelsbrunner and Souvaine [15], run in O(n²) time. (Recently, Mount et al. [34] have presented a Las Vegas approximation algorithm that runs in O(n log n) time.) Mount and Netanyahu [33] showed that it is possible to extend the algorithmic results for computing Theil-Sen and RM line estimators to higher dimensions. In dimension d, the problem is to fit a (d − 1)-dimensional hype...

3 |
Robust detection of road segments in noisy aerial images
- Netanyahu, Philomin, et al.
- 1997
Citation Context: ...result from the above definitions. In fact, it was noted by many that estimators based on linearization often yield biased results. See, e.g., Joseph [26], Cabrera and Meer [5], and Netanyahu et al. [35]. Also, Rosin [36] observed empirically that applying linearization to a robust ellipse estimator of a similar type to the ones considered in this paper leads to inaccurate results. Thus, it seems des...

3 |
Efficient time and space algorithms for least median of squares regression
- Souvaine, Steele
- 1987
Citation Context: ...ares (LMS) estimator [37] is defined to be the line that minimizes the median of the squared residuals. LMS has a breakdown point of 50%. The best algorithms known for LMS, due to Souvaine and Steele [44] and Edelsbrunner and Souvaine [15], run in O(n²) time. (Recently, Mount et al. [34] have presented a Las Vegas approximation algorithm that runs in O(n log n) time.) Mount and Netanyahu [33] showed...

2 |
Optimum circular fit to weighted data in multidimensional space
- Chaudhuri, Kundu
- 1993
Citation Context: ...he fields of pattern recognition and computer vision. Several representative examples include the papers by Landau [28], Takiyama and Ono [46], Thomas and Chan [48], Chaudhuri [6], Chaudhuri and Kundu [7], Joseph [26], Yi et al. [53], Wu et al. [51], and Yuen and Feng [52]. Unfortunately, most of these methods, even if posed in (geometric) terms that lead to less biased results, are either based...

2 |
A least square error estimation of the center and radii of concentric arcs
- Takiyama, Ono
- 1989
Citation Context: ...en set of points in the plane has been studied extensively in the fields of pattern recognition and computer vision. Several representative examples include the papers by Landau [28], Takiyama and Ono [46], Thomas and Chan [48], Chaudhuri [6], Chaudhuri and Kundu [7], Joseph [26], Yi et al. [53], Wu et al. [51], and Yuen and Feng [52]. Unfortunately, most of these methods, even if posed in (geometri...

2 |
The Robust Algorithms for Finding the Center of an Arc
- Wu, Wu, et al.
- 1995
Citation Context: ...r vision. Several representative examples include the papers by Landau [28], Takiyama and Ono [46], Thomas and Chan [48], Chaudhuri [6], Chaudhuri and Kundu [7], Joseph [26], Yi et al. [53], Wu et al. [51], and Yuen and Feng [52]. Unfortunately, most of these methods, even if posed in (geometric) terms that lead to less biased results, are either based on a least squares approach or make simplist...