The Quickhull algorithm for convex hulls
ACM Transactions on Mathematical Software, 1996
Cited by 612 (0 self)
Abstract: The convex hull of a set of points is the smallest convex set that contains the points. This article presents a practical convex hull algorithm that combines the two-dimensional Quickhull algorithm with the general-dimension Beneath-Beyond algorithm. It is similar to the randomized, incremental algorithms for convex hull and Delaunay triangulation. We provide empirical evidence that the algorithm runs faster when the input contains nonextreme points and that it uses less memory. Computational geometry algorithms have traditionally assumed that input sets are well behaved. When an algorithm is implemented with floating-point arithmetic, this assumption can lead to serious errors. We briefly describe a solution to this problem when computing the convex hull in two, three, or four dimensions. The output is a set of "thick" facets that contain all possible exact convex hulls of the input. A variation is effective in five or more dimensions.
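The two-dimensional half of the combination described above can be sketched in a few lines. This is a hedged, illustrative reading of 2-D Quickhull (recursive splitting on the point farthest from the current edge), not the paper's general-dimension Qhull implementation; all function names are our own:

```python
def cross(o, a, b):
    # Twice the signed area of triangle (o, a, b); > 0 means b lies left of o -> a.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _hull_side(pts, a, b):
    # Hull vertices strictly left of edge a -> b, recursing on the farthest point.
    left = [p for p in pts if cross(a, b, p) > 0]
    if not left:
        return []
    far = max(left, key=lambda p: cross(a, b, p))
    return _hull_side(left, a, far) + [far] + _hull_side(left, far, b)

def quickhull(points):
    pts = sorted(set(map(tuple, points)))
    if len(pts) < 3:
        return pts
    a, b = pts[0], pts[-1]            # extreme points in lexicographic order
    upper = _hull_side(pts, a, b)     # chain on one side of a -> b
    lower = _hull_side(pts, b, a)     # chain on the other side
    return [a] + upper + [b] + lower  # hull vertices in clockwise order
```

Interior points such as a centroid are discarded before recursion ever touches them, which is the source of the speedup on inputs with many nonextreme points.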
Needed: An Empirical Science Of Algorithms
Operations Research, 1994
Cited by 80 (3 self)
Abstract: ... this article goes to press. Journal editors can be encouraged to seek out referees who have done rigorous empirical studies. Refereeing standards will evolve, particularly as the empirical science develops.
Bayesian Statistics
in WWW', Computing Science and Statistics, 1989
Cited by 29 (1 self)
Abstract: This dissertation presents two topics from opposite disciplines: one is from a parametric realm and the other is based on nonparametric methods. The first topic is a jackknife maximum likelihood approach to statistical model selection, and the second one is a convex hull peeling depth approach to nonparametric massive multivariate data analysis. The second topic includes simulations and applications on massive astronomical data. First, we present a model selection criterion, minimizing the Kullback-Leibler distance by using the jackknife method. Various model selection methods have been developed to choose a model of minimum Kullback-Leibler distance to the true model, such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), minimum description length (MDL), and the bootstrap information criterion. Likewise, the jackknife method chooses a model of minimum Kullback-Leibler distance through bias reduction. This bias, which is inevitable in model ...
Computational geometry - a survey
IEEE Transactions on Computers, 1984
Cited by 23 (4 self)
Abstract: We survey the state of the art of computational geometry, a discipline that deals with the complexity of geometric problems within the framework of the analysis of algorithms. This newly emerged area of activity has found numerous applications in various other disciplines, such as computer-aided design, computer graphics, operations research, pattern recognition, robotics, and statistics. Five major problem areas (convex hulls, intersections, searching, proximity, and combinatorial optimizations) are discussed. Seven algorithmic techniques (incremental construction, plane-sweep, locus, divide-and-conquer, geometric transformation, prune-and-search, and dynamization) are each illustrated with an example. A collection of problem transformations to establish lower bounds for geometric problems in the algebraic computation/decision model is also included.
Space-efficient planar convex hull algorithms
Proc. Latin American Theoretical Informatics, 2002
Cited by 20 (1 self)
Abstract: A space-efficient algorithm is one in which the output is given in the same location as the input and only a small amount of additional memory is used by the algorithm. We describe four space-efficient algorithms for computing the convex hull of a planar point set.
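To illustrate the space-efficient idea (output written into the same array as the input, with O(1) extra memory), here is a minimal sketch of an in-place upper-hull scan over a sorted array; it is our own example of the technique, not one of the paper's four algorithms:

```python
def cross(o, a, b):
    # > 0 when the turn o -> a -> b is counterclockwise.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_place_upper_hull(pts):
    """Overwrite a prefix of pts with its upper hull; return the hull size h.

    Uses only O(1) extra memory: the hull 'stack' lives in pts[0:h], and a
    point at index i is always read before any later write can reach index i.
    """
    pts.sort()                 # in-place sort by x, then y
    h = 0                      # pts[0:h] is the current hull stack
    for i in range(len(pts)):
        p = pts[i]
        while h >= 2 and cross(pts[h - 2], pts[h - 1], p) >= 0:
            h -= 1             # pop vertices that make a non-right turn
        pts[h] = p             # push by overwriting within the same array
        h += 1
    return h                   # pts[0:h] holds the upper hull
```

The lower hull can be produced the same way on the reversed order, which is how such scans are usually paired to get the full hull.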
More Output-Sensitive Geometric Algorithms (Extended Abstract)
In Proc. 35th Annu. IEEE Sympos. Found. Comput. Sci., 1994
Cited by 18 (0 self)
Abstract: A simple idea for speeding up the computation of extrema of a partially ordered set turns out to have a number of interesting applications in geometric algorithms; the resulting algorithms generally replace an appearance of the input size n in the running time by an output size A ≤ n. In particular, the A coordinatewise minima of a set of n points in R^d can be found by an algorithm needing O(nA) time. Given n points uniformly distributed in the unit square, the algorithm needs n + O(n^{5/8}) point comparisons on average. Given a set of n points in R^d, another algorithm can find its A extreme points in O(nA) time. Thinning for nearest-neighbor classification can be done in time O(n log n) ∑_i A_i n_i, finding the A_i irredundant points among n_i points for each class i, where n = ∑_i n_i is the total number of input points. This sharpens a more obvious O(n^3) algorithm, which is also given here. Another algorithm is given that needs O(n) space to compute the convex ...
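The coordinatewise-minima claim can be illustrated with a simple candidate-list filter whose inner loop compares each new point only against the at most A current minima, giving roughly O(nA) comparisons. This is a hedged sketch of the general idea, not the paper's exact algorithm:

```python
def dominates(p, q):
    # p dominates q when p <= q in every coordinate (and p != q).
    return p != q and all(a <= b for a, b in zip(p, q))

def coordinatewise_minima(points):
    minima = []
    for p in points:
        # Discard p if some already-found minimum dominates it; this inner
        # loop runs over at most A current minima, not over all n points.
        if any(dominates(m, p) for m in minima):
            continue
        # Otherwise p is a new minimum; drop any candidates it dominates.
        minima = [m for m in minima if not dominates(p, m)]
        minima.append(p)
    return minima
```

When the output A is small, most points are rejected after a handful of comparisons, which is where the output-sensitive bound comes from.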
Determining the Convex Hull in Large Multidimensional Databases, 2001
Cited by 18 (1 self)
Abstract: Determining the convex hull of a point set is a basic operation for many applications of pattern recognition, image processing, statistics, and data mining.
A Note on Linear Expected Time Algorithms for Finding Convex Hulls
Computing, 1981
Cited by 9 (6 self)
Abstract: Consider n independent identically distributed random vectors from R^d with common density f, and let E(C) be the average complexity of an algorithm that finds the convex hull of these points. Most well-known algorithms satisfy E(C) = O(n) for certain classes of densities. In this note, we show that E(C) = O(n) for algorithms that use a "throw-away" preprocessing step when f is bounded away from 0 and infinity on any nondegenerate rectangle of R^2.
1 Introduction. Let X_1, ..., X_n be independent identically distributed random vectors from R^d with common density f, and let C be the complexity of a given convex hull algorithm for X_1, ..., X_n (thus, C is a random variable). In this note we will discuss several convex hull algorithms and the condition on f that will ensure their linear average-time behavior: E(C) = O(n). (1) In general, the more sophisticated algorithms satisfy (1) for a larger class of densities than do the simple algorithms. The purpose of this note is ...
HOW TO REDUCE THE AVERAGE COMPLEXITY OF CONVEX HULL FINDING ALGORITHMS, 1981
Cited by 9 (4 self)
Abstract: Let X_1, ..., X_n be a sequence of independent R^d-valued random vectors with a common density f. The following class of convex hull finding algorithms is considered: find the extrema in a finite number of carefully chosen directions; eliminate the X_i's that belong to the interior of the polyhedron formed by these extrema; apply an O(A(n)) worst-case complexity algorithm to find the convex hull of the remaining points. We give weak sufficient conditions that imply that the overall average complexity is O(A(n)). We also show that for the standard normal density, the average complexity is O(n) whenever A(n) = n log n.
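The class of algorithms described above begins with a throw-away elimination step: find extrema in a few directions and discard points interior to the polygon they form. A minimal 2-D sketch, essentially the well-known Akl-Toussaint heuristic, with the direction choice (min/max x and y) and all helper names our own:

```python
import math

def _cross(o, a, b):
    # > 0 when b lies strictly left of the directed edge o -> a.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def throw_away(points):
    # Extrema in four chosen directions: minimum/maximum x and y.
    ex = [min(points), max(points),
          min(points, key=lambda p: p[1]), max(points, key=lambda p: p[1])]
    quad = list(dict.fromkeys(ex))             # dedupe, keep order
    if len(quad) < 3:
        return list(points)                    # degenerate polygon: discard nothing
    cx = sum(p[0] for p in quad) / len(quad)   # sort corners counterclockwise
    cy = sum(p[1] for p in quad) / len(quad)   # by angle about their centroid
    quad.sort(key=lambda p: math.atan2(p[1] - cy, p[0] - cx))

    def strictly_inside(p):
        n = len(quad)
        return all(_cross(quad[i], quad[(i + 1) % n], p) > 0 for i in range(n))

    # Points strictly inside the polygon cannot be convex hull vertices.
    return [p for p in points if not strictly_inside(p)]
```

The surviving points are then handed to the O(A(n)) worst-case hull algorithm; under the densities discussed above, so few points survive on average that the overall expected cost stays O(A(n)) or even O(n).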