Results 1–4 of 4
Tree Data Structures for N-body Simulation
In Proc. 37th Ann. Symp. Foundations of Comp. Sci., 1997
Abstract

Cited by 8 (0 self)
In this paper, we study data structures for use in N-body simulation. We concentrate on the spatial decomposition tree used in particle-cluster force evaluation algorithms such as the Barnes-Hut algorithm. We prove that a k-d tree is asymptotically inferior to a spatially balanced tree. We show that the worst-case complexity of the force evaluation algorithm using a k-d tree is Θ(n log³ n log L), compared with Θ(n log L) for an oct-tree, where L is the separation ratio of the set of points. We also investigate improving the constant factor of the algorithm, and present several methods which improve over the standard oct-tree decomposition. Finally, we consider whether or not the bounding box of a point set should be "tight", and show that it is only safe to use tight bounding boxes for binary decompositions. The results are all directly applicable to practical implementations of N-body algorithms.

1 Introduction
The gravitational force computation problem is: given a set of n ...
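The particle-cluster idea this abstract describes can be sketched in a few lines. Below is a minimal illustrative quadtree (the 2-D analogue of the oct-tree discussed above) with a Barnes-Hut-style opening-angle test. The class and function names, the θ = 0.5 default, and the unit gravitational constant are choices made for this sketch, not taken from the paper.

```python
import math

class Cell:
    """Square cell of a 2-D quadtree (planar analogue of an oct-tree)."""
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half  # cell center and half-width
        self.n = 0                   # number of bodies in this subtree
        self.mass = 0.0
        self.mx = self.my = 0.0      # mass-weighted coordinate sums (for centroid)
        self.body = None             # (x, y, m) while the cell holds a single body
        self.kids = None             # four children once the cell is split

    def child_for(self, x, y):
        """Return (creating on demand) the child quadrant containing (x, y)."""
        if self.kids is None:
            h = self.half / 2
            self.kids = [Cell(self.cx + (2 * (i & 1) - 1) * h,
                              self.cy + (2 * (i >> 1) - 1) * h, h)
                         for i in range(4)]
        return self.kids[(x >= self.cx) + 2 * (y >= self.cy)]

def insert(cell, x, y, m):
    """Insert a body of mass m; distinct positions are assumed."""
    cell.n += 1
    cell.mass += m
    cell.mx += m * x
    cell.my += m * y
    if cell.n == 1:                  # empty leaf: keep the body here
        cell.body = (x, y, m)
        return
    if cell.body is not None:        # occupied leaf: push the old body down
        bx, by, bm = cell.body
        cell.body = None
        insert(cell.child_for(bx, by), bx, by, bm)
    insert(cell.child_for(x, y), x, y, m)

def force(cell, x, y, theta=0.5, eps=1e-9):
    """Approximate field at (x, y): a far cluster acts as its center of mass."""
    if cell.n == 0:
        return 0.0, 0.0
    px, py = cell.mx / cell.mass, cell.my / cell.mass
    dx, dy = px - x, py - y
    r = math.hypot(dx, dy)
    # Open the cell only when it subtends too large an angle from (x, y).
    if cell.kids is None or (2 * cell.half) / (r + eps) < theta:
        if r < eps:                  # skip self-interaction
            return 0.0, 0.0
        f = cell.mass / (r * r)      # G = 1, unit test mass
        return f * dx / r, f * dy / r
    fx = fy = 0.0
    for kid in cell.kids:
        gx, gy = force(kid, x, y, theta, eps)
        fx += gx
        fy += gy
    return fx, fy
```

The opening-angle test is what makes the decomposition's depth matter: a spatially balanced tree bounds how many cells must be opened per body, which is the quantity the Θ-bounds above compare.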
Approximate Complex Polynomial Evaluation In Near Constant Work Per Point
, 1999
Abstract

Cited by 7 (0 self)
Given the n complex coefficients of a degree n − 1 complex polynomial, we wish to evaluate the polynomial at a large number m ≫ n of points on the complex plane. This problem is required by many algebraic computations and so is considered in most basic algorithm texts (e.g., [A. V. Aho, J. E. Hopcroft, and J. D. Ullman, The Design and Analysis of Computer Algorithms, Addison-Wesley, 1974]). We assume an arithmetic model of computation, where on each step we can execute an arithmetic operation, which is computed exactly. All previous exact algorithms [C. M. Fiduccia, Proceedings 4th Annual ACM Symposium on Theory of Computing, 1972, pp. 88–93; H. T. Kung, Fast Evaluation and Interpolation, Carnegie-Mellon, 1973; A. B. Borodin and I. Munro, The Computational Complexity of Algebraic and Numerical Problems, American Elsevier, 1975; V. Pan, A. Sadikou, E. Landowne, and O. Tiga, Comput. Math. Appl., 25 (1993), pp. 25–30] cost at least Ω(log² n) work per point, and previously, the...
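For contrast with the near-constant-work-per-point result this abstract claims, the classical baseline is Horner's rule, which spends Θ(n) multiply-adds per evaluation point, so Θ(nm) overall for m points. A minimal sketch (the function name and sample inputs are illustrative only):

```python
def horner(coeffs, z):
    """Evaluate sum(coeffs[k] * z**k) with Horner's rule: n multiply-adds."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * z + c
    return acc

# p(z) = 1 + 2z + 3z^2, evaluated at a few complex points.
coeffs = [1, 2, 3]
points = [1j, 2 + 0j, 0.5 - 0.5j]
values = [horner(coeffs, z) for z in points]   # Theta(n) work per point
```

Evaluating this way costs Θ(n) per point; the algorithms cited in the abstract improve the per-point cost below that in their respective models.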
DOI: 10.1007/s004530010040
Algorithmica: An International Journal in Computer Science, 1996
Abstract
In this paper we show that if the input points to the geometric closest pair problem are given with limited precision (each coordinate is specified with O(log n) bits), then we can compute the closest pair in O(n log log n) time. We also apply our spatial decomposition technique to the k-nearest neighbor and n-body problems, achieving similar improvements.
unknown title
Abstract
shows that there exists a randomized algebraic algorithm that has worst-case expected complexity O(n) [8], thus beating the best possible deterministic algebraic algorithm. Randomization allows operations such as hashing to be performed in constant expected time, and in fact this is a key element of Rabin's algorithm. To demonstrate this dependence, Fortune and Hopcroft gave an O(n log log n) time deterministic algorithm by augmenting their model with an operation that is essentially hashing in constant time [3]. In this paper we consider only deterministic algorithms, but assume that input points are represented as fixed-point binary values with O(log n) bits. In addition, we augment our model with the floor function; equivalently, we could allow constant-time binary shift and mask operations. Both of these assumptions seem reasonable given today's computing hardware, and in fact seem more realistic than the algebraic model's assumption that arbitrary-precision real numbers can be stored and manipulated. Under such a model, we show that the simple closest pair and k-nearest neighbors problems can be solved in O(n log log n) time. This is currently the only algorithm to beat the Ω(n log n) bound ...
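The floor-function augmentation described above is what makes grid bucketing possible in this model: ⌊x/d⌋ maps a coordinate to a cell index in constant time, playing the role of a hash. The sketch below uses that idea to bucket points and compare only same-cell and adjacent-cell pairs; it illustrates the bucketing step under an assumed upper bound d on the closest-pair distance, and is not the paper's O(n log log n) algorithm. All names are invented for the sketch.

```python
import math

def grid_closest_pair(points, d):
    """Find the closest pair among 2-D points, assuming its distance is < d.

    Each point is 'hashed' to a grid cell of side d via floor; any pair at
    distance < d must then lie in the same or an adjacent cell, so only
    those pairs need to be compared.
    """
    grid = {}
    for p in points:
        key = (math.floor(p[0] / d), math.floor(p[1] / d))
        grid.setdefault(key, []).append(p)

    best_dist, best_pair = float("inf"), None
    for (gx, gy), bucket in grid.items():
        # Candidates: this bucket plus the eight neighbouring cells.
        candidates = list(bucket)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    candidates += grid.get((gx + dx, gy + dy), [])
        for a in bucket:
            for b in candidates:
                if a is b:
                    continue
                dist = math.dist(a, b)
                if dist < best_dist:
                    best_dist, best_pair = dist, (a, b)
    return best_dist, best_pair
```

With well-distributed points each cell holds O(1) points, so the comparison phase is linear; the full result above comes from finding a suitable d within the O(n log log n) budget, which this sketch does not attempt.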