Results 1–10 of 12
Geometric structures for three-dimensional shape representation
 ACM Trans. Graph
, 1984
Abstract

Cited by 166 (3 self)
Different geometric structures are investigated in the context of discrete surface representation. It is shown that minimal representations (i.e., polyhedra) can be provided by a surface-based method using nearest-neighbor structures or by a volume-based method using the Delaunay triangulation. Both approaches are compared with respect to various criteria, such as space requirements, computation time, constraints on the distribution of the points, facilities for further calculations, and agreement with the actual shape of the object.
Right-Triangulated Irregular Networks
 Algorithmica
, 2001
Abstract

Cited by 31 (1 self)
We describe a hierarchical data structure for representing a digital terrain (height field) which contains approximations of the terrain at different levels of detail. The approximations are based on triangulations of the underlying two-dimensional space using right-angled triangles. The methods we discuss permit a single approximation to have a varying level of approximation accuracy across the surface. Thus, for example, the area close to an observer may be represented with greater detail than areas which lie outside the observer's field of view.
Right Triangular Irregular Networks
 Algorithmica
, 1997
Abstract

Cited by 15 (0 self)
We describe a hierarchical data structure for representing a digital terrain (height field) which contains approximations of the terrain at different levels of detail. The approximations are based on triangulations of the underlying two-dimensional space using right-angled triangles. The methods we discuss allow the approximation to precisely represent the surface in certain areas while coarsely approximating the surface in others. Thus, for example, the area close to an observer may be represented with greater detail than areas which lie outside the observer's field of view. We discuss the application of this hierarchical data structure to the problem of interactive terrain visualization. We point out some of the advantages of this method in terms of memory usage and speed.

1 Introduction. In this paper, we describe a method for approximating a surface which is presented to us as a two-dimensional array of height values. We assume that the height value in location (i, j) of the array is the true...
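The longest-edge bisection underlying such right-triangle hierarchies can be sketched in a few lines. The function and variable names below (`bisect`, `refine`, `root`) are illustrative, not taken from the paper:

```python
def bisect(tri):
    # tri = (apex, a, b): right angle at apex, hypotenuse from a to b.
    # Splitting at the hypotenuse midpoint yields two right triangles
    # whose new right angles sit at that midpoint.
    apex, a, b = tri
    mid = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    return (mid, apex, a), (mid, b, apex)

def refine(tri, depth):
    # Recursively bisect `tri` depth times: 2**depth leaf triangles.
    if depth == 0:
        return [tri]
    left, right = bisect(tri)
    return refine(left, depth - 1) + refine(right, depth - 1)

# The usual root: a unit square split into two right triangles.
root = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
leaves = refine(root, 3)
assert len(leaves) == 8
```

In an actual RTIN the recursion depth varies per region, so terrain near the observer is refined further than distant terrain.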
Practical In-Place Mergesort
, 1996
Abstract

Cited by 10 (3 self)
Two in-place variants of the classical mergesort algorithm are analysed in detail. The first, straightforward variant performs at most N log₂ N + O(N) comparisons and 3N log₂ N + O(N) moves to sort N elements. The second, more advanced variant requires at most N log₂ N + O(N) comparisons and εN log₂ N moves, for any fixed ε > 0 and any N > N(ε). In theory, the second one is superior to advanced versions of heapsort. In practice, due to the overhead in the index manipulation, our fastest in-place mergesort is still about 50 per cent slower than bottom-up heapsort. However, our implementations are practical compared to mergesort algorithms based on in-place merging. Key words: sorting, mergesort, in-place algorithms. CR Classification: F.2.2.
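The N log₂ N + O(N) comparison bound refers to classical mergesort, which a straightforward instrumented version (not the paper's in-place variant, which merges without the auxiliary lists used here) makes concrete:

```python
def mergesort_counting(xs):
    # Top-down mergesort that counts element comparisons and moves.
    stats = {"cmp": 0, "moves": 0}

    def sort(seq):
        if len(seq) <= 1:
            return seq
        mid = len(seq) // 2
        left, right = sort(seq[:mid]), sort(seq[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            stats["cmp"] += 1                 # one comparison per step
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
            stats["moves"] += 1
        tail = left[i:] + right[j:]
        merged.extend(tail)
        stats["moves"] += len(tail)           # copying the leftover run
        return merged

    return sort(xs), stats

data = [5, 3, 8, 1, 9, 2, 7, 6, 0, 4]
out, stats = mergesort_counting(data)
assert out == sorted(data)
# Comparisons stay within N*ceil(log2 N), far below the N^2 of naive sorts.
assert stats["cmp"] <= len(data) * 4
```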
An In-Place Sorting with O(n log n) Comparisons and O(n) Moves
 In Proc. 44th Annual IEEE Symposium on Foundations of Computer Science
, 2003
Abstract

Cited by 9 (0 self)
Abstract. We present the first in-place algorithm for sorting an array of size n that performs, in the worst case, at most O(n log n) element comparisons and O(n) element transports. This solves a long-standing open problem, stated explicitly, e.g., in [J.I. Munro and V. Raman, Sorting with minimum data movement, J. Algorithms, 13, 374–93, 1992], of whether there exists a sorting algorithm that matches the asymptotic lower bounds on all computational resources simultaneously.
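For contrast, O(n) moves alone are easy to achieve: selection sort performs at most n − 1 swaps but pays Θ(n²) comparisons. The difficulty of the cited result lies in achieving O(n log n) comparisons at the same time. A minimal sketch of the easy half:

```python
def selection_sort(a):
    # At most len(a) - 1 swaps, i.e. O(n) element moves,
    # but Theta(n^2) comparisons hidden inside min().
    swaps = 0
    for i in range(len(a) - 1):
        m = min(range(i, len(a)), key=a.__getitem__)
        if m != i:
            a[i], a[m] = a[m], a[i]
            swaps += 1
    return swaps

xs = [5, 3, 8, 1, 9, 2]
n_swaps = selection_sort(xs)
assert xs == [1, 2, 3, 5, 8, 9]
assert n_swaps <= 5
```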
Amortization Results for Chromatic Search Trees, with an Application to Priority Queues
, 1997
Abstract

Cited by 7 (0 self)
In this paper, we prove that only an amortized constant amount of rebalancing is necessary after an update in a chromatic search tree. We also prove that the amount of rebalancing done at any particular level decreases exponentially, going from the leaves toward the root. These results imply that, in principle, a linear number of processes can access the tree simultaneously. We have included one interesting application of chromatic trees. Based on these trees, a priority queue with possibilities for a greater degree of parallelism than previous proposals can be implemented. © 1997 Academic Press
Y = 2x vs. Y = 3x
 In Proc. 8th IEEE Symp. on Logic in Computer Science
, 1994
Abstract

Cited by 3 (0 self)
We show that no formula of first-order logic using linear ordering and the logical relation y = 2x can define the property that the size of a finite model is divisible by 3. This answers a long-standing question which may be of relevance to certain open problems in circuit complexity.

Introduction. Descriptive complexity theory originated with a fundamental result of Fagin [8] which characterized queries computable in nondeterministic polynomial time as classes of models of existential sentences of second-order logic. Subsequently, the basic complexity classes L, NL, P, and PSPACE were also tied to logical languages, particularly with certain extensions of first-order logic by various kinds of inductive definitions (see Immerman [13] for a survey). Pure first-order logic per se appeared more recently in the context of low-level parallel complexity classes in the paper by Gurevich and Lewis [11]. Immerman [12] characterized the nonuniform complexity class AC⁰ by the first-order logi...
A Practical Shortest Path Algorithm with Linear Expected Time
 SUBMITTED TO SIAM J. ON COMPUTING
, 2001
Abstract

Cited by 3 (1 self)
We present an improvement of the multi-level bucket shortest path algorithm of Denardo and Fox [9] and justify this improvement, both theoretically and experimentally. We prove that if the input arc lengths come from a natural probability distribution, the new algorithm runs in linear average time while the original algorithm does not. We also describe an implementation of the new algorithm. Our experimental data suggest that the new algorithm is preferable to the original one in practice. Furthermore, for integral arc lengths that fit into a word of today's computers, the performance is close to that of breadth-first search, suggesting limitations on further practical improvements.
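A single-level bucket queue (Dial's algorithm) is the simpler ancestor of the multi-level structure studied here, and it shows why small integer arc lengths make bucket-based Dijkstra attractive. This sketch is illustrative only, not the paper's algorithm:

```python
from collections import defaultdict

def dial_sssp(graph, source, max_arc):
    # Dijkstra with one bucket per tentative distance value.
    # Requires non-negative integer arc lengths bounded by max_arc.
    # graph: {u: [(v, w), ...]}.
    INF = float("inf")
    dist = {u: INF for u in graph}
    dist[source] = 0
    buckets = defaultdict(list)
    buckets[0].append(source)
    top = max_arc * (len(graph) - 1)   # largest possible finite distance
    for d in range(top + 1):           # scan buckets in increasing order
        while buckets[d]:
            u = buckets[d].pop()
            if dist[u] != d:           # stale entry: skip it
                continue
            for v, w in graph[u]:
                nd = d + w
                if nd < dist[v]:       # improvement: re-insert, don't decrease
                    dist[v] = nd
                    buckets[nd].append(v)
    return dist

g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
assert dial_sssp(g, "a", 5) == {"a": 0, "b": 2, "c": 3}
```

The linear bucket scan is what multi-level schemes like Denardo and Fox's compress when the arc-length range is large.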
An Experimental Study of Sorting and Branch Prediction
Abstract

Cited by 2 (0 self)
Sorting is one of the most important and well-studied problems in Computer Science. Many good algorithms are known which offer various tradeoffs in efficiency, simplicity, memory use, and other factors. However, these algorithms do not take into account features of modern computer architectures that significantly influence performance. Caches and branch predictors are two such features, and while there has been a significant amount of research into the cache performance of general purpose sorting algorithms, there has been little research on their branch prediction properties.

In this paper we empirically examine the behaviour of the branches in all the most common sorting algorithms. We also consider the interaction of cache optimization on the predictability of the branches in these algorithms. We find insertion sort to have the fewest branch mispredictions of any comparison-based sorting algorithm, that bubble and shaker sort operate in a fashion which makes their branches highly unpredictable, that the unpredictability of shellsort’s branches improves its caching behaviour, and that several cache optimizations have little effect on mergesort’s branch mispredictions. We find also that optimizations to quicksort – for example the choice of pivot – have a strong influence on the predictability of its branches. We point out a simple way of removing branch instructions from a classic heapsort implementation, and show also that unrolling a loop in a cache optimized heapsort implementation improves the predictability of its branches. Finally, we note that when sorting random data two-level adaptive branch predictors are usually no better than simpler bimodal predictors. This is despite the fact that two-level adaptive predictors are almost always superior to bimodal predictors in general.
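One way to remove a control-flow branch from heapsort's inner loop, in the spirit of (though not necessarily identical to) the rewrite the abstract mentions, is to select the larger child by arithmetic on a boolean rather than an if/else:

```python
def siftdown_branchfree(a, i, n):
    # Max-heap sift-down. The larger child is chosen by adding a
    # boolean to the left-child index instead of branching; in C this
    # idiom typically compiles to a conditional move.
    while 2 * i + 1 < n:
        l = 2 * i + 1
        # c = l + 1 when the right child exists and is larger, else l.
        c = l + (l + 1 < n and a[l + 1] > a[l])
        if a[c] <= a[i]:
            break
        a[i], a[c] = a[c], a[i]
        i = c

def heapsort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # heapify
        siftdown_branchfree(a, i, n)
    for end in range(n - 1, 0, -1):       # extract max repeatedly
        a[0], a[end] = a[end], a[0]
        siftdown_branchfree(a, 0, end)

xs = [4, 1, 7, 3, 9, 2]
heapsort(xs)
assert xs == [1, 2, 3, 4, 7, 9]
```

The comparison itself still happens; what disappears is the hard-to-predict jump on its outcome, which is the property the paper measures.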
Priority Queues and Dijkstra’s Algorithm ∗
, 2007
Abstract

Cited by 1 (0 self)
We study the impact of using different priority queues on the performance of Dijkstra’s SSSP algorithm. We consider only general priority queues that can handle any type of keys (integer, floating point, etc.); the only exception is that we use as a benchmark the DIMACS Challenge SSSP code [1], which can handle only integer values for distances. Our experiments were focussed on the following:
1. We study the performance of two variants of Dijkstra’s algorithm: the well-known version that uses a priority queue supporting the DecreaseKey operation, and another that uses a basic priority queue supporting only Insert and DeleteMin. For the latter type of priority queue we include several for which high-performance code is available, such as the bottom-up binary heap, the aligned 4-ary heap, and the sequence heap [33].
2. We study the performance of Dijkstra’s algorithm designed for flat memory relative to versions that try to be cache-efficient. For this, in main part, we study the difference in performance of Dijkstra’s algorithm relative to the cache-efficiency of the priority queue used, both in-core and out-of-core. We also study the performance of an implementation
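The Insert/DeleteMin-only variant is commonly implemented with lazy deletion: stale heap entries are skipped on extraction instead of being decreased in place. A minimal sketch using Python's heapq (function and variable names are illustrative, not from the paper):

```python
import heapq

def dijkstra_no_decrease_key(graph, source):
    # Dijkstra's algorithm with a priority queue supporting only
    # Insert (heappush) and DeleteMin (heappop). An improved distance
    # is handled by pushing a fresh entry; outdated entries are
    # detected and discarded when popped.
    dist = {source: 0}
    heap = [(0, source)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue                       # stale entry: already settled
        done.add(u)
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))   # Insert; no DecreaseKey
    return dist

g = {"s": [("a", 4), ("b", 1)], "b": [("a", 2)], "a": []}
assert dijkstra_no_decrease_key(g, "s") == {"s": 0, "a": 3, "b": 1}
```

The trade-off being benchmarked: the heap may hold duplicate entries (more space and pops), but each operation is simpler than a DecreaseKey on an addressable heap.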