Results 1–10 of 13
A Survey of Adaptive Sorting Algorithms
, 1992
Abstract (Cited by 65, 3 self)
Introduction and Survey; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems – Sorting and Searching; E.5 [Data]: Files – Sorting/searching; G.3 [Mathematics of Computing]: Probability and Statistics – Probabilistic algorithms; E.2 [Data Storage Representation]: Composite structures, linked representations. General Terms: Algorithms, Theory. Additional Key Words and Phrases: Adaptive sorting algorithms, Comparison trees, Measures of disorder, Nearly sorted sequences, Randomized algorithms. CONTENTS: Introduction (I.1 Optimal adaptivity, I.2 Measures of disorder, I.3 Organization of the paper); 1. Worst-case adaptive (internal) sorting algorithms (1.1 Generic Sort, 1.2 Cook-Kim division, 1.3 Partition Sort, 1.4 Exponential Search, 1.5 Adaptive Merging); 2. Expected-case adaptiv
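The survey's "measures of disorder" quantify how far a sequence is from sorted. As a minimal illustration (not taken from the survey itself), two common measures, Runs (number of maximal ascending runs) and Inv (number of inversions), can be computed directly:

```python
def runs(xs):
    """Runs(X): number of maximal ascending runs.
    A sorted sequence has 1 run; a strictly descending one has n."""
    return 1 + sum(1 for a, b in zip(xs, xs[1:]) if a > b) if xs else 0

def inv(xs):
    """Inv(X): number of pairs (i, j) with i < j and xs[i] > xs[j].
    Naive O(n^2) count, kept simple for illustration."""
    n = len(xs)
    return sum(1 for i in range(n) for j in range(i + 1, n) if xs[i] > xs[j])
```

An adaptive algorithm in the survey's sense runs faster when such a measure is small, e.g. in O(n log(Inv(X)/n)) rather than O(n log n) time.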
An efficient output-sensitive hidden-surface removal algorithm for polyhedral terrains
, 1994
Abstract (Cited by 36, 1 self)
In this paper, we present an algorithm for hidden surface removal for a class of polyhedral surfaces which have the property that they can be ordered relatively quickly. For example, our results apply directly to terrain maps. A distinguishing feature of our algorithm is that its running time is sensitive to the actual size of the visible image, rather than the total number of intersections in the image plane, which can be much larger than the visible image. The time complexity of this algorithm is O((k + n) log² n), where n and k are, respectively, the input and the output sizes. Thus, in a significant number of situations this will be faster than the worst-case optimal algorithms, which have running time of Ω(n²) irrespective of the output size.
Algorithm Design and Software Libraries: Recent Developments in the LEDA Project
In Proc. IFIP 12th World Computer Congress
, 1992
Abstract (Cited by 12, 0 self)
LEDA (Library of Efficient Data Types and Algorithms) is an ongoing project which aims to build a library of the efficient data structures and algorithms used in combinatorial computing [12]. We discuss three recent aspects of the project: The cost of flexibility, implementation parameters, and augmented trees.
Perfect hashing for network applications
 in IEEE Symposium on Information Theory
, 2006
Abstract (Cited by 10, 1 self)
Abstract — Hash tables are a fundamental data structure in many network applications, including route lookups, packet classification and monitoring. Often part of the data path, they need to operate at wire speed. However, several associative memory accesses are needed to resolve collisions, making them slower than required. This motivates us to consider minimal perfect hashing schemes, which reduce the number of memory accesses to just 1 and are also space-efficient. Existing perfect hashing algorithms are not tailored for network applications because they take too long to construct and are hard to implement in hardware. This paper introduces a hardware-friendly scheme for minimal perfect hashing, with space requirement approaching 3.7 times the information-theoretic lower bound. Our construction is several orders of magnitude faster than existing perfect hashing schemes. Instead of using the traditional mapping-partitioning-searching methodology, our scheme employs a Bloom filter, which is known for its simplicity and speed. We extend our scheme to the dynamic setting, thus handling insertions and deletions.
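The abstract does not detail the construction, but the Bloom filter it builds on is easy to sketch. The following is a generic Bloom filter using double hashing to derive k bit positions from one digest; it is an illustration of the underlying primitive, not the paper's hardware-friendly perfect hashing scheme:

```python
import hashlib

class BloomFilter:
    """Probabilistic set membership: no false negatives, rare false positives."""

    def __init__(self, m_bits, k_hashes):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, item):
        # Double hashing: positions h1 + i*h2 (mod m) for i = 0..k-1,
        # both halves taken from a single SHA-256 digest.
        d = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1  # force odd step
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] >> (p % 8) & 1
                   for p in self._positions(item))
```

With m bits and k hash functions, a lookup touches k bit positions regardless of collisions, which is what makes the structure attractive at wire speed.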
Sorting in-place with a worst-case complexity of n log n − 1.3n + O(log n) comparisons and εn log n + O(1) transports
 LNCS
, 1992
Abstract (Cited by 8, 0 self)
First we present a new variant of Mergesort which needs only 1.25n space, because it reuses space that becomes available within the current stage. It needs no more comparisons than classical Mergesort. The main result is an easy-to-implement method of iterating the procedure in-place, starting by sorting 4/5 of the elements. This keeps the additional transport costs linear, and only very few comparisons are lost, so that n log n − 0.8n comparisons are needed. We show that the number of comparisons can be improved further by sorting blocks of constant length with MergeInsertion before starting the algorithm. Another improvement is to start the iteration with a better version, which needs only (1+ε)n space and again O(n) additional transports. The result is that the bound can be improved, theoretically, to n log n − 1.3289n comparisons in the worst case, close to the theoretical lower bound of n log n − 1.443n. The total number of transports in all these versions can be reduced to εn log n + O(1) for any ε > 0.
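To make the baseline for these bounds concrete, here is a comparison-counting version of classical top-down Mergesort (a generic sketch, not the paper's in-place variant). Its worst case is n⌈log₂ n⌉ − 2^⌈log₂ n⌉ + 1 comparisons, which the paper improves toward n log n − 1.3289n:

```python
def mergesort_count(xs):
    """Top-down mergesort returning (sorted_list, number_of_comparisons)."""
    if len(xs) <= 1:
        return xs[:], 0
    mid = len(xs) // 2
    left, cl = mergesort_count(xs[:mid])
    right, cr = mergesort_count(xs[mid:])
    out, c, i, j = [], cl + cr, 0, 0
    while i < len(left) and j < len(right):
        c += 1                      # one element comparison per merge step
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out += left[i:] + right[j:]     # leftovers cost no comparisons
    return out, c
```

For n = 5 the worst-case count is 5·3 − 8 + 1 = 8 comparisons.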
Tight Bounds for Searching a Sorted Array of Strings
, 2000
Abstract (Cited by 7, 0 self)
Given a k-character query string and an array of n strings arranged in lexicographical order, computing the rank of the query string among the n strings or deciding whether it occurs in the array requires the inspection of 1
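A plain binary search illustrates the baseline that such character-inspection bounds refine: each probe may re-scan the query from its first character, so up to Θ(k log n) characters can be inspected. A sketch that counts inspections (illustrative only; the helper name `rank_and_cost` is hypothetical, and optimal strategies are more subtle):

```python
def rank_and_cost(query, arr):
    """Binary search for query's rank in sorted arr, counting characters
    inspected. The count is i matched characters plus at most one
    mismatching character per probe (a slight overcount on exact matches)."""
    lo, hi, inspected = 0, len(arr), 0
    while lo < hi:
        mid = (lo + hi) // 2
        s = arr[mid]
        i = 0
        while i < len(query) and i < len(s) and query[i] == s[i]:
            i += 1
        inspected += i + 1
        if i < len(query) and (i == len(s) or query[i] > s[i]):
            lo = mid + 1            # query is strictly greater than s
        else:
            hi = mid                # query <= s (prefix case included)
    return lo, inspected
```

The returned rank agrees with `bisect.bisect_left`; the inspection counter shows why restarting comparisons at character 0 is wasteful when many array strings share long prefixes with the query.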
Sorting and/by Merging Finger Trees
 In Algorithms and Computation: Third International Symposium, ISAAC ’92
, 1992
Abstract (Cited by 6, 0 self)
We describe a sorting algorithm that is optimally adaptive with respect to several important measures of presortedness. In particular, the algorithm requires O(n + k log k) time on n-sequences X that have a longest ascending subsequence of length n − k and for which Rem(X) = k; O(n log(k/n)) time on sequences with k inversions; and O(n log k) time on sequences that can be decomposed into k monotone shuffles. The algorithm makes use of an adaptive merging operation that can be implemented using finger search trees.

1 Introduction. An adaptive algorithm is one which requires fewer resources to solve 'easy' problem instances than to solve 'hard' ones. For sorting, an adaptive algorithm should run in O(n) time if presented with a sorted n-sequence, and in O(n log n) time for all n-sequences, with the time for any particular sequence depending upon the 'nearness' of the sequence to being sorted. Mannila [7] established the notion of a measure of presortedness to quantify the disord...
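Rem(X), used above, is the fewest elements whose removal leaves a sorted sequence, i.e. n minus the length of a longest ascending subsequence. It can be computed with the standard patience-sorting technique (an illustration of the measure, not the paper's sorting algorithm):

```python
from bisect import bisect_right

def rem(xs):
    """Rem(X) = n - length of a longest ascending subsequence.
    bisect_right lets equal elements extend a subsequence (non-strict
    ascent, an assumption made here for illustration)."""
    tails = []  # tails[i] = smallest tail of an ascending subsequence of length i+1
    for x in xs:
        i = bisect_right(tails, x)
        if i == len(tails):
            tails.append(x)         # x extends the longest subsequence so far
        else:
            tails[i] = x            # x gives a smaller tail for length i+1
    return len(xs) - len(tails)
```

A sorted sequence has Rem(X) = 0; a strictly descending one has Rem(X) = n − 1, matching the intuition that it is maximally disordered under this measure.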
Optimal Median Smoothing
, 1994
Abstract (Cited by 2, 0 self)
Median smoothing of a series of data values is considered. Naive programming of such an algorithm would result in a large amount of computation, especially when the series of data values is long. By maintaining a heap structure that is updated while moving along the data, we obtain an optimal median smoothing algorithm.
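As a simple point of comparison for the heap-based scheme (which the abstract only sketches), a windowed median can be maintained in a sorted list, costing O(w) per step versus a heap's O(log w). A sketch with a hypothetical function name, leaving the edge values unsmoothed:

```python
from bisect import insort, bisect_left

def median_smooth(data, w):
    """Median filter with odd window width w over data (len(data) >= w).
    The window is kept as a sorted list; each step removes the outgoing
    value and inserts the incoming one. Edge elements are copied through."""
    assert w % 2 == 1 and len(data) >= w
    half = w // 2
    window = sorted(data[:w])
    out = list(data[:half])             # leading edge, unsmoothed
    out.append(window[half])            # median of the first window
    for i in range(w, len(data)):
        window.pop(bisect_left(window, data[i - w]))  # drop outgoing value
        insort(window, data[i])                        # insert incoming value
        out.append(window[half])
    out += data[len(data) - half:]      # trailing edge, unsmoothed
    return out
```

For example, smoothing [1, 9, 2, 8, 3] with w = 3 replaces the interior spikes 9 and 8 by the window medians 2 and 8 respectively, yielding [1, 2, 8, 3, 3].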
Using Space Filling Curves for Efficient Contact Searching
 In Proc. IMACS
, 2000
Abstract (Cited by 1, 1 self)
An efficient contact search for dynamic explicit finite element (FE) simulations with several moving bodies is essential to avoid unacceptable costs. The time spent in contact algorithms is mainly determined by the cost of the search phase. We present a variant of the position code algorithm for efficient global contact search. Instead of a row-wise ordering we propose a numbering that follows a space-filling curve. An analysis proves that the new ordering is more efficient in the case of long surface segments. The two variants of the position code algorithm, and another widely used algorithm based on a hierarchical ordering, are implemented in an industrial FE code and applied to problems in fastening and demolition technology. Experimental results show that the presented method is well suited for all types of meshes and meets the high efficiency demands of this field, while the other algorithms show disadvantages in some cases. Key words: finite elements, contact ...
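The abstract does not say which space-filling curve is used; the Morton (Z-order) curve is one common choice and shows the core mechanism, replacing row-wise position codes by bit-interleaved ones so that nearby cells get nearby keys (an assumption for illustration, not necessarily the paper's curve):

```python
def morton2d(x, y, bits=16):
    """Interleave the low `bits` bits of grid coordinates x and y into a
    Z-order key. Cells numbered along this curve stay spatially clustered,
    unlike a row-wise numbering y * width + x."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)       # x bits land on even positions
        code |= ((y >> i) & 1) << (2 * i + 1)   # y bits land on odd positions
    return code
```

Sorting surface segments by such keys means that a range of consecutive codes covers a compact 2-D region, which is what makes the global search phase cheaper for long segments.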
Backtracking
Abstract
Contents: 1 Introduction (3); 2 Models of Computation (6); 3 The Set Union Problem (9); 4 The Worst-Case Time Complexity of a Single Operation (15); 5 The Set Union Problem with Deunions (18); 6 Split and the Set Union Problem on Intervals (22); 7 The Set Union Problem with Unlimited Backtracking (26).

1 Introduction. An equivalence relation R on a finite set S is a binary relation that is reflexive, symmetric, and transitive. That is, for s, t, and u in S, we have that sRs; if sRt then tRs; and if sRt and tRu then sRu. Set S is partitioned by R into equivalence classes, where each class contains all and only the elements that obey R pairwise. Many computational problems involve representing, modifying and tracking the evolution of equivalenc
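The set union problem with deunions, listed in the contents, can be illustrated by union-by-rank plus an undo stack; path compression is deliberately omitted so that each union stays revertible. This is a generic sketch, not the survey's data structures:

```python
class UnionFindWithDeunion:
    """Disjoint sets supporting union, find, and deunion (undo the most
    recent union). Union by rank only: no path compression, so every
    union changes at most one parent pointer and one rank, both logged."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n
        self.history = []

    def find(self, x):
        while self.parent[x] != x:      # walk to the root, no compression
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            self.history.append(None)   # no-op union, still undoable
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra             # attach the shallower tree under ra
        self.parent[rb] = ra
        grew = self.rank[ra] == self.rank[rb]
        if grew:
            self.rank[ra] += 1
        self.history.append((rb, ra, grew))

    def deunion(self):
        last = self.history.pop()
        if last is None:
            return
        rb, ra, grew = last
        self.parent[rb] = rb            # detach the subtree again
        if grew:
            self.rank[ra] -= 1
```

Because each union records exactly what it changed, deunion runs in O(1), while find stays O(log n) thanks to union by rank.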