Results 1 - 7 of 7
A Survey of Adaptive Sorting Algorithms
, 1992
Abstract
Cited by 65 (3 self)
Introduction and Survey; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems - Sorting and Searching; E.5 [Data]: Files - Sorting/searching; G.3 [Mathematics of Computing]: Probability and Statistics - Probabilistic algorithms; E.2 [Data Storage Representation]: Composite structures, linked representations. General Terms: Algorithms, Theory. Additional Key Words and Phrases: Adaptive sorting algorithms, Comparison trees, Measures of disorder, Nearly sorted sequences, Randomized algorithms.
CONTENTS
INTRODUCTION
I.1 Optimal adaptivity
I.2 Measures of disorder
I.3 Organization of the paper
1. WORST-CASE ADAPTIVE (INTERNAL) SORTING ALGORITHMS
1.1 Generic Sort
1.2 Cook-Kim division
1.3 Partition Sort
1.4 Exponential Search
1.5 Adaptive Merging
2. EXPECTED-CASE ADAPTIV ...
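The survey's notion of adaptivity rests on measures of disorder. As an illustrative sketch (not code from the survey), two classic measures can be computed directly: Runs, the number of descents, and Inv, the number of inversions. A sorted input scores zero on both, and an optimally adaptive algorithm's running time approaches O(n) as such a measure approaches zero.

```python
def runs(seq):
    """Runs measure: number of descents, i.e. boundaries between
    maximal ascending runs (0 for a sorted sequence)."""
    return sum(1 for i in range(1, len(seq)) if seq[i] < seq[i - 1])

def inv(seq):
    """Inv measure: number of inversions, pairs (i, j) with i < j
    and seq[i] > seq[j] (quadratic scan, kept simple on purpose)."""
    n = len(seq)
    return sum(1 for i in range(n) for j in range(i + 1, n) if seq[i] > seq[j])

assert runs([1, 2, 3, 4, 5]) == 0 and inv([1, 2, 3, 4, 5]) == 0
assert runs([2, 3, 1, 5, 4]) == 2 and inv([2, 3, 1, 5, 4]) == 3
```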
Compressed representations of permutations, and applications
 SYMPOSIUM ON THEORETICAL ASPECTS OF COMPUTER SCIENCE
Abstract
Cited by 19 (12 self)
We explore various techniques to compress a permutation π over n integers, taking advantage of ordered subsequences in π, while supporting its application π(i) and the application of its inverse π⁻¹(i) in small time. Our compression schemes yield several interesting byproducts, in many cases matching, improving or extending the best existing results on applications such as the encoding of a permutation in order to support iterated applications πᵏ(i) of it, of integer functions, and of inverted lists and suffix arrays.
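Before any compression, the interface in question is just a permutation with forward, inverse, and iterated application. A plain-array baseline (an illustrative sketch, not the paper's compressed scheme) makes the supported operations concrete:

```python
class Permutation:
    """Uncompressed baseline: stores pi and its inverse as plain
    arrays (2n words). The paper's schemes support the same
    operations in far fewer bits when pi has long ordered
    subsequences."""
    def __init__(self, pi):
        self.pi = list(pi)
        self.inv = [0] * len(pi)
        for i, v in enumerate(pi):
            self.inv[v] = i          # v = pi(i)  =>  pi^{-1}(v) = i

    def apply(self, i):              # pi(i)
        return self.pi[i]

    def inverse(self, j):            # pi^{-1}(j)
        return self.inv[j]

    def iterate(self, i, k):         # pi^k(i) by repeated application
        for _ in range(k):
            i = self.pi[i]
        return i

p = Permutation([2, 0, 3, 1])        # 0-based for convenience
assert p.apply(0) == 2
assert p.inverse(2) == 0
assert p.iterate(0, 2) == 3          # pi(pi(0)) = pi(2) = 3
```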
Alphabet Partitioning for Compressed Rank/Select and Applications
Abstract
Cited by 18 (13 self)
Abstract. We present a data structure that stores a string s[1..n] over the alphabet [1..σ] in nH0(s) + o(n)(H0(s)+1) bits, where H0(s) is the zero-order entropy of s. This data structure supports the queries access and rank in time O(lg lg σ), and the select query in constant time. This result improves on previously known data structures using nH0(s) + o(n lg σ) bits, where on highly compressible instances the redundancy o(n lg σ) ceases to be negligible compared to the nH0(s) bits that encode the data. The technique is based on combining previous results through an ingenious partitioning of the alphabet, and is practical enough to be implementable. It applies not only to strings, but also to several other compact data structures. For example, we achieve (i) faster search times and lower redundancy for the smallest existing full-text self-index; (ii) compressed permutations π with times for π() and π⁻¹() improved to log-logarithmic; and (iii) the first compressed representation of dynamic collections of disjoint sets.
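The space bound is driven by the zero-order entropy H0(s). A short sketch (the standard definition, not the paper's data structure) computes it and the resulting nH0(s) term:

```python
from math import log2

def h0(s):
    """Zero-order empirical entropy:
    H0(s) = sum over symbols c of (n_c/n) * log2(n/n_c),
    where n_c is the number of occurrences of c in s."""
    n = len(s)
    counts = {}
    for c in s:
        counts[c] = counts.get(c, 0) + 1
    return sum((m / n) * log2(n / m) for m in counts.values())

assert h0("aaaa") == 0.0                    # one symbol: nothing to encode
assert abs(h0("abababab") - 1.0) < 1e-9     # two equally likely symbols: 1 bit/symbol
print(len("abracadabra") * h0("abracadabra"))  # ~ nH0(s), the data part of the bound
```

Skewed strings have H0(s) well below lg σ, which is exactly when the o(n lg σ) redundancy of earlier structures stops being negligible.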
On Compressing Permutations and Adaptive Sorting
, 2013
Abstract
Cited by 3 (3 self)
We prove that, given a permutation π over [1..n] formed of nRuns sorted blocks of sizes given by the vector R = ⟨r1, ..., r_nRuns⟩, there exists a compressed data structure encoding π in n(1 + H(R)) = n + Σ_{i=1}^{nRuns} r_i log₂(n/r_i) ≤ n(1 + log₂ nRuns) bits while supporting access to the values of π() and π⁻¹() in time O(log nRuns / log log n) in the worst case and O(H(R) / log log n) on average, when the argument is uniformly distributed over [1..n]. This data structure can be constructed in time O(n(1 + H(R))), which yields an improved adaptive sorting algorithm. Similar results on compressed data structures for permutations and adaptive sorting algorithms are proved for other preorder measures of practical and theoretical interest.
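The bound n(1 + H(R)) can be checked numerically. The sketch below (hypothetical helper names, not from the paper) decomposes a permutation into maximal ascending runs and evaluates n + Σ r_i log₂(n/r_i), confirming it never exceeds n(1 + log₂ nRuns):

```python
from math import log2

def run_lengths(pi):
    """Lengths r_1, ..., r_nRuns of the maximal ascending runs of pi."""
    lengths, cur = [], 1
    for i in range(1, len(pi)):
        if pi[i] < pi[i - 1]:        # descent: a new run starts here
            lengths.append(cur)
            cur = 1
        else:
            cur += 1
    lengths.append(cur)
    return lengths

def bit_bound(pi):
    """n(1 + H(R)) = n + sum_i r_i * log2(n / r_i) bits."""
    n = len(pi)
    return n + sum(r * log2(n / r) for r in run_lengths(pi))

pi = [0, 1, 2, 6, 7, 3, 4, 5]                          # nRuns = 2
R = run_lengths(pi)
assert R == [5, 3]
assert bit_bound(pi) <= len(pi) * (1 + log2(len(R)))   # <= n(1 + log2 nRuns)
assert abs(bit_bound(list(range(8))) - 8) < 1e-9       # sorted: H(R) = 0, n bits
```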
LRM-Trees: Compressed Indices, Adaptive Sorting, and Compressed Permutations
Abstract
Cited by 3 (2 self)
Abstract. LRM-Trees are an elegant way to partition a sequence of values into sorted consecutive blocks, and to express the relative position of the first element of each block within a previous block. They were used to encode ordinal trees and to index integer arrays in order to support range minimum queries on them. We describe how they yield many other convenient results in a variety of areas: compressed succinct indices for range minimum queries on partially sorted arrays; a new adaptive sorting algorithm; and a compressed succinct data structure for permutations supporting direct and inverse application in time inversely proportional to the permutation’s compressibility.
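The decomposition LRM-Trees build on can be shown in a few lines. This sketch (illustrative only; it omits the tree itself and the cross-block positions the paper encodes) partitions a sequence into maximal sorted consecutive blocks:

```python
def sorted_blocks(values):
    """Split values into maximal non-decreasing consecutive blocks."""
    blocks, start = [], 0
    for i in range(1, len(values)):
        if values[i] < values[i - 1]:   # descent: close the current block
            blocks.append(values[start:i])
            start = i
    blocks.append(values[start:])
    return blocks

assert sorted_blocks([1, 4, 6, 2, 3, 5, 0, 7]) == [[1, 4, 6], [2, 3, 5], [0, 7]]
assert sorted_blocks([1, 2, 3]) == [[1, 2, 3]]   # already sorted: one block
```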
Efficient Fully-Compressed Sequence Representations
, 2010
Abstract
Cited by 2 (2 self)
We present a data structure that stores a sequence s[1..n] over alphabet [1..σ] in nH0(s) + o(n)(H0(s)+1) bits, where H0(s) is the zero-order entropy of s. This structure supports the queries access, rank and select, which are fundamental building blocks for many other compressed data structures, in worst-case time O(lg lg σ) and average time O(lg H0(s)). The worst-case complexity matches the best previous results, yet these had been achieved with data structures using nH0(s) + o(n lg σ) bits. On highly compressible sequences the o(n lg σ) bits of the redundancy may be significant compared to the nH0(s) bits that encode the data. Our representation, instead, compresses the redundancy as well. Moreover, our average-case complexity is unprecedented. Our technique is based on partitioning the alphabet into characters of similar frequency. The subsequence corresponding to each group can then be encoded using fast uncompressed representations without harming the overall compression ratios, even in the redundancy. The result also improves upon the best current compressed representations of several other data structures. For example, we achieve (i) compressed redundancy, retaining the best time complexities, for the smallest existing full-text self-indexes; (ii) compressed permutations π with times for π() and π⁻¹() improved to log-logarithmic; and (iii) the first compressed representation of dynamic collections of disjoint sets. We also point out various applications to inverted indexes, suffix arrays, binary relations, and data compressors. Our structure is practical on large alphabets. Our experiments show that, as predicted by theory, it dominates the space/time tradeoff map of all the sequence representations, both in synthetic and application scenarios.
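Access, rank and select have simple reference semantics, which a naive uncompressed class (an illustrative sketch; the paper's structure answers the same queries in O(lg lg σ) time within compressed space) makes explicit:

```python
class SequenceQueries:
    """Naive O(n)-time reference implementation of the three queries."""
    def __init__(self, s):
        self.s = s

    def access(self, i):
        """s[i] (0-based here for simplicity)."""
        return self.s[i]

    def rank(self, c, i):
        """Number of occurrences of c in s[0..i]."""
        return self.s[:i + 1].count(c)

    def select(self, c, j):
        """Position of the j-th occurrence of c (j >= 1)."""
        seen = 0
        for i, x in enumerate(self.s):
            if x == c:
                seen += 1
                if seen == j:
                    return i
        raise ValueError("fewer than j occurrences of c")

q = SequenceQueries("alabar")
assert q.access(2) == "a"
assert q.rank("a", 4) == 3
assert q.select("a", 2) == 2
```

rank and select are inverses in the usual sense: select(c, rank(c, i)) returns the position of the last occurrence of c at or before i.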
An Adaptive Generic Sorting Algorithm that Uses Variable Partitioning
, 1996
Abstract
Cited by 1 (1 self)
A sorting algorithm is adaptive if its run time, for inputs of the same size n, varies smoothly from O(n) to O(n log n) as the disorder of the input varies. Recently, researchers have focused their attention on sorting algorithms that are optimally adaptive with respect to several measures of disorder, since the type of disorder in the input is usually not known. They have demonstrated a need to develop tools for the construction of sorting algorithms that are adaptive with respect to large classes of measures. We present Partition Sort, a new generic sorting algorithm for which we can establish adaptivity. It is based on divide-and-conquer in which the number of subproblems at each level of recursion depends on the disorder of the input. We apply this generic algorithm to obtain optimal adaptivity for several measures of disorder. In addition, we introduce Randomized Partition Sort, a randomized version of Partition Sort, that not only is adaptive in the expected case, but ...
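The adaptivity notion this abstract defines, run time varying smoothly from O(n) on sorted inputs to O(n log n) as disorder grows, can be illustrated with natural mergesort (a standard run-based algorithm, shown here as a sketch; it is not Partition Sort):

```python
from heapq import merge

def natural_merge_sort(seq):
    """Adaptive sort sketch: one pass splits seq into maximal
    ascending runs, then runs are merged pairwise. A sorted input
    is a single run, so the sort finishes after the O(n) scan;
    more disorder means more runs and more merge rounds, up to
    O(n log n) overall."""
    runs, start = [], 0
    for i in range(1, len(seq)):
        if seq[i] < seq[i - 1]:
            runs.append(seq[start:i])
            start = i
    runs.append(seq[start:])
    while len(runs) > 1:             # each round halves the number of runs
        runs = [list(merge(runs[k], runs[k + 1] if k + 1 < len(runs) else []))
                for k in range(0, len(runs), 2)]
    return runs[0]

assert natural_merge_sort([2, 3, 1, 5, 4]) == [1, 2, 3, 4, 5]
assert natural_merge_sort(list(range(6))) == list(range(6))   # one run: no merging
```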