Results 1–10 of 10
Optimal space-time dictionaries over an unbounded universe with flat implicit trees
, 2003
In the classical dictionary problem, a set of n distinct keys over an unbounded and ordered universe is maintained under insertions and deletions of individual keys while supporting search operations. An implicit dictionary has the additional constraint of occupying merely the space required to store the n keys, that is, exactly n contiguous words of space in total. All that is known is the starting position of the memory segment hosting the keys, as the rest of the information is implicitly encoded by a suitable permutation of the keys. This paper describes the flat implicit tree, which is the first implicit dictionary requiring O(log n) time per search and update operation.
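The permutation trick this abstract alludes to can be sketched in a few lines: since the keys are distinct, the relative order of each pair of keys encodes one bit, so bookkeeping information can live inside the key array itself. A minimal illustration (function names are ours, not the paper's):

```python
def encode_bits(keys, bits):
    """Pair up the sorted distinct keys; an inverted pair stores a 1,
    an in-order pair stores a 0. Requires len(keys) >= 2 * len(bits)."""
    keys = sorted(keys)
    out = []
    for i, b in enumerate(bits):
        lo, hi = keys[2 * i], keys[2 * i + 1]
        out.extend((hi, lo) if b else (lo, hi))
    return out

def decode_bits(arr):
    """Read each bit back from whether a consecutive pair is inverted."""
    return [1 if arr[i] > arr[i + 1] else 0 for i in range(0, len(arr), 2)]
```

An implicit dictionary pushes this idea much further, encoding entire tree structures in the ordering of the keys while still supporting fast search.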
Implicit B-trees: A New Data Structure for the Dictionary Problem
, 2004
An implicit data structure for the dictionary problem maintains n data values in the first n locations of an array in such a way that it efficiently supports the operations insert, delete and search. No information other than that in O(1) memory cells and in the input data is to be retained; and the only operations performed on the data values (other than reads and writes) are comparisons. This paper describes the Implicit B-tree, a new data structure supporting these operations in O(log_B n) block transfers as in regular B-trees, under the realistic assumption that a block stores B = Ω(log n) keys, so that reporting r consecutive keys in sorted order has a cost of O(log_B n + r/B) block transfers. En route, a number of space-efficient techniques for handling segments of a large array in a memory hierarchy are developed. Being implicit, the proposed data structure occupies exactly ⌈n/B⌉ blocks of memory after each update, where n is the number of keys after each update and B is the number of keys contained in a memory block. In main memory, the time complexity of the operations is …
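A rough feel for the O(log_B n) bound, as plain arithmetic rather than the paper's construction (all numbers below are illustrative):

```python
import math

# With blocks of B keys, a B-tree-style search touches about log_B(n)
# blocks, versus the ~log2(n) blocks a naive binary search over an
# unblocked array can touch in the worst case.

def block_transfers_btree(n, B):
    return math.ceil(math.log(n, B))

def block_transfers_binary(n):
    return math.ceil(math.log2(n))

def blocks_occupied(n, B):
    # The implicit structure occupies exactly ceil(n / B) blocks.
    return math.ceil(n / B)
```

For example, with n = 1,000,000 keys and B = 1,024 keys per block, a B-tree search reads about 2 blocks where a binary search may read about 20, and the structure occupies 977 blocks.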
Optimal Time-Space Trade-Offs for Non-Comparison-Based Sorting
We study the problem of sorting n integers of w bits on a unit-cost RAM with word size w, and in particular consider the time-space tradeoff (product of time and space in bits) for this problem. For comparison-based algorithms, the time-space complexity is known to be Θ(n²). A result of Beame shows that the lower bound also holds for non-comparison-based algorithms, but no algorithm has met this for time below the comparison-based Ω(n lg n) lower bound. We show that if sorting within some time bound T̃ is possible, then time T = O(T̃ + n lg* n) can be achieved with high probability using space S = O(n²/T + w), which is optimal. Given a deterministic priority queue using amortized time t(n) per operation and space n^{O(1)}, we provide a deterministic algorithm sorting in time T = O(n(t(n) + lg* n)) with S = O(n²/T + w). Both results require that w ≤ n^{1−Ω(1)}. Using existing priority queues and sorting algorithms, this implies that we can deterministically sort time-space optimally in time Θ(T) for T ≥ n(lg lg n)², and with high probability for T ≥ n lg lg n. Our results imply that recent space lower bounds for deciding element distinctness in o(n lg n) time are nearly tight.
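The T · S ≈ n² tradeoff can be illustrated with a toy comparison-based scheme, not the paper's RAM algorithm: with working space for only s keys, sorted order can be emitted in about n/s passes over the input, each pass selecting the next s smallest elements (distinct values are assumed for simplicity):

```python
import heapq

def sorted_with_bounded_space(data, s):
    """Emit the distinct values of `data` in sorted order using O(s)
    working space: each pass keeps the s smallest elements strictly
    greater than the largest value output so far. Roughly n/s passes
    of O(n log s) work each, so time * space is on the order of n^2."""
    last = float('-inf')
    out = []
    while len(out) < len(data):
        batch = heapq.nsmallest(s, (x for x in data if x > last))
        out.extend(batch)
        last = batch[-1]
    return out
```

Shrinking s raises the number of passes proportionally, which is the tradeoff curve in miniature; the paper's point is that for integer inputs this curve is optimal even without the comparison restriction.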
Abstract
, 2005
Questions about order versus disorder in systems and models have been fascinating scientists over the years. In Computer Science, order is intimately related to sorting, commonly meant as the task of arranging keys in increasing or decreasing order with respect to an underlying total order relation. The sorted organization is amenable to searching a set of n keys, since each search requires Θ(log n) comparisons in the worst case, which is optimal if the cost of a single comparison can be considered a constant. Nevertheless, we prove that disorder implicitly provides more information than order does. For the general case of searching an array of multidimensional keys, whose comparison cost is proportional to their length (and hence cannot be considered a constant), we demonstrate that “suitable” disorder gives better bounds than those derivable by using the natural lexicographic order. We start out from previous work done by Andersson, Hagerup, Håstad and Petersson [SIAM Journal on Computing, 30(2), 2001], who proved that k log log n …
Selection from Structured Data Sets
Electronic Colloquium on Computational Complexity, Report No. 85 (2004)
A large body of work studies the complexity of selecting the jth largest element in an arbitrary set of n elements (a.k.a. the select(j) operation). In this work, we study the complexity of select in data that is partially structured by an initial preprocessing stage and in a data structure that is dynamically maintained. We provide lower and upper bounds in the comparison-based model. For preprocessing, we show that making at most α(n) · n comparisons during preprocessing (before the rank j is provided) implies that select(j) must make at least (2 + ε)(n/2^{α(n)}) comparisons in the worst case, where ε > 2⁻⁴⁰. For dynamically maintained data structures, we show that if the amortized number of comparisons executed with each insert operation is bounded by i(n), then select(j) must make at least (2 + ε)(n/2^{i(n)}) comparisons in the worst case, no matter how costly the other data structure operations are. When only insert is used, we provide a lower bound on the complexity of find-median. This lower bound is much higher than the complexity of maintaining the minimum, thus formalizing the intuitive difference between find-min and find-median. Finally, we present a new explicit adversary for comparison-based algorithms and use it to show adversary lower bounds for selection problems. We demonstrate the power of this adversary by improving the best known lower bound for the find-any operation in a data structure and by slightly improving the best adversary lower bound for sorting.
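The find-min versus find-median gap that the abstract formalizes can be sketched with standard heaps; this is a textbook illustration, not the paper's adversary argument:

```python
import heapq

class MinTracker:
    """Maintaining the minimum is cheap: one heap, O(log n) insert,
    O(1) find-min."""
    def __init__(self):
        self.h = []
    def insert(self, x):
        heapq.heappush(self.h, x)
    def find_min(self):
        return self.h[0]

class MedianTracker:
    """Maintaining the median needs more bookkeeping: two balanced
    heaps, with every insert paying to keep the halves in order."""
    def __init__(self):
        self.lo = []  # max-heap (negated values) holding the smaller half
        self.hi = []  # min-heap holding the larger half
    def insert(self, x):
        heapq.heappush(self.lo, -x)
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        if len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))
    def find_median(self):
        return -self.lo[0]
```

The paper's contribution is a lower bound showing this asymmetry is inherent to comparison-based computation, not an artifact of heap-based designs.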
Optimal implicit dictionaries over . . .
, 2005
An array of n distinct keys can be sorted for logarithmic searching or can be organized as a heap for logarithmic updating, but it is unclear how to attain logarithmic time for both searching and updating. This natural question dates back to the heap of Williams and Floyd in the sixties and relates to the fundamental issue of whether additional space beyond that for the keys gives more computational power in dictionaries, and of how data ordering helps. Implicit data structures were introduced in the eighties with this goal, providing the best bound of O(log² n) time, until a recent result showing O(log² n / log log n) time. In this paper we describe the flat implicit tree, which is the first data structure obtaining O(log n) time for search and (amortized) update using an array of n cells.
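The tension this abstract describes can be seen in miniature with Python's standard containers: a sorted array gives O(log n) search but linear-time insertion, while a binary heap gives O(log n) insertion but only linear-time membership search:

```python
import heapq
from bisect import bisect_left, insort

# Sorted array: O(log n) search, but each insort shifts elements (O(n)).
arr = []
for x in [7, 2, 9, 4]:
    insort(arr, x)
found_in_arr = arr[bisect_left(arr, 4)] == 4  # binary search, O(log n)

# Binary heap: O(log n) insert, but membership needs a full scan (O(n)).
heap = []
for x in [7, 2, 9, 4]:
    heapq.heappush(heap, x)
found_in_heap = 4 in heap  # linear scan
```

The flat implicit tree achieves O(log n) for both operations at once, and does so without any storage beyond the n array cells themselves.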
Sliding Windows with Limited Storage
, 2013
The results of this paper are superseded by the paper at: http://arxiv.org/abs/1309.3690. We consider time-space tradeoffs for exactly computing frequency moments and order statistics over sliding windows [16]. Given an input of length 2n − 1, the task is to output the function of each window of length n, giving n outputs in total. Computations over sliding windows are related to direct sum problems except that inputs to instances almost completely overlap.
• We show an average-case and randomized time-space tradeoff lower bound of T · S ∈ Ω(n²) for multiway branching programs, and hence standard RAM and word-RAM models, to compute the number of distinct elements, F_0, in sliding windows over alphabet [n]. The same lower bound holds for computing the low-order bit of F_0 and computing any frequency moment F_k for k ≠ 1. We complement this lower bound with a T · S ∈ Õ(n²) deterministic RAM algorithm for exactly computing F_k in sliding windows.
• We show time-space separations between the complexity of sliding-window element distinctness and that of sliding-window F_0 mod 2 computation. In particular, for alphabet [n] there is a very simple errorless sliding-window algorithm for element distinctness that runs in O(n) time on average and uses O(log n) space.
• We show that any algorithm for a single element distinctness instance can be extended to an algorithm for the sliding-window version of element distinctness with at most a polylogarithmic increase in the time-space product.
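The sliding-window F_0 task from the abstract's setup can be stated concretely; this straightforward counting solution uses O(n) space, far above the low-space regime the paper's lower bounds address:

```python
from collections import Counter

def sliding_f0(xs, n):
    """Number of distinct elements in every length-n window of xs.
    For an input of length 2n - 1 this yields n outputs, matching the
    abstract's setup. O(1) amortized update per window, O(n) space."""
    counts = Counter(xs[:n])
    out = [len(counts)]
    for i in range(n, len(xs)):
        counts[xs[i]] += 1          # element entering the window
        old = xs[i - n]             # element leaving the window
        counts[old] -= 1
        if counts[old] == 0:
            del counts[old]
        out.append(len(counts))
    return out
```

The paper's T · S ∈ Ω(n²) bound says that, unlike this Counter-based approach, no algorithm can make F_0 over sliding windows simultaneously fast and space-frugal.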