Results

**11 - 19** of **19**

### Finger Indexed Sets: New Approaches

2008

Abstract

In the particular case where insertions/deletions occur at the tail of a given set S of n one-dimensional elements, we present a simpler and more concrete algorithm than that of [Anderson, 2007], achieving the same (but amortized) upper bound of O(√(log d / log log d)) for finger-searching queries, where d is the number of sorted keys between the finger element and the target element we are looking for. Furthermore, in the general case, where insertions/deletions may occur anywhere, we present a new randomized algorithm achieving the same expected time bounds. Even though the new solutions achieve the optimal bounds only in the amortized or expected sense, their simplicity is of great importance because of the practical merits it brings.
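The O(√(log d / log log d)) bound in this abstract is beyond a quick sketch, but the classic O(log d) finger search it improves upon can be illustrated with a galloping search on a sorted array. The `finger_search` helper below is illustrative, not the paper's algorithm:

```python
import bisect

def finger_search(keys, finger, target):
    """Search the sorted list `keys` for `target`, starting from index
    `finger`. Galloping doubles the step size away from the finger and
    then binary-searches the bracketed window, so the total cost is
    O(log d) comparisons, where d is the distance finger-to-target."""
    n = len(keys)
    if target >= keys[finger]:
        lo, step = finger, 1
        # Gallop right until keys[lo + step] >= target (or we hit the end).
        while lo + step < n and keys[lo + step] < target:
            step *= 2
        return bisect.bisect_left(keys, target, lo, min(lo + step, n))
    else:
        hi, step = finger, 1
        # Gallop left until keys[hi - step] <= target (or we hit index 0).
        while hi - step >= 0 and keys[hi - step] > target:
            step *= 2
        return bisect.bisect_left(keys, target, max(hi - step, 0), hi)
```

Because the window grows geometrically, the number of probes depends on the rank distance d rather than on n, which is the defining property of finger search.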

### Worst Case Efficient Data Structures for Priority Queues and Deques with Heap Order

Abstract

An efficient amortized data structure is one that ensures that the average time per operation spent on processing any sequence of operations is small. Amortized data structures typically have very non-uniform response times, i.e., individual operations can be occasionally and unpredictably slow, although the average time over a sequence is kept small by completing most of the other operations quickly. This makes amortized data structures unsuitable in many important contexts, such as real time systems, parallel programs, persistent data structures and interactive software. On the other hand, an efficient worst case data structure guarantees that every operation will be performed quickly.
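The non-uniform response times described above are easy to observe in the textbook doubling array. The `DynArray` sketch below (illustrative, not from the paper) records the cost of each append: most cost one unit, but each resize copies every element:

```python
class DynArray:
    """Append-only array with capacity doubling: amortized O(1) appends,
    but individual appends that trigger a resize cost Theta(n)."""

    def __init__(self):
        self.cap = 1
        self.data = [None]
        self.n = 0
        self.last_cost = 0  # elements touched by the most recent append

    def append(self, x):
        if self.n == self.cap:
            self.cap *= 2
            new = [None] * self.cap
            new[:self.n] = self.data  # O(n) copy: the rare slow operation
            self.data = new
            self.last_cost = self.n + 1
        else:
            self.last_cost = 1
        self.data[self.n] = x
        self.n += 1

a = DynArray()
costs = []
for i in range(16):
    a.append(i)
    costs.append(a.last_cost)
# `costs` is mostly 1 with spikes at each doubling; the total stays O(n),
# which is exactly the amortized-vs-worst-case gap the abstract describes.
```

A worst-case-efficient structure, by contrast, would spread the copying over many operations so that no single append spikes.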

### The Role of Lazy Evaluation in Amortized Data Structures

Abstract

Functional programmers have long debated the relative merits of strict versus lazy evaluation. Although lazy evaluation has many benefits [11], strict evaluation is clearly superior in at least one area: ease of reasoning about asymptotic complexity. Because of the unpredictable nature of lazy evaluation, it is notoriously difficult to reason about the complexity of algorithms in such a language. However, there are some algorithms based on lazy evaluation that cannot be programmed in (pure) strict languages without an increase in asymptotic complexity. We explore one class of such algorithms, amortized data structures, and describe techniques for reasoning about their complexity.
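A strict, ephemeral two-list queue shows the kind of amortized structure at issue. The sketch below (illustrative names, not the paper's code) has amortized O(1) operations only when each version is used once; in a persistent setting the expensive reversal can be replayed from many old versions, which is the problem Okasaki-style lazy, memoized suspensions solve:

```python
class BatchedQueue:
    """Two-list FIFO queue: enqueue pushes onto `back`; dequeue pops from
    `front`, reversing `back` into `front` only when `front` is empty.
    Amortized O(1) per operation in single-threaded (ephemeral) use."""

    def __init__(self):
        self.front, self.back = [], []

    def enqueue(self, x):
        self.back.append(x)  # always O(1)

    def dequeue(self):
        if not self.front:
            # Occasional O(n) reversal; its cost is charged to the
            # n cheap enqueues that built up `back`.
            self.front = self.back[::-1]
            self.back = []
        return self.front.pop()  # O(1)
```

In a strict purely functional language, sharing an old version and dequeuing from it repeatedly re-pays the reversal each time; a lazy language memoizes the suspended reversal so all versions share one payment, which is the asymptotic gap the abstract refers to.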

### Optimal Solutions for the Temporal Precedence Problem

Abstract

In this paper we consider the Temporal Precedence Problem on Pure Pointer Machines. This problem asks for the design of a data structure maintaining a set of stored elements and supporting two operations: insert and precedes. The operation insert(a) introduces a new element a into the structure, while the operation precedes(a, b) returns true iff element a was inserted before element b temporally. Ranjan et al. provided a solution with worst-case time complexity O(log log n) per operation and O(n log log n) space, where n is the number of elements inserted. They also demonstrated that the precedes operation has a lower bound of Ω(log log n) in the Pure Pointer Machine model of computation. In this paper we present two simple solutions with linear space and worst-case constant insertion time. In addition, we describe two algorithms that can handle the precedes(a, b) operation in O(log log d) time, where d is the temporal distance between the elements a and b.
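On a RAM, unlike the pure pointer machine the paper assumes, the problem is trivial with integer timestamps; the sketch below (hypothetical names, not the paper's data structure) only fixes the interface and shows why the Ω(log log n) lower bound is specific to the pointer-machine model, where such integer comparisons are unavailable:

```python
import itertools

class TemporalOrder:
    """Timestamp-based solution: O(1) insert and O(1) precedes.
    Relies on integer comparison, which the Pure Pointer Machine model
    forbids; on that model precedes needs Omega(log log n) time."""

    def __init__(self):
        self._clock = itertools.count()  # monotonically increasing stamps
        self._stamp = {}

    def insert(self, a):
        self._stamp[a] = next(self._clock)

    def precedes(self, a, b):
        # a precedes b iff a received the smaller insertion timestamp.
        return self._stamp[a] < self._stamp[b]
```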

### NEFOS: Rapid Cache-Aware Range Query Processing with Probabilistic Guarantees

Abstract

Abstract. We present NEFOS (NEsted FOrest of balanced treeS), a new cache-aware indexing scheme that supports insertions and deletions in O(1) worst-case block transfers for rebalancing operations (given an update position) and searching in O(log_B log n) expected block transfers, where B is the disk block size and n is the number of stored elements. The expected search bound holds with high probability for any (unknown) realistic input distribution. This constitutes an improvement over the O(log_B log n) expected search bound of the ISB-tree (Interpolation Search B-tree), since the latter holds with high probability only for the class of smooth input distributions. We call an unknown distribution realistic if smoothness need not hold over the whole data set but may still appear locally in small spatial neighborhoods. This covers a variety of real-life non-smooth distributions such as skew, Zipfian, power-law, beta, etc., as verified by an accompanying experimental study. Moreover, NEFOS is a B-parametrized concrete structure that works in both the I/O and RAM models without any kind of transformation or adaptation. It is also the first time an expected sub-logarithmic search bound has been achieved for a broad family of non-smooth input distributions. Keywords: Data Structures, Data Management Algorithms.
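The distribution-sensitive flavor of the search bound can be illustrated with plain interpolation search, which achieves expected O(log log n) probes on near-uniform keys; this is background intuition for the ISB-tree/NEFOS line of work, not the NEFOS structure itself:

```python
def interpolation_search(keys, target):
    """Search the sorted integer list `keys` for `target` by estimating
    the target's position from the key values. Expected O(log log n)
    probes on (near-)uniform data; O(n) worst case on adversarial data.
    Returns the index of `target`, or -1 if absent."""
    lo, hi = 0, len(keys) - 1
    while lo <= hi and keys[lo] <= target <= keys[hi]:
        if keys[hi] == keys[lo]:
            pos = lo  # all keys in range are equal; avoid division by zero
        else:
            # Linear interpolation: guess where `target` sits in [lo, hi].
            pos = lo + (target - keys[lo]) * (hi - lo) // (keys[hi] - keys[lo])
        if keys[pos] == target:
            return pos
        if keys[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1
```

Structures like the ISB-tree wrap this idea in a tree of bounded fan-out so the probe count translates into O(log_B log n) block transfers; the abstract's contribution is extending the distribution class for which such a bound holds.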