Results 1 – 3 of 3
Red-Black Trie Hashing, 1995
"... Trie hashing is a scheme, proposed by Litwin, for indexing records with very long alphanumeric keys. The records are grouped into buckets of capacity b and maintained on secondary storage. To retrieve a record, the memory resident trie is traversed from the root to a leaf node where the address of t ..."
Abstract

Cited by 4 (0 self)
Trie hashing is a scheme, proposed by Litwin, for indexing records with very long alphanumeric keys. The records are grouped into buckets of capacity b and maintained on secondary storage. To retrieve a record, the memory-resident trie is traversed from the root to a leaf node where the address of the target bucket is found. Using the address found, the data bucket is read into memory and searched to determine the presence or absence of the record. The scheme, for all practical purposes, locates a record in one or two disk accesses. Unlike a trie, the scheme proposed suffers from potential degeneracy when the keys inserted are ordered and has an expensive reconstruction cost if a system failure occurs during a session. We present a new approach to implementing Trie Hashing that resolves the degeneracy problem. Our approach combines the basic trie hashing algorithm with the balancing techniques of the Red-Black Binary Search Tree, to produce a relatively balanced trie hashing scheme. As...
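The lookup path the abstract describes — walk the memory-resident trie to a leaf, read the addressed bucket, search it — can be sketched as follows. This is a minimal sketch, not Litwin's actual scheme: the binary node layout, the `(pos, ch)` split rule, and the dictionary standing in for disk-resident buckets are all assumptions for illustration.

```python
class Leaf:
    """Trie leaf: holds the address of a data bucket on secondary storage."""
    def __init__(self, bucket_addr):
        self.bucket_addr = bucket_addr

class Internal:
    """Trie internal node: tests one character position of the key."""
    def __init__(self, pos, ch, left, right):
        self.pos = pos      # character position tested in the key
        self.ch = ch        # split character
        self.left = left    # subtrie for keys with key[pos] <= ch
        self.right = right  # subtrie for keys with key[pos] > ch

def find_bucket(node, key):
    """Traverse the memory-resident trie from the root to a leaf."""
    while isinstance(node, Internal):
        c = key[node.pos] if node.pos < len(key) else ""
        node = node.left if c <= node.ch else node.right
    return node.bucket_addr

def lookup(trie_root, buckets, key):
    """One in-memory trie walk, then one bucket read (here a dict access
    stands in for the single disk access) and an in-bucket search."""
    addr = find_bucket(trie_root, key)
    bucket = buckets[addr]
    return key in bucket
```

For example, with a root splitting on the first character at `'m'`, `lookup(trie, buckets, "apple")` walks left to bucket 0 and searches only that bucket.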
Red-Black Balanced Trie Hashing, 1995
"... Trie hashing is a scheme, proposed by Litwin, for indexing records with very long alphanumeric keys. The records are grouped into buckets of capacity b records per bucket and maintained on secondary storage. To retrieve a record, the memory resident trie is traversed from the root to a leaf node whe ..."
Abstract
Trie hashing is a scheme, proposed by Litwin, for indexing records with very long alphanumeric keys. The records are grouped into buckets of capacity b records per bucket and maintained on secondary storage. To retrieve a record, the memory-resident trie is traversed from the root to a leaf node where the address of the target bucket is found. Using the address found, the data bucket is read into memory and searched to determine the presence or absence of the record. The scheme, for all practical purposes, locates a record in one or two disk accesses. Unlike a trie, the scheme suffers from: i) potential degeneracy when the keys inserted are ordered, ii) expensive reconstruction cost if a system failure occurs during a session. We present a new approach to implementing Trie Hashing that resolves the problem of potential degeneracy. Our approach combines the basic trie hashing algorithm with the balancing techniques of the Red-Black Binary Search Tree, to produce a relatively balanced tr...
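The balancing technique the abstract borrows from Red-Black trees is the standard rotation, which reshapes the tree without disturbing the symmetric (in-order) order of keys — the property that lets it be applied to an ordered index. Below is a generic left rotation in the red-black style; the `Node` class and color handling are a textbook sketch, not the paper's actual trie-node representation.

```python
class Node:
    """Binary node with a red/black color bit (textbook sketch)."""
    def __init__(self, key, left=None, right=None, red=True):
        self.key, self.left, self.right, self.red = key, left, right, red

def rotate_left(h):
    """Standard red-black left rotation: lift the right child over h.
    In-order key order is preserved, so an ordered index stays valid."""
    x = h.right
    h.right = x.left   # x's left subtree becomes h's right subtree
    x.left = h         # h becomes x's left child
    x.red, h.red = h.red, True
    return x           # x replaces h as subtree root
```

Repeated insertions of ordered keys degenerate an unbalanced trie into a long right spine; rotations like this one are what keep the height logarithmic.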
Reconstruction Problem, 1987
"... this paper we shall study two models of secondary memory. The lb'st is the Seauenfial Model. In this model, information is stored in secondary memory as one sequential file. If we want to update this information, we have 3 to replace the entire f'fic by the updated one. So in this model, after an u ..."
Abstract
this paper we shall study two models of secondary memory. The first is the Sequential Model. In this model, information is stored in secondary memory as one sequential file. If we want to update this information, we have to replace the entire file by the updated one. So in this model, after an update of the data structure, we have to replace the entire shadow administration. However, often only a small part of the shadow administration will actually have changed after an update. E.g., if our shadow structure is a balanced binary search tree, then an insertion or a deletion changes only O(log n) of the O(n) space. Therefore, in the second model, the Indexed Sequential Model, we assume that secondary memory consists of blocks. There is the ability of replacing a block by another one. Hence in this model we can maintain a shadow administration just by replacing the actually changed blocks. Both models are realistic in practice. The first corresponds to the notion of sequential files, the second to indexed sequential files. It turns out that in both models it is possible to obtain efficient shadow administrations for data structures solving different types of searching problems. The emphasis in this paper will be on solutions for order decomposable set problems (see Overmars [5]) and decomposable searching problems (see e.g. Bentley [1]). (The Indexed Sequential Model is also used in Overmars et al. [6], where the maintenance in secondary memory of range trees is investigated.) This paper is organized as follows. In Section 1.2, searching problems and complexity measures for their solutions are introduced. In Section 1.3, the general approach we use to design solutions to the reconstruction problem is given. We introduce complexity measures in which the performances of s...