Results 1–10 of 73
Suffix arrays: A new method for online string searches
 SIAM J. Comput
, 1993
Cited by 642 (1 self)
Abstract. A new and conceptually simple data structure, called a suffix array, for online string searches is introduced in this paper. Constructing and querying suffix arrays is reduced to a sort and search paradigm that employs novel algorithms. The main advantage of suffix arrays over suffix trees is that, in practice, they use three to five times less space. From a complexity standpoint, suffix arrays permit online string searches of the type "Is W a substring of A?" to be answered in time O(P + log N), where P is the length of W and N is the length of A, which is competitive with (and in some cases slightly better than) suffix trees. The only drawback is that in those instances where the underlying alphabet is finite and small, suffix trees can be constructed in O(N) time in the worst case, versus O(N log N) time for suffix arrays. However, an augmented algorithm is given that, regardless of the alphabet size, constructs suffix arrays in O(N) expected time, albeit with lesser space efficiency. It is believed that suffix arrays will prove to be better in practice than suffix trees for many applications.
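The sort-and-search paradigm can be illustrated with a deliberately naive sketch (this is not the paper's construction: the comparison sort below is O(N^2 log N) in the worst case, and the query is O(P log N) rather than O(P + log N), which requires the lcp bookkeeping the paper describes). All names are illustrative.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Global text pointer so the qsort comparator can see the string. */
static const char *text;

/* Compare two suffixes by their start positions. */
static int cmp_suffix(const void *a, const void *b) {
    return strcmp(text + *(const int *)a, text + *(const int *)b);
}

/* Naive construction: sort the N suffix start positions lexicographically. */
void build_suffix_array(const char *A, int *sa, int n) {
    text = A;
    for (int i = 0; i < n; i++) sa[i] = i;
    qsort(sa, n, sizeof(int), cmp_suffix);
}

/* Binary search over the sorted suffixes answers "is W a substring of A?". */
int is_substring(const char *A, const int *sa, int n, const char *W) {
    int p = strlen(W), lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        int c = strncmp(W, A + sa[mid], p);
        if (c == 0) return 1;            /* W is a prefix of this suffix */
        if (c < 0) hi = mid - 1; else lo = mid + 1;
    }
    return 0;
}
```

Any occurrence of W in A is a prefix of some suffix of A, so the sorted order of suffixes makes binary search applicable.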
Dynamic storage allocation: A survey and critical review
, 1995
Cited by 206 (6 self)
Dynamic memory allocation has been a fundamental part of most computer systems since roughly 1960, and memory allocation is widely considered to be either a solved problem or an insoluble one. In this survey, we describe a variety of memory allocator designs and point out issues relevant to their design and evaluation. We then chronologically survey most of the literature on allocators between 1961 and 1995. (Scores of papers are discussed, in varying detail, and over 150 references are given.) We argue that allocator designs have been unduly restricted by an emphasis on mechanism, rather than policy, while the latter is more important; higher-level strategic issues are still more important, but have not been given much attention. Most theoretical analyses and empirical allocator evaluations to date have relied on very strong assumptions of randomness and independence, but real program behavior exhibits important regularities that must be exploited if allocators are to perform well in practice.
Randomized Search Trees
 ALGORITHMICA
, 1996
Cited by 139 (1 self)
We present a randomized strategy for maintaining balance in dynamically changing search trees that has optimal expected behavior. In particular, in the expected case a search or an update takes logarithmic time, with the update requiring fewer than two rotations. Moreover, the update time remains logarithmic, even if the cost of a rotation is taken to be proportional to the size of the rotated subtree. Finger searches and splits and joins can also be performed in optimal expected time. We show that these results continue to hold even if very little true randomness is available, i.e. if only a logarithmic number of truly random bits are available. Our approach generalizes naturally to weighted trees, where the expected time bounds for accesses and updates again match the worst case time bounds of the best deterministic methods. We also discuss ways of implementing our randomized strategy so that no explicit balance information is maintained. Our balancing strategy and our alg...
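The strategy described is commonly realized as a treap: each node carries a random priority, and after a standard BST insertion, rotations restore the heap order on priorities. A minimal C sketch with illustrative names (not code from the paper):

```c
#include <assert.h>
#include <stdlib.h>

/* Treap node: BST order on key, max-heap order on random priority. */
typedef struct node {
    int key, prio;
    struct node *l, *r;
} node;

static node *rot_right(node *t) { node *l = t->l; t->l = l->r; l->r = t; return l; }
static node *rot_left(node *t)  { node *r = t->r; t->r = r->l; r->l = t; return r; }

node *treap_insert(node *t, int key) {
    if (!t) {
        node *n = malloc(sizeof *n);
        n->key = key; n->prio = rand(); n->l = n->r = NULL;
        return n;
    }
    if (key < t->key) {
        t->l = treap_insert(t->l, key);
        if (t->l->prio > t->prio) t = rot_right(t);  /* restore heap order */
    } else {
        t->r = treap_insert(t->r, key);
        if (t->r->prio > t->prio) t = rot_left(t);
    }
    return t;
}

/* Ordinary BST search; priorities play no role in lookups. */
int treap_contains(const node *t, int key) {
    while (t) {
        if (key == t->key) return 1;
        t = key < t->key ? t->l : t->r;
    }
    return 0;
}
```

The random priorities make the tree shape distributed as if the keys had been inserted in random order, which is where the logarithmic expected bounds and the fewer-than-two expected rotations per update come from.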
A functional approach to data structures and its use in multidimensional searching
 SIAM J. Comput
, 1988
Cited by 132 (3 self)
Abstract. We establish new upper bounds on the complexity of multidimensional searching. Our results include, in particular, linear-size data structures for range and rectangle counting in two dimensions with logarithmic query time. More generally, we give improved data structures for rectangle problems in any dimension, in a static as well as a dynamic setting. Several of the algorithms we give are simple to implement and might be the solutions of choice in practice. Central to this paper is the nonstandard approach followed to achieve these results. At its root we find a redefinition of data structures in terms of functional specifications.
The Measured Cost of Conservative Garbage Collection
 Software Practice and Experience
, 1993
Cited by 79 (6 self)
In this paper, I evaluate the costs of different dynamic storage management algorithms, including domain-specific allocators, widely-used general-purpose allocators, and a publicly available conservative garbage collection algorithm. Surprisingly, I find that programmer enhancements often have little effect on program performance. I also find that the true cost of conservative garbage collection is not the CPU overhead, but the memory system overhead of the algorithm. I conclude that conservative garbage collection is a promising alternative to explicit storage management and that the performance of conservative collection is likely to improve in the future. C programmers should now seriously consider using conservative garbage collection instead of explicitly calling free in programs they write.
Nearest Common Ancestors: A survey and a new distributed algorithm
, 2002
Cited by 76 (12 self)
Several papers describe linear time algorithms to preprocess a tree, such that one can answer subsequent nearest common ancestor queries in constant time. Here, we survey these algorithms and related results. A common idea used by all the algorithms for the problem is that a solution for complete balanced binary trees is straightforward. Furthermore, for complete balanced binary trees we can easily solve the problem in a distributed way by labeling the nodes of the tree such that from the labels of two nodes alone one can compute the label of their nearest common ancestor. Whether it is possible to distribute the data structure into short labels associated with the nodes is important for several applications such as routing. Therefore, related labeling problems have received a lot of attention recently.
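For the complete-balanced-binary-tree case mentioned above, one concrete distributed labeling (the standard heap numbering, used here purely as an illustration) lets the label of the nearest common ancestor be computed from the two node labels alone:

```c
#include <assert.h>

/* Nodes of a complete binary tree are labeled heap-style: the root is 1
 * and node v has children 2v and 2v+1, so a node's label is strictly
 * larger than every ancestor's label.  Repeatedly replacing the larger
 * of the two labels by its parent (label/2) therefore climbs toward,
 * and meets at, the nearest common ancestor. */
unsigned nca_label(unsigned x, unsigned y) {
    while (x != y) {
        if (x > y) x >>= 1;   /* move the deeper/larger node to its parent */
        else       y >>= 1;
    }
    return x;
}
```

This is exactly the kind of scheme the survey refers to: the answer is computed from the two labels, with no access to the tree itself, which is what makes such labelings useful for distributed applications like routing.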
Varieties of Increasing Trees
, 1992
Cited by 54 (7 self)
An increasing tree is a labelled rooted tree in which labels along any branch from the root go in increasing order. Under various guises, such trees have surfaced as tree representations of permutations, as data structures in computer science, and as probabilistic models in diverse applications. We present a unified generating function approach to the enumeration of parameters on such trees. The counting generating functions for several basic parameters are shown to be related to a simple ordinary differential equation which is nonlinear and autonomous. Singularity analysis applied to the intervening generating functions then permits the asymptotic analysis of a number of parameters of the trees, such as root degree, number of leaves, path length, and level of nodes. In this way it is found that various models share common features: path length is O(n log n), the distributions of node levels and number of leaves are asymptotically normal, etc.
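Assuming the framework of this paper, the "simple ordinary differential equation" has the autonomous form relating the exponential generating function Y(z) of the variety to its degree-weight function:

```latex
Y'(z) = \phi\bigl(Y(z)\bigr), \qquad Y(0) = 0,
```

where \(\phi\) encodes the allowed node degrees and their weights; for example, \(\phi(y) = 1 + y\) gives plane-oriented-like binary models and \(\phi(y) = e^y\) gives recursive trees.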
Vmalloc: A General and Efficient Memory Allocator
, 1996
Cited by 47 (7 self)
Introduction. Dynamic memory allocation is an integral part of programming. Programs in C and C++ (via constructors and destructors) routinely allocate memory using the familiar ANSI C standard interface malloc, established around 1979 by Doug McIlroy. Malloc manipulates heap memory using the functions malloc(s) to allocate a block of size s, free(b) to free a previously allocated block b, and realloc(b,s) to resize a block b to size s. No optimal solution to dynamic memory allocation exists [1, 2, 3], so, over the years, many malloc implementations have been proposed with different trade-offs in time and space efficiency. A study by David Korn and Phong Vo in 1985 presented and compared 11 malloc versions. Only a few of these survived the test of time. The first widely used malloc was written by McIlroy and became part of many Bell Labs Research and System V versions of the UNIX system. This malloc is based on a first-fit strategy and can be significantly slow in large memories. C. King
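The three calls of the interface described above can be exercised in a few lines of C; the function name and sizes below are illustrative only.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Allocate, resize, and free a block through the standard malloc
 * interface.  Returns 1 on success, 0 if an allocation failed. */
int demo_alloc(void) {
    char *b = malloc(8);                /* malloc(s): allocate a block of size 8 */
    if (b == NULL) return 0;
    strcpy(b, "heap");
    char *r = realloc(b, 64);           /* realloc(b,s): resize; contents are kept */
    if (r == NULL) { free(b); return 0; }
    int ok = strcmp(r, "heap") == 0;
    free(r);                            /* free(b): return the block to the heap */
    return ok;
}
```

Note that realloc may move the block, which is why the code continues through the returned pointer r rather than the original b.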
CustoMalloc: Efficient Synthesized Memory Allocators
 Software Practice and Experience
, 1993
Cited by 40 (8 self)
... In this paper, we describe a program (CustoMalloc) that synthesizes a memory allocator customized for a specific application. Our experiments show that the synthesized allocators are uniformly faster and more space efficient than the Berkeley UNIX allocator. Constructing a custom allocator requires little programmer effort, usually taking only a few minutes. Experience has shown that the synthesized allocators are not overly sensitive to properties of input sets and the resulting allocators are superior even to domain-specific allocators designed by programmers. Measurements show that synthesized allocators are from two to ten times faster than widely-used allocators.
Optimal Doubly Logarithmic Parallel Algorithms Based On Finding All Nearest Smaller Values
, 1993
Cited by 37 (7 self)
The all nearest smaller values problem is defined as follows. Let A = (a_1, a_2, ..., a_n) be n elements drawn from a totally ordered domain. For each a_i, 1 <= i <= n, find the two nearest elements in A that are smaller than a_i (if such exist): the left nearest smaller element a_j (with j < i) and the right nearest smaller element a_k (with k > i). We give an O(log log n) time optimal parallel algorithm for the problem on a CRCW PRAM. We apply this algorithm to achieve optimal O(log log n) time parallel algorithms for four problems: (i) triangulating a monotone polygon, (ii) preprocessing for answering range minimum queries in constant time, (iii) reconstructing a binary tree from its inorder and either preorder or postorder numberings, (iv) matching a legal sequence of parentheses. We also show that any optimal CRCW PRAM algorithm for the triangulation problem requires Omega(log log n) time. Dept. of Computing, King's College London, The Strand, London WC2R 2LS, England. ...
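For contrast with the parallel algorithm of the paper, the problem itself has a well-known sequential O(n) stack-based solution. This sketch (illustrative names, a fixed small stack capacity, and only the left-side answers; the right side is symmetric) shows what is being computed:

```c
#include <assert.h>

/* For each a[i], store in left[i] the index of the nearest element to
 * its left that is strictly smaller, or -1 if none exists.  The stack
 * holds indices of a strictly increasing subsequence; each index is
 * pushed and popped at most once, giving O(n) total time.
 * Assumes n <= 64 for brevity of the sketch. */
void left_nearest_smaller(const int *a, int n, int *left) {
    int stack[64], top = 0;
    for (int i = 0; i < n; i++) {
        while (top > 0 && a[stack[top - 1]] >= a[i]) top--;  /* discard >= values */
        left[i] = top > 0 ? stack[top - 1] : -1;
        stack[top++] = i;
    }
}
```

Running the same pass right-to-left with the comparison unchanged yields the right nearest smaller values, completing the ANSV output the abstract defines.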