Results 1–10 of 18
The Level Ancestor Problem Simplified
Abstract

Cited by 38 (0 self)
We present a very simple algorithm for the Level Ancestor Problem. A level ancestor query LA(v, d) requests the depth-d ancestor of node v. The Level Ancestor Problem is thus: preprocess a given rooted tree T to answer level ancestor queries. While optimal solutions to this problem already exist, our new optimal solution is simple enough to be taught and implemented.
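For concreteness, the query the abstract defines can be answered with the textbook binary-lifting (jump-pointer) method. Below is a minimal sketch, assuming the tree is given as a parent array; this is a common O(n log n)-preprocessing, O(log n)-query baseline, not the paper's simpler optimal scheme:

```python
# Binary-lifting sketch for level ancestor queries (illustrative
# baseline, NOT the paper's algorithm). parent[v] is v's parent,
# with parent[root] == root; depth[root] == 0.

def preprocess(parent):
    """Build the jump table: up[k][v] is the 2^k-th ancestor of v."""
    n = len(parent)
    levels = max(1, n.bit_length())
    up = [parent[:]]
    for k in range(1, levels):
        prev = up[k - 1]
        up.append([prev[prev[v]] for v in range(n)])
    return up

def level_ancestor(up, depth, v, d):
    """LA(v, d): the depth-d ancestor of v. Assumes d <= depth[v]."""
    steps = depth[v] - d
    k = 0
    while steps:
        if steps & 1:
            v = up[k][v]  # take a jump of size 2^k
        steps >>= 1
        k += 1
    return v
```

For example, with parent = [0, 0, 1, 1, 2] and depth = [0, 1, 2, 2, 3], `level_ancestor(up, depth, 4, 1)` returns node 1.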
Optimal Doubly Logarithmic Parallel Algorithms Based On Finding All Nearest Smaller Values
, 1993
Abstract

Cited by 37 (7 self)
The all nearest smaller values problem is defined as follows. Let A = (a_1, a_2, …, a_n) be n elements drawn from a totally ordered domain. For each a_i, 1 ≤ i ≤ n, find the two nearest elements in A that are smaller than a_i (if such exist): the left nearest smaller element a_j (with j < i) and the right nearest smaller element a_k (with k > i). We give an O(log log n) time optimal parallel algorithm for the problem on a CRCW PRAM. We apply this algorithm to achieve optimal O(log log n) time parallel algorithms for four problems: (i) triangulating a monotone polygon, (ii) preprocessing for answering range minimum queries in constant time, (iii) reconstructing a binary tree from its inorder and either preorder or postorder numberings, and (iv) matching a legal sequence of parentheses. We also show that any optimal CRCW PRAM algorithm for the triangulation problem requires Ω(log log n) time. Dept. of Computing, King's College London, The Strand, London WC2R 2LS, England. ...
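Sequentially, the ANSV problem has a well-known linear-time stack solution; a sketch for intuition (my illustration of the problem definition, not the paper's parallel algorithm):

```python
def all_nearest_smaller_values(a):
    """For each i return (left, right): indices of the nearest
    elements of a smaller than a[i] on each side, or None if no such
    element exists. Each index is pushed and popped at most once,
    so the total work is O(n)."""
    def one_side(indices):
        res = {}
        stack = []  # indices of a strictly increasing run of values
        for i in indices:
            while stack and a[stack[-1]] >= a[i]:
                stack.pop()
            res[i] = stack[-1] if stack else None
            stack.append(i)
        return res

    n = len(a)
    left = one_side(range(n))           # scan left-to-right
    right = one_side(range(n - 1, -1, -1))  # scan right-to-left
    return [(left[i], right[i]) for i in range(n)]
```

For instance, on A = (3, 1, 4, 1, 5) the element 4 at index 2 has nearest smaller neighbours at indices 1 and 3.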
The Complexity of Computation on the Parallel Random Access Machine
, 1993
Abstract

Cited by 32 (4 self)
PRAMs also approximate the situation where communication to and from shared memory is much more expensive than local operations, for example, where each processor is located on a separate chip and access to shared memory is through a combining network. Not surprisingly, abstract PRAMs can be much more powerful than restricted instruction set PRAMs. THEOREM 21.16: Any function of n variables can be computed by an abstract EROW PRAM in O(log n) steps using n/log₂ n processors and n/(2 log₂ n) shared memory cells. PROOF: Each processor begins by reading log₂ n input values and combining them into one large value. The information known by the processors is combined in a binary-tree-like fashion. In each round, the remaining processors are grouped into pairs. In each pair, one processor communicates the information it knows about the input to the other processor and then leaves the computation. After ⌈log₂ n⌉ rounds, one processor knows all n input values. Then this processor computes th...
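The pairing argument in the proof can be simulated sequentially. A small sketch (my illustration, skipping the initial phase where each processor reads log₂ n inputs) that counts the ⌈log₂ p⌉ combining rounds:

```python
def combine_rounds(values):
    """Sequential simulation of the theorem's combining phase (not
    PRAM code): each 'processor' starts knowing one value; in each
    round the survivors pair up, one tells the other everything it
    knows and leaves. Returns the round count and the knowledge of
    the last remaining processor."""
    known = [{i: v} for i, v in enumerate(values)]
    rounds = 0
    while len(known) > 1:
        nxt = []
        for j in range(0, len(known) - 1, 2):
            nxt.append({**known[j], **known[j + 1]})  # pair communicates
        if len(known) % 2:            # odd one out survives unchanged
            nxt.append(known[-1])
        known = nxt
        rounds += 1
    return rounds, known[0]
```

With 8 starting processors this takes exactly ⌈log₂ 8⌉ = 3 rounds, after which one processor knows all 8 values.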
Ultrafast expected time parallel algorithms
 Proc. of the 2nd SODA
, 1991
Abstract

Cited by 20 (3 self)
It has been shown previously that sorting n items into n locations with a polynomial number of processors requires Ω(log n/log log n) time. We sidestep this lower bound with the idea of Padded Sorting, or sorting n items into n + o(n) locations. Since many problems do not rely on the exact rank of sorted items, a Padded Sort is often just as useful as an unpadded sort. Our algorithm for Padded Sort runs on the Tolerant CRCW PRAM and takes Θ(log log n/log log log n) expected time using n log log log n/log log n processors, assuming the items are taken from a uniform distribution. Using similar techniques we solve some computational geometry problems, including Voronoi Diagram, with the same processor and time bounds, assuming points are taken from a uniform distribution in the unit square. Further, we present an Arbitrary CRCW PRAM algorithm to solve the Closest Pair problem in constant expected time with n processors regardless of the distribution of points. All of these algorithms achieve linear speedup in expected time over their optimal serial counterparts. 1 Research done while at the University of Michigan and supported by an AT&T Fellowship.
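A padded sort can be sketched with value buckets: under the uniform-distribution assumption each bucket is small, and the output keeps sorted order with empty (None) slots interleaved. This is my sequential illustration of the idea only, with a fixed capacity of 2 per bucket for simplicity; the paper's Tolerant CRCW PRAM algorithm achieves n + o(n) slots:

```python
def padded_sort(items, cap=2):
    """Padded-sort sketch: n keys assumed uniform in [0, 1) go into
    n value buckets; each bucket is emitted into a window of `cap`
    output slots padded with None. Non-None entries appear in sorted
    order because buckets cover disjoint, increasing value ranges.
    A bucket that overflows `cap` just emits all its items here; a
    real implementation would redistribute the overflow."""
    n = len(items)
    buckets = [[] for _ in range(n)]
    for x in items:
        buckets[min(n - 1, int(x * n))].append(x)
    out = []
    for b in buckets:
        b.sort()  # buckets are tiny in expectation
        out.extend(b + [None] * max(0, cap - len(b)))
    return out
```

A consumer that only needs relative order can scan past the None gaps, which is why a padded sort is "often just as useful" as an exact sort.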
Time and Space Efficient Method-Lookup for Object-Oriented Programs (Extended Abstract)
 In Proceedings of the Seventh Annual ACM-SIAM Symposium on Discrete Algorithms
, 1996
Abstract

Cited by 17 (0 self)
S. Muthukrishnan (DIMACS & Univ. of Warwick), Martin Müller (University of New Mexico). 1 Introduction. Object-oriented languages (OOLs) are becoming increasingly popular in software development (see [4, 11, 18, 20] on OOLs). The modular units in such languages are abstract data types called classes, comprising data and functions (or selectors in the OOL parlance); each selector has possibly multiple implementations (or methods in OOL parlance), each in a different class. These languages support reusability of code/functions by allowing a class to inherit methods from its superclass in a hierarchical arrangement of the various classes. Therefore, when a selector s is invoked in a class c, the relevant method for s inherited by c has to be determined. That is the fundamental problem of method lookup in object-oriented programs. Since nearly every statement of such programs calls for a method lookup, efficient support of OOLs crucially relies on the method-lookup mechanism. The challen...
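The naive method lookup the passage describes is just a walk up the superclass chain; a minimal sketch (the class representation here is my own hypothetical one, not the paper's data structure):

```python
class Klass:
    """Minimal class object: a superclass link plus the methods
    (selector -> implementation) this class defines itself."""
    def __init__(self, name, superclass=None, methods=None):
        self.name = name
        self.superclass = superclass
        self.methods = methods or {}

def method_lookup(cls, selector):
    """Walk up the superclass chain until some class defines the
    selector. This linear walk is the naive scheme whose time/space
    trade-offs the paper's data structures improve on."""
    while cls is not None:
        if selector in cls.methods:
            return cls.methods[selector]
        cls = cls.superclass
    raise AttributeError(selector)
```

So when class B inherits from A, invoking a selector that only A defines resolves to A's method, while a selector B overrides resolves locally.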
Optimal Logarithmic Time Randomized Suffix Tree Construction
 In Proc 23rd ICALP
, 1996
Abstract

Cited by 14 (3 self)
The suffix tree of a string, the fundamental data structure in the area of combinatorial pattern matching, has many elegant applications. In this paper, we present a novel, simple sequential algorithm for the construction of suffix trees. We are also able to parallelize our algorithm so that we settle the main open problem in the construction of suffix trees: we give a Las Vegas CRCW PRAM algorithm that constructs the suffix tree of a binary string of length n in O(log n) time and O(n) work with high probability. In contrast, the previously known work-optimal algorithms, while deterministic, take Ω(log² n) time.
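As a point of reference for what a suffix tree supports, here is a deliberately naive baseline using sorted suffixes (a suffix array built in O(n² log n) time); a suffix tree answers the same substring queries after linear-time construction:

```python
def naive_suffix_array(s):
    """Sort all suffixes of s by starting index. This O(n^2 log n)
    baseline only illustrates the object; real constructions are
    far faster."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def has_substring(s, sa, pattern):
    """Binary-search the sorted suffixes for one that has pattern as
    a prefix: O(m log n) per query."""
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if s[sa[mid]:sa[mid] + len(pattern)] < pattern:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(sa) and s[sa[lo]:].startswith(pattern)
```

Every substring of s is a prefix of some suffix, which is why sorted suffixes (or a suffix tree over them) answer substring queries directly.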
Structural Parallel Algorithmics
, 1991
Abstract

Cited by 11 (4 self)
The first half of the paper is a general introduction which emphasizes the central role that the PRAM model of parallel computation plays in algorithmic studies for parallel computers. Some of the collective knowledge base on non-numerical parallel algorithms can be characterized in a structural way. Each structure relates a few problems and techniques to one another, from the basic to the more involved. The second half of the paper provides a bird's-eye view of such structures for: (1) list, tree and graph parallel algorithms; (2) very fast deterministic parallel algorithms; and (3) very fast randomized parallel algorithms. 1 Introduction. Parallelism is a concern that is missing from "traditional" algorithmic design. Unfortunately, it turns out that most efficient serial algorithms become rather inefficient parallel algorithms. The experience is that the design of parallel algorithms requires new paradigms and techniques, offering an exciting intellectual challenge. We note that it had...
Perfect hashing for strings: Formalization and Algorithms
 In Proc. 7th CPM
, 1996
Abstract

Cited by 10 (2 self)
Numbers and strings are two objects manipulated by most programs. Hashing has been well studied for numbers and it has been effective in practice. In contrast, basic hashing issues for strings remain largely unexplored. In this paper, we identify and formulate the core hashing problem for strings, which we call substring hashing. Our main technical results are highly efficient sequential/parallel (CRCW PRAM) Las Vegas type algorithms that determine a perfect hash function for substring hashing. For example, given a binary string of length n, one of our algorithms finds a perfect hash function in O(log n) time, O(n) work, and O(n) space; the hash value for any substring can then be computed in O(log log n) time using a single processor. Our approach relies on a novel use of the suffix tree of a string. In implementing our approach, we design optimal parallel algorithms for the problem of determining weighted ancestors on an edge-weighted tree, which may be of independent interest.
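For contrast with the paper's collision-free guarantee, the usual non-perfect way to hash substrings in O(1) after O(n) preprocessing is Karp-Rabin prefix hashing; a sketch (the modulus and base are my arbitrary choices), under which distinct substrings may collide:

```python
M = (1 << 61) - 1  # modulus: a large Mersenne prime (my choice)
B = 131            # base (my choice)

def preprocess_hash(s):
    """O(n) prefix hashes and base powers, so the hash of any
    substring is available in O(1). Unlike the paper's perfect hash
    functions, Karp-Rabin hashes can collide."""
    n = len(s)
    h = [0] * (n + 1)  # h[i] = hash of s[:i]
    p = [1] * (n + 1)  # p[i] = B^i mod M
    for i, ch in enumerate(s):
        h[i + 1] = (h[i] * B + ord(ch)) % M
        p[i + 1] = (p[i] * B) % M
    return h, p

def substring_hash(h, p, i, j):
    """Hash of s[i:j] in O(1) from the precomputed tables."""
    return (h[j] - h[i] * p[j - i]) % M
```

Equal substrings always receive equal hashes, so e.g. the two occurrences of "abc" in "abcabc" hash identically; perfectness (no collisions between distinct substrings) is the extra property the paper constructs.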
Efficient String Algorithmics
, 1992
Abstract

Cited by 8 (6 self)
Problems involving strings arise in many areas of computer science and have numerous practical applications. We consider several problems from a theoretical perspective and provide efficient algorithms and lower bounds for these problems in sequential and parallel models of computation. In the sequential setting, we present new algorithms for the string matching problem improving the previous bounds on the number of comparisons performed by such algorithms. In parallel computation, we present tight algorithms and lower bounds for the string matching problem, for finding the periods of a string, for detecting squares and for finding initial palindromes.
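As a baseline for the comparison-count results mentioned, the classic Knuth-Morris-Pratt matcher runs in O(n + m) time; a compact sketch (the standard algorithm, not the thesis's improved-comparison variants):

```python
def kmp_search(text, pattern):
    """Knuth-Morris-Pratt string matching. Builds the failure
    function for the pattern, then scans the text once; returns the
    start positions of all occurrences."""
    m = len(pattern)
    fail = [0] * m  # fail[i]: longest proper border of pattern[:i+1]
    k = 0
    for i in range(1, m):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    hits, k = [], 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = fail[k - 1]  # fall back instead of re-scanning text
        if ch == pattern[k]:
            k += 1
        if k == m:
            hits.append(i - m + 1)
            k = fail[k - 1]  # allow overlapping occurrences
    return hits
```

The failure function is also how one computes the periods of a string, another of the problems the abstract lists.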
An Optimal Parallel Algorithm for Computing a Near-Optimal Order of Matrix Multiplications
 LNCS # 621
, 1992
Abstract

Cited by 7 (2 self)
This paper considers the computation of matrix chain products of the form M₁ × M₂ × ⋯ × Mₙ₋₁. The order in which the matrices are multiplied affects the number of operations. The best sequential algorithm for computing an optimal order of matrix multiplication runs in O(n log n) time, while the best known parallel NC algorithm runs in O(log² n) time using n⁶/log⁶ n processors. This paper presents the first optimal parallel approximation algorithm for this problem and for the problem of finding a near-optimal triangulation of a convex polygon. The algorithm runs in O(log n) time using n/log n processors on a CREW PRAM, and in O(log log n) time using n/log log n processors on a weak CRCW PRAM. It produces an order of matrix multiplications and a partition of the polygon which differ from the optimal ones by at most a 0.1547 fraction. 1 Introduction. The problem of computing an optimal order of matrix multiplication (the matrix chain product proble...
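The exact optimum the approximation is measured against can be computed (less efficiently than the O(n log n) sequential algorithm cited) with the classic O(n³) dynamic program over chain lengths; a sketch for reference:

```python
def matrix_chain_order(dims):
    """Classic O(n^3) dynamic program for the minimum number of
    scalar multiplications in a matrix chain product. Matrix M_i has
    shape dims[i-1] x dims[i], so n = len(dims) - 1 matrices."""
    n = len(dims) - 1
    cost = [[0] * (n + 1) for _ in range(n + 1)]  # cost[i][j]: M_i..M_j
    for length in range(2, n + 1):          # chain length
        for i in range(1, n - length + 2):  # chain start
            j = i + length - 1              # chain end
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)        # split point
            )
    return cost[1][n]
```

For dims = [10, 30, 5, 60], multiplying (M₁M₂)M₃ costs 10·30·5 + 10·5·60 = 4500 scalar multiplications, versus 27000 for M₁(M₂M₃), so 4500 is the optimum.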