Results 1 - 10 of 23
Optimal Parallel Algorithms for Periods, Palindromes and Squares (Extended Abstract)
, 1992
Abstract

Cited by 32 (13 self)
Alberto Apostolico (Purdue University and Università di Padova), Dany Breslauer (Columbia University), Zvi Galil (Columbia University and Tel-Aviv University). Summary of results: optimal concurrent-read concurrent-write parallel algorithms for two problems are presented:
• Finding all the periods of a string. The period of a string can be computed by previous efficient parallel algorithms only if it is shorter than half of the length of the string. Our new algorithm computes all the periods in optimal O(log log n) time, even if they are longer. The algorithm can be used to compute all initial palindromes of a string within the same bounds.
• Testing if a string is square-free. We present an optimal O(log log n) time algorithm for testing if a string is square-free, improving the previous bound of O(log n) given by Apostolico [1] and Crochemore and Rytter [12]. We show matching lower bounds for the optimal parallel algorithms that solve the problems above on a general alphab...
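To make the object of the abstract above concrete: a period of a string s of length n is a shift p with s[i] = s[i+p] wherever both sides are defined. The following is a short sequential sketch (not the paper's parallel algorithm) that lists all periods by chasing KMP failure links; the name `all_periods` is ours, for illustration only.

```python
def all_periods(s):
    """Return every period p of s, 1 <= p <= len(s).

    p is a period iff s[i] == s[i + p] for all valid i; equivalently,
    p = n - b for each border length b (a border is a proper prefix of
    s that is also a suffix), found by chasing the KMP failure links.
    """
    n = len(s)
    fail = [0] * (n + 1)  # fail[i] = longest proper border of s[:i]
    k = 0
    for i in range(1, n):
        while k > 0 and s[i] != s[k]:
            k = fail[k]
        if s[i] == s[k]:
            k += 1
        fail[i + 1] = k
    periods, b = [], fail[n]
    while True:
        periods.append(n - b)
        if b == 0:
            break
        b = fail[b]
    return sorted(periods)

print(all_periods("abaabaab"))  # → [3, 6, 8]
```

Note that in the example only the shortest period (3) is below half the string length; the abstract's contribution is recovering the longer periods (6 and 8 here) as well, in optimal O(log log n) parallel time.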
Optimally Fast Parallel Algorithms for Preprocessing and Pattern Matching in One and Two Dimensions
, 1993
Abstract

Cited by 19 (10 self)
All algorithms below are optimal alphabet-independent parallel CRCW PRAM algorithms. In one dimension: given a pattern string of length m for the string-matching problem, we design an algorithm that computes a deterministic sample of a sufficiently long substring in constant time. This problem used to be a bottleneck in the pattern preprocessing for one- and two-dimensional pattern matching. The best previous time bound was O(log^2 m / log log m). We use this algorithm to obtain the following results.
1. Improving the preprocessing of the constant-time text search algorithm [12] from O(log^2 m / log log m) to O(log log m), which is now best possible.
2. A constant-time deterministic string-matching algorithm in the case that the text length n satisfies n = Ω(m^(1+ε)) for a constant ε > 0.
3. A simple probabilistic string-matching algorithm that has constant time with high probability for random input.
4. A constant expected time Las Vegas algorithm for computing t...
Finding All Periods and Initial Palindromes of a String in Parallel

, 1991
Abstract

Cited by 15 (10 self)
An optimal O(log log n) time CRCW-PRAM algorithm for computing all periods of a string is presented. Previous parallel algorithms compute the period only if it is shorter than half of the length of the string. This algorithm can be used to find all initial palindromes of a string in the same time and processor bounds. Both algorithms are the fastest possible over a general alphabet. We derive a lower bound for finding palindromes by a modification of a previously known lower bound for finding the period of a string [3]. When p processors are available the bounds become Θ(⌈n/p⌉ + log log_{⌈1+p/n⌉} 2p).
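An initial palindrome is a prefix of the string that reads the same backwards. A naive O(n^2) sequential check of the definition (for illustration only; not the O(log log n) parallel method of the abstract, and the function name is ours):

```python
def initial_palindromes(s):
    """Lengths k such that the prefix s[:k] reads the same backwards."""
    # s[k - 1::-1] is s[:k] reversed
    return [k for k in range(1, len(s) + 1) if s[:k] == s[k - 1::-1]]

print(initial_palindromes("ababa"))  # → [1, 3, 5]
```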
Structural Parallel Algorithmics
, 1991
Abstract

Cited by 11 (4 self)
The first half of the paper is a general introduction which emphasizes the central role that the PRAM model of parallel computation plays in algorithmic studies for parallel computers. Some of the collective knowledge base on non-numerical parallel algorithms can be characterized in a structural way. Each structure relates a few problems and techniques to one another, from the basic to the more involved. The second half of the paper provides a bird's-eye view of such structures for: (1) list, tree and graph parallel algorithms; (2) very fast deterministic parallel algorithms; and (3) very fast randomized parallel algorithms.
1 Introduction
Parallelism is a concern that is missing from "traditional" algorithmic design. Unfortunately, it turns out that most efficient serial algorithms become rather inefficient parallel algorithms. The experience is that the design of parallel algorithms requires new paradigms and techniques, offering an exciting intellectual challenge. We note that it had...
An Optimal O(log log n) Time Parallel Algorithm for Detecting all Squares in a String
, 1995
Abstract

Cited by 11 (6 self)
An optimal O(log log n) time concurrent-read concurrent-write parallel algorithm for detecting all squares in a string is presented. A tight lower bound shows that over general alphabets this is the fastest possible optimal algorithm. When p processors are available the bounds become Θ(⌈n log n/p⌉ + log log_{⌈1+p/n⌉} 2p). The algorithm uses an optimal parallel string-matching algorithm together with periodicity properties to locate the squares within the input string.
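A square is a nonempty string repeated twice in a row (xx), and a string is square-free when it contains no square as a substring. A brute-force sequential check of this definition, for illustration only (the abstract's parallel algorithm is far more efficient; the function name is ours):

```python
def is_square_free(s):
    """True iff no substring of s has the form xx with x nonempty.

    Naive check: try every start position i and every half-length.
    """
    n = len(s)
    for i in range(n):
        for half in range(1, (n - i) // 2 + 1):
            if s[i:i + half] == s[i + half:i + 2 * half]:
                return False
    return True

print(is_square_free("abab"))    # → False ("abab" = "ab" "ab")
print(is_square_free("abcacb"))  # → True
```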
Testing String Superprimitivity in Parallel
 Information Processing Letters
, 1992
Abstract

Cited by 10 (1 self)
A string w covers another string z if every symbol of z is within some occurrence of w in z. A string is called superprimitive if it is covered only by itself, and quasiperiodic if it is covered by some shorter string. This paper presents an O(log log n) time, n log n / log log n processor CRCW-PRAM algorithm that tests if a string is superprimitive. The algorithm is the fastest possible with this number of processors over a general alphabet.
1 Introduction
Quasiperiodicity, as defined by Apostolico and Ehrenfeucht [3], is an avoidable regularity of strings that is strongly related to other regularities such as periods and squares [12]. Apostolico, Farach and Iliopoulos [4] and Breslauer [7] gave linear-time sequential algorithms that test if a string is superprimitive. Apostolico and Ehrenfeucht [3] presented an algorithm that finds all maximal quasiperiodic substrings of a string. This paper presents a parallel algorithm that tests if a string of length n is superprimitive i...
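The cover relation defined in the abstract above can be tested directly. One useful observation: any cover of z must occur at position 0 (otherwise z's first symbol is uncovered), so a cover is necessarily a prefix of z and only prefixes need to be tried. A quadratic sequential sketch under that observation (illustration only, not the parallel algorithm; function names are ours):

```python
def covers(w, z):
    """True if every position of z lies inside some occurrence of w in z."""
    m, n = len(w), len(z)
    covered = 0  # z[:covered] is known to be covered so far
    for i in range(n - m + 1):
        if z[i:i + m] == w:
            if i > covered:  # a gap that no later occurrence can fill
                return False
            covered = max(covered, i + m)
    return covered == n

def is_superprimitive(z):
    """z is covered by no string shorter than itself."""
    # Only prefixes of z can be covers, so test each proper prefix.
    return not any(covers(z[:m], z) for m in range(1, len(z)))

print(is_superprimitive("abaabaaba"))  # → False ("aba" covers it)
print(is_superprimitive("abaab"))      # → True
```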
Fast Parallel String Prefix-Matching
 Theoret. Comput. Sci
, 1992
Abstract

Cited by 6 (2 self)
An O(log log m) time, n log m / log log m processor CRCW-PRAM algorithm for the string prefix-matching problem over a general alphabet is presented. The algorithm can also be used to compute the KMP failure function in O(log log m) time on m log m / log log m processors. These results improve on the running time of the best previous algorithm for both problems, which was O(log m), while preserving the same number of operations.
1 Introduction
String matching is the problem of finding all occurrences of a short pattern string P[1..m] in a longer text string T[1..n]. The classical sequential algorithm of Knuth, Morris and Pratt [12] solves the string matching problem in time that is linear in the length of the input strings. The Knuth-Morris-Pratt [12] string matching algorithm can be easily generalized to find the longest pattern prefix that starts at each text position within the same time bound. We refer to this problem as string prefix-matching. In parallel, the string matching p...
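Sequentially, the prefix-matching output (the longest prefix of the pattern starting at each text position) can also be obtained via the Z-function of pattern + separator + text, a different route from the KMP generalization the abstract mentions. A sketch, assuming the separator occurs in neither string; the function names are ours:

```python
def z_array(s):
    """z[i] = length of the longest common prefix of s and s[i:]."""
    n = len(s)
    z = [0] * n
    if n:
        z[0] = n
    l = r = 0  # [l, r) is the rightmost matched window found so far
    for i in range(1, n):
        if i < r:
            z[i] = min(r - i, z[i - l])
        while i + z[i] < n and s[z[i]] == s[i + z[i]]:
            z[i] += 1
        if i + z[i] > r:
            l, r = i, i + z[i]
    return z

def prefix_match(p, t, sep="\0"):
    """match[i] = longest prefix of p beginning at t[i].

    Assumes sep occurs in neither p nor t, so matches cannot run
    across the separator.
    """
    return z_array(p + sep + t)[len(p) + 1:]

print(prefix_match("aba", "ababaa"))  # → [3, 0, 3, 0, 1, 1]
```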
Parallel Two Dimensional Witness Computation
, 2001
Abstract

Cited by 5 (2 self)
An optimal parallel CRCW-PRAM algorithm to compute witnesses for all non-period vectors of an m_1 × m_2 pattern is given. The algorithm takes O(log log m) time and does O(m_1 m_2) work, where m = max{m_1, m_2}. This yields a work-optimal algorithm for 2D pattern matching which takes O(log log m) preprocessing time and O(1) text processing time.
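In the one-dimensional analogue of this problem, a witness for a non-period shift p is any position i where the string disagrees with its copy shifted by p; the paper computes the 2D counterpart for non-period vectors. A quadratic sequential sketch of the 1D notion, for illustration only (the function name is ours):

```python
def witnesses(s):
    """wit[p] = an index i with s[i] != s[i + p], i.e. a witness
    against shift p being a period, or None when p is a period of s.
    """
    n = len(s)
    wit = [None] * n
    for p in range(1, n):
        for i in range(n - p):
            if s[i] != s[i + p]:
                wit[p] = i  # first disagreement suffices as a witness
                break
    return wit

# shifts 3 and 6 are periods of "abaabaab", so they have no witness
print(witnesses("abaabaab"))  # → [None, 0, 1, None, 0, 1, None, 0]
```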
The Random Adversary: A Lower-Bound Technique For Randomized Parallel Algorithms
 in Proc. of the 3rd SODA (ACM
, 1997
Abstract

Cited by 5 (1 self)
The random-adversary technique is a general method for proving lower bounds on randomized parallel algorithms. The bounds apply to the number of communication steps, and they apply regardless of the processors' instruction sets, the lengths of messages, etc. This paper introduces the random-adversary technique and shows how it can be used to obtain lower bounds on randomized parallel algorithms for load balancing, compaction, padded sorting, and finding Hamiltonian cycles in random graphs. Using the random-adversary technique, we obtain the first lower bounds for randomized parallel algorithms which are provably faster than their deterministic counterparts (specifically, for load balancing and related problems).
Key words. parallel algorithms, parallel computation, PRAM model, randomized parallel algorithms, expected time, lower bounds, load balancing
AMS subject classifications. 68Q10, 68Q22, 68Q25
PII. ...
Transforming comparison model lower bounds to the parallel-random-access-machine
 INFORMATION PROCESSING LETTERS
, 1997
Abstract

Cited by 5 (0 self)
We provide general transformations of lower bounds in Valiant's parallel-comparison-decision-tree model to lower bounds in the priority concurrent-read concurrent-write parallel-random-access-machine model. The proofs rely on standard Ramsey-theoretic arguments that simplify the structure of the computation by restricting the input domain. The transformation of comparison model lower bounds, which are usually easier to obtain, to the parallel-random-access-machine unifies some known lower bounds and gives new lower bounds for several problems.