Results 1–10 of 17
Applying parallel computation algorithms in the design of serial algorithms
J. ACM, 1983
Cited by 234 (7 self)
Abstract. The goal of this paper is to point out that analyses of parallelism in computational problems have practical implications even when multiprocessor machines are not available. This is true because, in many cases, a good parallel algorithm for one problem may turn out to be useful for designing an efficient serial algorithm for another problem. A framework for cases like this is presented. Particular cases, which are discussed in this paper, provide motivation for examining parallelism in sorting, selection, minimum spanning tree, shortest route, max-flow, and matrix multiplication problems, as well as in scheduling and locational problems.
Routing, merging, and sorting on parallel models of computation
in “Proc. 14th Annual ACM Sympos. on Theory of Comput.”, 1982
Cited by 105 (3 self)
A variety of models have been proposed for the study of synchronous parallel computation. These models are reviewed and some prototype problems are studied further. Two classes of models are recognized: fixed connection networks and models based on a shared memory. Routing and sorting are prototype problems for the networks; in particular, they provide the basis for simulating the more powerful shared memory models. It is shown that a simple but important class of deterministic strategies (oblivious routing) is necessarily inefficient with respect to worst case analysis. Routing can be viewed as a special case of sorting, and the existence of an O(log n) sorting algorithm for some n processor fixed connection network has only recently been established by Ajtai, Komlos, and Szemeredi (“15th ACM Sympos. on Theory of Comput.,” Boston, Mass., 1983, pp. 1–9). If the more powerful class of shared memory models is considered, then it is possible to simply achieve an O(log n log log n) sort via Valiant’s parallel merging algorithm, which it is shown can be implemented on certain models. Within a spectrum of shared memory models, it is shown that log log n is asymptotically optimal for n processors to merge two sorted lists containing n elements. © 1985 Academic Press, Inc.
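The merging result above rests on a classic rank-based scheme: each element's final position in the merged output is its index in its own list plus its rank in the other list, and every position can be computed independently of the others. A minimal sequential sketch of that idea (the function name is illustrative, not from the paper):

```python
from bisect import bisect_left, bisect_right

def merge_by_ranking(a, b):
    """Merge two sorted lists by computing each element's final position:
    its index in its own list plus its rank in the other list.  Every
    position is independent of the others, which is what makes the scheme
    parallelizable; here we simply loop sequentially."""
    out = [None] * (len(a) + len(b))
    for i, x in enumerate(a):
        # bisect_left vs. bisect_right breaks ties so copies from `a` come first
        out[i + bisect_left(b, x)] = x
    for j, y in enumerate(b):
        out[j + bisect_right(a, y)] = y
    return out
```

With one processor per element, the binary searches give an O(log n) parallel merge; Valiant's algorithm improves this to O(log log n) by replacing binary search with a doubly logarithmic sampling scheme.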
The Constrained Minimum Spanning Tree Problem (Extended Abstract)
Cited by 35 (3 self)
Given an undirected graph with two different nonnegative costs associated with every edge e (say, w_e for the weight and l_e for the length of edge e) and a budget L, consider the problem of finding a spanning tree of total edge length at most L and minimum total weight under this restriction. This constrained minimum spanning tree problem is weakly NP-hard. We present a polynomial-time approximation scheme for this problem. This algorithm always produces a spanning tree of total length at most (1 + ε)L and of total weight at most that of any spanning tree of total length at most L, for any fixed ε > 0. The algorithm uses Lagrangean relaxation, and exploits adjacency relations for matroids.
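As a rough illustration of the Lagrangean-relaxation idea mentioned above (not the paper's approximation scheme, which is more refined): fold the length constraint into the objective with a multiplier lam, compute an ordinary MST under the combined cost w_e + lam·l_e, and binary-search lam until the budget is met. All names here are hypothetical:

```python
def kruskal(n, edges, cost):
    """Plain Kruskal MST with path-halving union-find; edges are tuples (u, v, ...)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for e in sorted(edges, key=cost):
        ru, rv = find(e[0]), find(e[1])
        if ru != rv:
            parent[ru] = rv
            tree.append(e)
    return tree

def constrained_mst(n, edges, budget, iters=60):
    """Edges are (u, v, weight, length).  Binary-search the Lagrange
    multiplier lam: a larger lam penalizes length more heavily."""
    best = kruskal(n, edges, cost=lambda e: e[2])  # pure-weight MST
    if sum(e[3] for e in best) <= budget:
        return best
    lo, hi = 0.0, sum(e[2] for e in edges) + 1.0
    for _ in range(iters):
        lam = (lo + hi) / 2.0
        tree = kruskal(n, edges, cost=lambda e: e[2] + lam * e[3])
        if sum(e[3] for e in tree) <= budget:
            best, hi = tree, lam   # feasible: try a smaller penalty
        else:
            lo = lam               # over budget: penalize length more
    return best
```

Plain binary search on the multiplier can leave a small gap at the optimum; the paper's scheme additionally exploits matroid adjacency relations to patch the tree there.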
Derivation of Randomized Sorting and Selection Algorithms, in Parallel Algorithm Derivation And Program Transformation, edited by
, 1993
Cited by 22 (18 self)
In this paper we systematically derive randomized algorithms (both sequential and parallel) for sorting and selection from basic principles and fundamental techniques like random sampling. We prove several sampling lemmas which will find independent applications. The new algorithms derived here are the most efficient known. Among other results, we obtain an efficient algorithm for sequential sorting. The problem of sorting has attracted so much attention because of its vital importance. Sorting with as few comparisons as possible while keeping the storage size minimum is a long standing open problem. This problem is referred to as ‘minimum storage sorting’ [10] in the literature. The previously best known minimum storage sorting algorithm is due to Frazer and McKellar [10]. The expected number of comparisons made by this algorithm is n log n + O(n log log n). The algorithm we derive in this paper makes only an expected n log n + O(n ω(n)) comparisons, for any function ω(n) that tends to infinity. A variant of this algorithm makes no more than n log n + O(n log log n) comparisons on any input of size n with overwhelming probability. We also prove high probability bounds for several randomized algorithms for which only expected bounds have been proven so far.
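The random-sampling technique referred to above can be illustrated with a bare-bones samplesort in the spirit of Frazer and McKellar (a simplified sketch, not the derived algorithm): sort a random sample, use its elements as pivots to split the input into buckets, and recurse on each bucket.

```python
import random
from bisect import bisect_left

def samplesort(a, cutoff=32):
    """Randomized samplesort sketch.  A random sample of about sqrt(n)
    elements, once sorted, partitions the input into nearly even buckets
    with high probability; each bucket is then sorted recursively."""
    if len(a) <= cutoff:
        return sorted(a)
    k = max(1, int(len(a) ** 0.5))
    pivots = sorted(random.sample(a, k))
    buckets = [[] for _ in range(k + 1)]
    for x in a:
        buckets[bisect_left(pivots, x)].append(x)  # bucket j holds (p_{j-1}, p_j]
    if max(len(b) for b in buckets) == len(a):     # all-equal input: avoid recursing forever
        return sorted(a)
    out = []
    for b in buckets:
        out.extend(samplesort(b, cutoff))
    return out
```

Because the sample is sorted first and the buckets are well balanced with high probability, almost all of the comparison cost goes into the within-bucket sorts, which is what makes the near-optimal n log n + lower-order comparison counts possible.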
Sorting and Selection on Interconnection Networks
DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 1995
Cited by 21 (15 self)
ABSTRACT. In this paper we identify techniques that have been employed in the design of sorting and selection algorithms for various interconnection networks. We consider both randomized and deterministic techniques. Interconnection networks of interest include the mesh, the mesh with fixed and reconfigurable buses, the hypercube family, and the star graph. For the sake of comparison, we also list PRAM algorithms.
Structural Parallel Algorithmics
, 1991
Cited by 11 (4 self)
The first half of the paper is a general introduction which emphasizes the central role that the PRAM model of parallel computation plays in algorithmic studies for parallel computers. Some of the collective knowledge base on non-numerical parallel algorithms can be characterized in a structural way. Each structure relates a few problems and techniques to one another, from the basic to the more involved. The second half of the paper provides a bird's-eye view of such structures for: (1) list, tree and graph parallel algorithms; (2) very fast deterministic parallel algorithms; and (3) very fast randomized parallel algorithms.

1 Introduction
Parallelism is a concern that is missing from "traditional" algorithmic design. Unfortunately, it turns out that most efficient serial algorithms become rather inefficient parallel algorithms. The experience is that the design of parallel algorithms requires new paradigms and techniques, offering an exciting intellectual challenge. We note that it had...
Parametric search made practical
SoCG: 18th Symposium on Computational Geometry, 2002
Cited by 10 (1 self)
In this paper we show that in sorting-based applications of parametric search, Quicksort can replace the parallel sorting algorithms that are usually advocated, and we argue that Cole’s optimization of certain parametric-search algorithms may be unnecessary under realistic assumptions about the input. Furthermore, we present a generic, flexible, and easy-to-use framework that greatly simplifies the implementation of algorithms based on parametric search. We use our framework to implement an algorithm that solves the Fréchet-distance problem. The implementation based on parametric search is faster than the binary-search approach that is often suggested as a practical replacement for the parametric-search technique.
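For contrast, the binary-search alternative mentioned in the last sentence can be sketched in a few lines: collect the candidate critical values of the parameter, sort them, and binary-search using the decision procedure (a generic sketch under the usual monotonicity assumption; the names are illustrative):

```python
def smallest_feasible(candidates, feasible):
    """Return the smallest candidate t with feasible(t) True, assuming
    feasible is monotone (False...False True...True over the sorted
    candidates).  Makes O(log n) calls to the decision procedure."""
    cs = sorted(candidates)
    lo, hi, ans = 0, len(cs) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if feasible(cs[mid]):
            ans, hi = cs[mid], mid - 1
        else:
            lo = mid + 1
    return ans
```

Deciding whether the Fréchet distance is at most t is such a monotone predicate, so this approach works whenever the candidate critical values can be enumerated cheaply; parametric search avoids materializing them at all.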
New lower bounds for parallel computation
In Proceedings of the 18th Annual ACM Symposium on Theory of Computing, 1986
Cited by 7 (0 self)
Abstract. Lower bounds are proven on the parallel-time complexity of several basic functions on the most powerful concurrent-read concurrent-write PRAM with unlimited shared memory and unlimited power of individual processors (denoted by PRIORITY(∞)): (1) It is proved that with a number of processors polynomial in n, Ω(log n) time is needed for addition, multiplication or bitwise OR of n numbers, when each number has n^ε bits. Hence even the bit complexity (i.e., the time complexity as a function of the total number of bits in the input) is logarithmic in this case. This improves a beautiful result of Meyer auf der Heide and Wigderson [22]. They proved a log n lower bound using Ramsey-type techniques. Using Ramsey theory, it is possible to get an upper bound on the number of bits in the inputs used. However, for the case of polynomially many processors, this upper bound is more than a polynomial in n. (2) An Ω(log n) lower bound is given for PRIORITY(∞) with n^{O(1)} processors on a function with inputs from {0, 1}, namely for the function f(x_1, ..., x_n) = Σ_{i=1}^{n} x_i a^i, where a is fixed and x_i ∈ {0, 1}. (3) Finally, by a new efficient simulation of PRIORITY(∞) by unbounded fan-in circuits, it is proven that a PRIORITY(∞) with less than an exponential number of processors cannot compute PARITY in constant time, and with n^{O(1)} processors Ω(√(log n)) time is needed. The simulation technique is of...
Optimal Facility Location under Various Distance Functions
Cited by 7 (3 self)
We present efficient algorithms for two problems of facility location. In both problems we want to determine the location of a single facility with respect to n given sites. In the first we seek a location that maximizes a weighted distance function between the facility and the sites, and in the second we find a location that minimizes the sum (or sum of the squares) of the distances of k of the sites from the facility.

1. Introduction
Facility location is a classical problem of operations research that has also been examined in the computational geometry community. The task is to position a point in the plane (the facility) such that a distance between the facility and given points (sites) is minimized or maximized. Most of the problems described in the facility location literature are concerned with finding a "desirable" facility location: the goal is to minimize a distance function between the facility (e.g., a service) and the sites (e.g., the customers). Just as important i...
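The sum-of-distances objective in the second problem has, in the special case k = n (the classical 1-median), a well-known iterative baseline: Weiszfeld's algorithm, sketched below for intuition. The paper's algorithms for general k are different and come with worst-case guarantees; this is only an illustrative heuristic.

```python
import math

def weiszfeld(sites, iters=200):
    """Approximate the geometric 1-median: the point minimizing the sum of
    Euclidean distances to all sites.  Each step replaces the current point
    by a distance-weighted average of the sites, starting from the centroid."""
    x = sum(p[0] for p in sites) / len(sites)
    y = sum(p[1] for p in sites) / len(sites)
    for _ in range(iters):
        nx = ny = den = 0.0
        for (px, py) in sites:
            d = math.hypot(x - px, y - py)
            if d < 1e-12:          # iterate landed exactly on a site: stop
                return (x, y)
            nx += px / d
            ny += py / d
            den += 1.0 / d
        x, y = nx / den, ny / den
    return (x, y)
```

Each update is a fixed-point step for the first-order optimality condition of the sum-of-distances objective; convergence is typically fast in practice, though the sum-of-squares variant of the problem has a closed-form optimum (the centroid) and needs no iteration.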
WorkTimeOptimal Parallel Algorithms for String Problems (Extended Abstract)
In Proc. 27th ACM Symp. on the Theory of Computing, 1995
Cited by 5 (2 self)
Artur Czumaj, Zvi Galil, Leszek Gąsieniec, Kunsoo Park, Wojciech Plandowski

Abstract. A parallel algorithm is work-optimal if it uses the smallest possible work; a work-optimal algorithm is work-time optimal if it also uses the smallest possible time. We design work-time-optimal algorithms for a number of string processing problems on the EREW PRAM and the hypercube. They include string matching and two-dimensional pattern matching. No such algorithms have been known before for any of these problems.

1 Introduction
We call a parallel algorithm work-optimal if it has the smallest possible work. Notice that this definition is stricter than the one requiring only the same work as the best known sequential algorithm, and it requires proving a lower bound. In most cases work-optimality means either linear work or O(n log n) work, because no higher lower bounds are known. We call a work-optimal algo...