Results 1–7 of 7
Towards overcoming the transitive-closure bottleneck: efficient parallel algorithms for planar digraphs
J. Comput. System Sci., 1993
Abstract

Cited by 11 (1 self)
Abstract. Currently, there is a significant gap between the best sequential and parallel complexities of many fundamental problems related to digraph reachability. This complexity bottleneck essentially reflects a seemingly unavoidable reliance on transitive-closure techniques in parallel algorithms for digraph reachability. To pinpoint the nature of the bottleneck, we develop a collection of polylog-time reductions among reachability problems. These reductions use only linear processors and work for general graphs. Furthermore, for planar digraphs, we give polylog-time algorithms for the following problems: (1) directed ear decomposition, (2) topological ordering, (3) digraph reachability, (4) descendant counting, and (5) depth-first search. These algorithms use only linear processors and therefore reduce the complexity to within a polylog factor of optimal.
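The transitive-closure technique this abstract identifies as the bottleneck can be illustrated with a minimal sketch (a hypothetical example, not the paper's algorithm): reachability is obtained by repeatedly squaring the boolean adjacency matrix, so O(log n) squarings suffice; but each squaring is a matrix product, which is exactly the source of the work gap versus O(n + m) sequential graph search.

```python
# Sketch: digraph reachability via transitive closure by repeated
# boolean matrix squaring. Each squaring composes paths (doubling the
# reachable path length), so O(log n) squarings reach the closure;
# the n^3-work matrix products are the "bottleneck" versus O(n + m)
# sequential search. Illustrative only, not the paper's method.

def transitive_closure(adj):
    """adj: n x n boolean adjacency matrix (list of lists of bools)."""
    n = len(adj)
    # Start from A OR I, so each vertex reaches itself.
    reach = [[adj[i][j] or i == j for j in range(n)] for i in range(n)]
    steps = 1
    while steps < n:  # O(log n) squarings suffice
        reach = [[any(reach[i][k] and reach[k][j] for k in range(n))
                  for j in range(n)] for i in range(n)]
        steps *= 2
    return reach

# Path 0 -> 1 -> 2 -> 3, so 0 reaches 3 but not vice versa:
A = [[False, True, False, False],
     [False, False, True, False],
     [False, False, False, True],
     [False, False, False, False]]
R = transitive_closure(A)
print(R[0][3])  # True
```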
Nested dissection: A survey and comparison of various nested dissection algorithms
, 1992
Abstract

Cited by 8 (1 self)
Methods for solving sparse linear systems of equations can be categorized under two broad classes: direct and iterative. Direct methods are methods based on Gaussian elimination. This report discusses one such direct method, namely nested dissection. Nested dissection, originally proposed by Alan George, is a technique for solving sparse linear systems efficiently. This report is a survey of some of the work in the area of nested dissection and attempts to put it together using a common framework.
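As a hypothetical illustration of the idea behind nested dissection (a minimal sketch, not George's algorithm), the code below orders a simple chain graph by recursively numbering a middle separator vertex last; eliminating either half then creates no fill between the halves. George's method applies the same idea to 2-D grids with cross-shaped separators.

```python
# Sketch of a nested-dissection elimination ordering on a 1-D chain
# (path graph): recursively split the vertex range at a middle separator
# and number separator vertices last. Hypothetical minimal illustration.

def nested_dissection_order(lo, hi, order):
    """Append an elimination order for chain vertices lo..hi-1 to `order`."""
    if hi - lo <= 2:                 # small base case: any order works
        order.extend(range(lo, hi))
        return
    mid = (lo + hi) // 2             # single-vertex separator of the chain
    nested_dissection_order(lo, mid, order)       # left half first
    nested_dissection_order(mid + 1, hi, order)   # then right half
    order.append(mid)                             # separator eliminated last

order = []
nested_dissection_order(0, 7, order)
print(order)  # the top-level separator, vertex 3, comes last
```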
A Dynamic Separator Algorithm
In Proc. 3rd Worksh. Algorithms and Data Structures, 1993
Abstract

Cited by 5 (0 self)
Our work is based on the pioneering work in sphere separators done by Miller, Teng, Vavasis et al. [8, 12], who gave efficient static (fixed-input) algorithms for finding sphere separators of size s(n) = O(n^((d-1)/d)) for a set of points in R^d. We present ...
On Gazit and Miller’s parallel algorithm for planar separators: achieving greater efficiency through random sampling
Abstract
We show how to obtain a work-efficient parallel algorithm for finding a planar separator. The algorithm requires O(n^ε) time for any given positive constant ε.
Fast and Efficient Linear Programming and Linear Least-Squares Computations
, 1986
Abstract
We present a new parallel algorithm for computing a least-squares solution to a sparse overdetermined system of linear equations Ax ≈ b such that the m × n matrix A is sparse and the graph G = (V, E) of the matrix has an s(m+n)-separator family, i.e. either |V| ≤ n_0 for a fixed constant n_0 or, by deleting a separator subset S of vertices of size ≤ s(m+n), G can be partitioned into two disconnected subgraphs having vertex sets V_1, V_2 of size ≤ (2/3)(m+n), and each of the two resulting subgraphs induced by the vertex sets S ∪ V_i, i = 1, 2, can be recursively s(|S ∪ V_i|)-separated in a similar way. Our algorithm uses O(log(m+n) log² s(m+n)) steps and ≤ s³(m+n) processors; it relies on our recent parallel algorithm for solving sparse linear systems and has several immediate applications of interest, in particular to mathematical programming, to sparse nonsymmetric systems of linear equations and to path algebra computations. We most closely examine the impact on the linear programming problem (LPP), which requires maximizing c^T y subject to Ay ≤ b, y ≥ 0, where A is an m × n matrix. Hereafter it is assumed that m > n. The recent algorithm by Karmarkar gives the best-known upper estimate [O(m^3.5 L) arithmetic operations, where L is the input size] for the cost of the solution of this problem in the worst case. We prove an asymptotic improvement of that result in the case where the graph of the associated matrix H has an s(m+n)-separator family; then our algorithm can be implemented using O(mL log m log² s(m+n)) parallel arithmetic steps, s³(m+n) processors and a total of O(mL s³(m+n) log m log² s(m+n)) arithmetic operations. In many cases of practical importance this is a considerable improvement on the known estimates: for example, s(m + ...
unknown title
Abstract
In particular, solving a linear system Ax = b in the usual sense is a simplification of the LLSP where the output is either the answer that min_x ||Ax − b|| > 0 or, otherwise, a vector x* such that Ax* − b = 0. The first objective of this paper is to reexamine the time complexity of the LLSP and to indicate the possibility of speeding up its solution using the parallel algorithms of Ref. [2] combined with the techniques of blow-up transformations and variable diagonals and with the Sherman-Morrison-Woodbury formula. As a major consequence [which may become decisive in determining the best algorithm for the linear programming problem (LPP), at least over some important classes of instances of that problem], we will substantially speed up Karmarkar's algorithm [3] for the LPP, because solving the LLSP constitutes the most costly part of every iteration of that algorithm. Furthermore, we will modify Karmarkar's algorithm and solve an LPP with a dense m × n input matrix using
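The LLSP versus linear-system distinction can be made concrete with a minimal example (an assumed illustration, unrelated to the papers' parallel methods): for an inconsistent overdetermined system, min_x ||Ax − b|| > 0, and the least-squares solution satisfies the normal equations A^T A x = A^T b. The sketch below solves them directly for a two-column design matrix.

```python
# Minimal sketch of the LLSP: fit y ~ c0 + c1*x to points that lie on no
# single line, i.e. an overdetermined system Ax = b with no exact
# solution. The least-squares solution satisfies the normal equations
# A^T A x = A^T b; here A = [1 | x] has two columns, so the normal
# equations are a 2x2 system solved by Cramer's rule. Dense and
# sequential -- the papers' point is doing this fast in parallel for
# sparse A with good separators.

def lstsq_line(xs, ys):
    """Least-squares line fit via the normal equations."""
    n = len(xs)
    # Entries of A^T A and A^T b for the 2-column design matrix.
    sx, sxx = sum(xs), sum(x * x for x in xs)
    sy, sxy = sum(ys), sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx
    c0 = (sxx * sy - sx * sxy) / det
    c1 = (n * sxy - sx * sy) / det
    return c0, c1

# Three points almost on the line y = 1 + 2x (inconsistent system):
c0, c1 = lstsq_line([0.0, 1.0, 2.0], [1.0, 3.1, 4.9])
print(round(c0, 2), round(c1, 2))
```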
A Divide-and-Conquer Approach to Shortest Paths in Planar Layered Digraphs
, 1992
Abstract
Computing shortest paths in a directed graph has received considerable attention in the sequential RAM model of computation. However, developing a polylog-time parallel algorithm that is close to the sequential optimal in terms of the total work done remains an elusive goal. We present a first step in this direction by showing that for an n-node planar layered digraph with nonnegative edge weights the shortest path between any two vertices can be computed in O(log³ n) time with n processors in a CREW PRAM. A CRCW version of our algorithm runs in time O(log² n log log n) and uses n log n/log log n processors. Our results make use of the existence of special kinds of separators in planar layered digraphs, called one-way separators, to implement a divide-and-conquer solution.
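For contrast with the parallel bounds above, the sequential baseline is simple: in a layered digraph, a single dynamic-programming sweep over the layers computes single-source shortest paths in linear time. This is a minimal sketch under assumed input conventions, not the paper's algorithm; the one-way-separator machinery is what makes the computation parallelizable.

```python
# Sequential baseline for shortest paths in a layered digraph, where
# every edge goes from layer i to layer i+1: one dynamic-programming
# sweep over the layers. Hypothetical minimal illustration.

INF = float("inf")

def layered_shortest_paths(layers, edges, source):
    """layers: list of lists of vertices; edges: {(u, v): weight}."""
    dist = {v: INF for layer in layers for v in layer}
    dist[source] = 0.0
    for i in range(len(layers) - 1):        # sweep layer by layer
        for u in layers[i]:
            for v in layers[i + 1]:
                w = edges.get((u, v))
                if w is not None and dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w   # relax edge u -> v
    return dist

# Two routes from 'a' to 'd'; the cheaper one has total weight 3:
layers = [["a"], ["b", "c"], ["d"]]
edges = {("a", "b"): 1.0, ("a", "c"): 2.0, ("b", "d"): 2.0, ("c", "d"): 5.0}
d = layered_shortest_paths(layers, edges, "a")
print(d["d"])  # 3.0
```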