Results 11–20 of 40
Towards overcoming the transitive-closure bottleneck: efficient parallel algorithms for planar digraphs
 J. Comput. System Sci.
, 1993
Cited by 11 (1 self)
Abstract. Currently, there is a significant gap between the best sequential and parallel complexities of many fundamental problems related to digraph reachability. This complexity bottleneck essentially reflects a seemingly unavoidable reliance on transitive-closure techniques in parallel algorithms for digraph reachability. To pinpoint the nature of the bottleneck, we develop a collection of polylog-time reductions among reachability problems. These reductions use only linear processors and work for general graphs. Furthermore, for planar digraphs, we give polylog-time algorithms for the following problems: (1) directed ear decomposition, (2) topological ordering, (3) digraph reachability, (4) descendant counting, and (5) depth-first search. These algorithms use only linear processors and therefore reduce the complexity to within a polylog factor of optimal.
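The paper's polylog-time parallel algorithms are well beyond a short sketch, but the reachability problem they target (item 3 above) is easy to state in code. The following is a minimal sequential sketch, assuming the digraph is given as an adjacency dict; the name `reachable` and the representation are illustrative, not from the paper:

```python
from collections import deque

def reachable(adj, source):
    """Return the set of vertices reachable from `source` in a digraph
    given as an adjacency dict {vertex: [successors]}."""
    seen = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen
```

This sequential version runs in linear time; the point of the abstract is that matching that bound to within polylog factors on a parallel machine is hard in general, but achievable for planar digraphs.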
Šimša, J.: How to Order Vertices for Distributed LTL Model Checking Based on Accepting Predecessors
 In: Proceedings of the 4th International Workshop on Parallel and Distributed Methods in verifiCation (PDMC 2005)
, 2005
Cited by 11 (4 self)
Distributed automata-based LTL model checking relies on algorithms for finding accepting cycles in a Büchi automaton. The approach to distributed accepting cycle detection as presented in [9] is based on maximal accepting predecessors. The ordering of accepting states (hence the maximality) is one of the main factors affecting the overall complexity of model checking, as an imperfect ordering can force numerous re-explorations of the automaton. This paper addresses the problem of finding an optimal ordering, proves its hardness, and gives several heuristics for finding an optimal ordering in the distributed environment. We compare the heuristics both theoretically and experimentally to find out which of them work well. Key words: LTL model checking, Büchi automata, optimal ordering
A Theory of Strict P-Completeness
 STACS 1992, in Lecture Notes in Computer Science 577
, 1992
Cited by 10 (0 self)
A serious limitation of the theory of P-completeness is that it fails to distinguish between those P-complete problems that do have polynomial speedup on parallel machines and those that don't. We introduce the notion of strict P-completeness and develop tools to prove precise limits on the possible speedups obtainable for a number of P-complete problems. Key words. Parallel computation; P-completeness. Subject classifications. 68Q15, 68Q22. 1. Introduction A major goal of the theory of parallel computation is to understand how much speedup is obtainable in solving a problem on parallel machines over sequential machines. The theory of P-completeness has successfully classified many problems as unlikely to have polylog-time algorithms on a parallel machine with a polynomial number of processors. However, the theory fails to distinguish between those P-complete problems that do have significant, polynomial speedup on parallel machines and those that don't. Yet this distinction is e...
A Model Classifying Algorithms as Inherently Sequential with Applications to Graph Searching
, 1992
Cited by 7 (3 self)
A model is proposed that can be used to classify algorithms as inherently sequential. The model captures the internal computations of algorithms. Previous work in complexity theory has focused on the solutions algorithms compute. Direct comparison of algorithms within the framework of the model is possible. The model is useful for identifying hard-to-parallelize constructs that should be avoided by parallel programmers. The model's utility is demonstrated via applications to graph searching. A stack breadth-first search (BFS) algorithm is analyzed and proved inherently sequential. The proof technique used in the reduction is a new one. The result for stack BFS sharply contrasts with a result showing that a queue-based BFS algorithm is in NC. An NC algorithm to compute greedy depth-first search numbers in a dag is presented, and a result proving that a combination search strategy called breadth-depth search is inherently sequential is also given.
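The contrast the abstract draws between queue-based and stack-based BFS can be made concrete. The sketch below is illustrative, not the paper's formal model: the two functions differ only in which end of the frontier they pop, yet they produce very different visit orders, and it is the order produced by the stack variant that the paper proves inherently sequential to reproduce.

```python
from collections import deque

def queue_bfs(adj, source):
    """Standard queue-based BFS: vertices are visited level by level."""
    order, seen, frontier = [], {source}, deque([source])
    while frontier:
        u = frontier.popleft()          # FIFO: oldest frontier vertex first
        order.append(u)
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return order

def stack_bfs(adj, source):
    """'Stack BFS': identical code except the frontier is popped LIFO,
    which yields a depth-first-flavoured visit order."""
    order, seen, frontier = [], {source}, [source]
    while frontier:
        u = frontier.pop()              # LIFO: newest frontier vertex first
        order.append(u)
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return order
```

On the digraph `{0: [1, 2], 1: [3], 2: [4]}`, `queue_bfs` visits `0, 1, 2, 3, 4` while `stack_bfs` visits `0, 2, 4, 1, 3`.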
Lazy Depth-First Search and Linear Graph Algorithms in Haskell
 Glasgow Workshop on Functional Programming
, 1994
Cited by 6 (0 self)
Depth-first search is the key to a wide variety of graph algorithms. In this paper we explore the implementation of depth-first search in a lazy functional language. For the first time in such languages we obtain a linear-time implementation. But we go further. Unlike traditional imperative presentations, algorithms are constructed from individual components, which may be reused to create new algorithms. Furthermore, the style of program is quite amenable to formal proof, which we exemplify through a calculational-style proof of a strongly-connected components algorithm. 1 Introduction Graph algorithms have long been a challenge to programmers of lazy functional languages. It has not been at all clear how to express such algorithms without using side effects to achieve efficiency. For example, many texts provide implementations of search algorithms which are quadratic in the size of the graph (see Paulson (1991), Holyer (1991), or Harrison (1993)), compared with the standard linear im...
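The paper's lazy Haskell formulation is not reproduced here; as a reference point for the kind of strongly-connected components algorithm it proves correct, here is a sketch of the classic two-pass, DFS-based approach (Kosaraju's algorithm). This is an illustration of the general technique, not the paper's code, and it assumes every vertex appears as a key of the adjacency dict:

```python
def kosaraju_scc(adj):
    """Strongly connected components via two depth-first passes:
    pass 1 records a finishing order, pass 2 runs DFS over the
    transposed graph in reverse finishing order. Recursive, so
    suitable only for small graphs in this sketch."""
    verts = list(adj)
    seen, order = set(), []

    def dfs1(u):
        seen.add(u)
        for v in adj.get(u, []):
            if v not in seen:
                dfs1(v)
        order.append(u)          # post-order: u finishes here

    for u in verts:
        if u not in seen:
            dfs1(u)

    # Build the transposed graph: every edge u -> v becomes v -> u.
    radj = {u: [] for u in verts}
    for u in verts:
        for v in adj[u]:
            radj[v].append(u)

    seen.clear()
    comps = []

    def dfs2(u, comp):
        seen.add(u)
        comp.append(u)
        for v in radj[u]:
            if v not in seen:
                dfs2(v, comp)

    for u in reversed(order):    # decreasing finish time
        if u not in seen:
            comp = []
            dfs2(u, comp)
            comps.append(comp)
    return comps
```

Each call to `dfs2` in the second pass collects exactly one strongly connected component, giving overall linear time in the size of the graph.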
Combinatorial problems in solving linear systems
, 2009
Cited by 5 (3 self)
Numerical linear algebra and combinatorial optimization are vast subjects, as is their interaction. In virtually all cases there should be a notion of sparsity for a combinatorial problem to arise. Sparse matrices therefore form the basis of the interaction of these two seemingly disparate subjects. As the core of many of today’s numerical linear algebra computations consists of the solution of sparse linear systems by direct or iterative methods, we survey some combinatorial problems, ideas, and algorithms relating to these computations. On the direct methods side, we discuss issues such as matrix ordering; bipartite matching and matrix scaling for better pivoting; and task assignment and scheduling for parallel multifrontal solvers. On the iterative methods side, we discuss preconditioning techniques including incomplete factorization preconditioners, support graph preconditioners, and algebraic multigrid. In a separate part, we discuss the block triangular form of sparse matrices.
A Divide-and-Conquer Algorithm for Identifying Strongly Connected Components
Cited by 4 (0 self)
The standard serial algorithm for strongly connected components has linear complexity and is based on depth-first search. Unfortunately, depth-first search is difficult to parallelize. We describe a divide-and-conquer algorithm for this problem which has significantly greater potential for parallelization. We show the expected serial running time of our algorithm to be O(|E| log |V|). We also present a variant of our algorithm that has O(|E| log |V|) worst-case complexity. Key words. Strongly connected components, divide-and-conquer, parallel algorithm, discrete ordinates method AMS subject classifications. 05C85, 05C38, 68W10, 68W20 1. Introduction. A strongly connected component of a directed graph is a maximal subset of vertices containing a directed path from each vertex to all others in the subset. The vertices of any directed graph can be partitioned into a set of disjoint strongly connected components. This decomposition is a fundamental tool in graph theory with app...
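The divide-and-conquer idea can be sketched directly: the intersection of the forward- and backward-reachable sets of a pivot vertex is exactly the pivot's strongly connected component, and the three leftover vertex sets can then be processed independently (hence the parallelization potential). The following is a minimal sequential sketch under that reading; the names and graph representation are illustrative, not the paper's:

```python
def reach(adj, start):
    """Iterative reachability: all vertices reachable from `start`."""
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def dcsc(adj, verts=None):
    """Divide-and-conquer SCC: fwd(pivot) & bwd(pivot) is one component;
    recurse on the three disjoint remainders."""
    if verts is None:
        verts = set(adj)
    if not verts:
        return []
    # Restrict the graph (and its transpose) to the current vertex set.
    sub = {u: [v for v in adj.get(u, []) if v in verts] for u in verts}
    rsub = {u: [] for u in verts}
    for u in verts:
        for v in sub[u]:
            rsub[v].append(u)
    pivot = next(iter(verts))
    fwd, bwd = reach(sub, pivot), reach(rsub, pivot)
    scc = fwd & bwd                      # the pivot's component
    comps = [scc]
    # The three remaining pieces share no component; in a parallel
    # setting these recursive calls could run concurrently.
    for rest in (fwd - scc, bwd - scc, verts - fwd - bwd):
        comps.extend(dcsc(adj, rest))
    return comps
```

Every recursive call removes at least the pivot's component, so the recursion terminates; the paper's analysis bounds the expected depth of this recursion.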
E.: Generating Counter-Examples Through Randomized Guided Search
 In: SPIN 2007, LNCS
Cited by 4 (0 self)
Abstract. Computational resources are increasing rapidly with the explosion of multicore processors readily available from major vendors. Model checking needs to harness these resources to help make it more effective in practical verification. Directed model checking uses heuristics in a guided search to rank states in order of interest. Randomizing guided search makes it possible to harness computation nodes by running independent searches in parallel in an effort to discover counterexamples to correctness. Initial attempts at adding randomization to guided search have achieved very limited success. In this work, we present a new low-cost randomized guided search technique that shuffles states in the priority queue with equivalent heuristic ties. We show in an empirical study that randomized guided search, overall, decreases the number of states generated before error discovery when compared to a guided search using the same heuristic. To further evaluate the performance gains of randomized guided search using a particular heuristic, we compare it with randomized depth-first search. Randomized depth-first search shuffles transitions and generally improves error discovery over the default transition order implemented by the model checker. In the context of evaluating randomized guided search, a randomized depth-first search provides a lower bound for establishing performance gains in directed model checking. In the empirical study, we show that with the correct heuristic, randomized guided search outperforms randomized depth-first search both in effectively finding counterexamples and in generating shorter counterexamples.
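The idea of "shuffling states in the priority queue with equivalent heuristic ties" can be approximated with an ordinary binary heap by giving each entry a random secondary key, so that equal-heuristic states pop in random order. This is a hypothetical sketch; the function and parameter names (`succ`, `is_error`, `heuristic`) are illustrative, not the paper's API:

```python
import heapq
import random

def randomized_guided_search(succ, start, is_error, heuristic, seed=0):
    """Best-first search that breaks heuristic ties randomly.
    Heap entries are (heuristic value, random tiebreak, state), so
    states with equal heuristic values are ordered by the random key;
    different seeds give independent searches that could run in
    parallel across computation nodes."""
    rng = random.Random(seed)
    frontier = [(heuristic(start), rng.random(), start)]
    seen = {start}
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if is_error(state):
            return state          # counterexample state found
        for nxt in succ(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), rng.random(), nxt))
    return None                   # state space exhausted, no error
```

Because only the tiebreak key is randomized, the search still expands states in heuristic order; runs with different seeds explore tied states in different orders, which is what the empirical study exploits.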
Planar Strong Connectivity Helps in Parallel Depth-First Search
 SIAM Journal on Computing
, 1992
Cited by 3 (0 self)
This paper shows that for a strongly connected planar directed graph of size n, a depth-first search tree rooted at a specified vertex can be computed in O(log^5 n) time using n / log n processors. Previously, for planar directed graphs that may not be strongly connected, the best depth-first search algorithm runs in O(log^10 n) time using n processors. Both algorithms run on a parallel random access machine that allows concurrent reads and concurrent writes in its shared memory and, in case of a write conflict, permits an arbitrary processor to succeed. Key words. linear-processor NC algorithms, graph separators, depth-first search, planar directed graphs, strong connectivity, bubble graphs, st-graphs AMS(MOS) subject classification. 68Q10, 05C99 1. Introduction. Depth-first search is one of the most useful tools in graph theory [32], [4]. The depth-first search problem is the following: given a graph and a distinguished vertex, construct a tree that corresponds to performing de...