Results 1–10 of 10
Optimal and Sublogarithmic Time Randomized Parallel Sorting Algorithms
 SIAM Journal on Computing
, 1989
Abstract

Cited by 64 (12 self)
We assume a parallel RAM model which allows both concurrent reads and concurrent writes of a global memory. Our main result is an optimal randomized parallel algorithm for INTEGER SORT (i.e., for sorting n integers in the range [1, n]). Our algorithm costs only logarithmic time and is the first known that is optimal: the product of its time and processor bounds is upper bounded by a linear function of the input size. We also give a deterministic sublogarithmic time algorithm for prefix sum. In addition, we present a sublogarithmic time algorithm for obtaining a random permutation of n elements in parallel. Finally, we present sublogarithmic time algorithms for GENERAL SORT and INTEGER SORT. Our sublogarithmic GENERAL SORT algorithm is also optimal. Key words: randomized algorithms, parallel sorting, parallel random access machines, random permutations, radix sort, prefix sum, optimal algorithms. AMS(MOS) subject classifications: 68Q25. A preliminary version of this paper ...
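For concreteness, the sequential baseline of the INTEGER SORT problem described in this abstract is a counting sort: it runs in O(n) time, which matches the linear time-processor product the paper's optimal parallel algorithm achieves. This is a sketch of the problem, not of the paper's randomized PRAM algorithm; the helper name `integer_sort` is illustrative.

```python
def integer_sort(a):
    """Counting sort for n integers in the range [1, n].

    Sequential sketch of the INTEGER SORT problem; not the paper's
    parallel algorithm. Runs in O(n) time and O(n) space.
    """
    n = len(a)
    count = [0] * (n + 1)          # count[v] = occurrences of value v
    for v in a:
        count[v] += 1
    out = []
    for v in range(1, n + 1):      # emit each value count[v] times
        out.extend([v] * count[v])
    return out
```

An optimal parallel algorithm must spread this linear total work over its processors so that time × processors stays O(n).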
Graph partitioning into isolated, high conductance clusters: theory, computation and . . .
, 2008
Fast Generation of Random Permutations via Networks Simulation
, 1998
Abstract

Cited by 8 (4 self)
We consider the classical problem of generating random permutations with the uniform distribution. That is, we require that for an arbitrary permutation π of n elements, with probability 1/n! the machine halts with the ith output cell containing π(i), for 1 ≤ i ≤ n. We study this problem on two models of parallel computation: the CREW PRAM and the EREW PRAM. The main result of the paper is an algorithm for generating random permutations that runs in O(log log n) time and uses O(n^{1+o(1)}) processors on the CREW PRAM. This is the first o(log n)-time CREW PRAM algorithm for this problem. On the EREW PRAM we present a simple algorithm that generates a random permutation in time O(log n) using n processors and O(n) space. This algorithm matches the running time and processor count of the best previously known algorithms for the CREW PRAM, and performs better as far as memory usage is concerned. The common and novel feature of both our algorithms is to design first a s...
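The uniformity requirement stated here (each of the n! permutations with probability exactly 1/n!) is met sequentially in O(n) time by the Fisher-Yates shuffle, which the parallel network-simulation algorithms of this paper aim to beat in depth. A minimal sketch, with a hypothetical `random_permutation` helper:

```python
import random

def random_permutation(n):
    """Fisher-Yates shuffle: uniformly random permutation of 1..n.

    Sequential O(n) baseline; each of the n! permutations occurs
    with probability exactly 1/n!.
    """
    p = list(range(1, n + 1))
    for i in range(n - 1, 0, -1):
        j = random.randint(0, i)   # uniform index in [0, i]
        p[i], p[j] = p[j], p[i]    # place a random remaining element at i
    return p
```

The design choice that makes this uniform is that position i is swapped with a uniformly chosen index among the i+1 not-yet-fixed positions, giving n · (n-1) · … · 1 equally likely outcomes.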
An Optimal Parallel Matching Algorithm for Cographs
 Journal of Parallel and Distributed Computing
, 1994
Abstract

Cited by 5 (1 self)
The class of cographs, or complement-reducible graphs, arises naturally in many different areas of applied mathematics and computer science. We show that the problem of finding a maximum matching in a cograph can be solved optimally in parallel by reducing it to parenthesis matching. With an n-vertex cograph G represented by its parse tree as input, our algorithm finds a maximum matching in G in O(log n) time using O(n/log n) processors in the EREW PRAM model. Key words: list ranking, tree contraction, matching, parenthesis matching, scheduling, operating systems, cographs, parallel algorithms, EREW PRAM. 1. Introduction. A well-known class of graphs arising in a wide spectrum of practical applications [1,2,7] is the class of cographs, or complement-reducible graphs. The cographs are defined recursively as follows: a single-vertex graph is a cograph; if G is a cograph, then its complement is also a cograph; if G and H are cographs, then their union is also a cog...
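The target of the reduction above, parenthesis matching, is trivial sequentially: a single stack pass pairs each opening parenthesis with its closing partner in O(n) time. A sketch (the `match_parentheses` helper is illustrative; the paper's contribution is doing this work-optimally on an EREW PRAM):

```python
def match_parentheses(s):
    """Stack-based sequential parenthesis matching.

    For a balanced string of '(' and ')', return a dict pairing each
    opening index with its matching closing index. O(n) time.
    """
    stack, pairs = [], {}
    for i, c in enumerate(s):
        if c == '(':
            stack.append(i)        # remember unmatched open position
        else:                      # c == ')'
            j = stack.pop()        # most recent unmatched '('
            pairs[j] = i
    return pairs
```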
Planar Strong Connectivity Helps in Parallel Depth-First Search
 SIAM Journal on Computing
, 1992
Abstract

Cited by 3 (0 self)
This paper shows that for a strongly connected planar directed graph of size n, a depth-first search tree rooted at a specified vertex can be computed in O(log^5 n) time using n/log n processors. Previously, for planar directed graphs that may not be strongly connected, the best depth-first search algorithm runs in O(log^10 n) time using n processors. Both algorithms run on a parallel random access machine that allows concurrent reads and concurrent writes in its shared memory and, in case of a write conflict, permits an arbitrary processor to succeed. Key words: linear-processor NC algorithms, graph separators, depth-first search, planar directed graphs, strong connectivity, bubble graphs, st-graphs. AMS(MOS) subject classification: 68Q10, 05C99. 1. Introduction. Depth-first search is one of the most useful tools in graph theory [32], [4]. The depth-first search problem is the following: given a graph and a distinguished vertex, construct a tree that corresponds to performing de...
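The problem statement in this abstract (given a graph and a distinguished vertex, build a tree corresponding to a depth-first search) has a simple sequential solution; the difficulty the paper addresses is that DFS is notoriously hard to parallelize. A sequential sketch, assuming adjacency lists and returning a parent map for the DFS tree (names are illustrative):

```python
def dfs_tree(adj, root):
    """Sequential depth-first search returning the DFS tree as a parent map.

    adj: dict mapping vertex -> list of out-neighbors.
    Baseline for the problem the paper parallelizes; O(V + E) time.
    """
    parent = {root: None}
    stack = [root]
    while stack:
        u = stack[-1]
        advanced = False
        for v in adj.get(u, []):
            if v not in parent:
                parent[v] = u      # tree edge u -> v
                stack.append(v)
                advanced = True
                break              # descend before trying siblings
        if not advanced:
            stack.pop()            # all neighbors visited: backtrack
    return parent
```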
Systematic Derivation of Tree Contraction Algorithms
 In Proceedings of INFOCOM '90
, 2005
Abstract

Cited by 3 (3 self)
While tree contraction algorithms play an important role in efficient parallel tree computation, it is difficult to develop such algorithms due to the strict conditions imposed on contracting operators. In this paper, we propose a systematic method of deriving efficient tree contraction algorithms from recursive functions on trees of any shape. We identify a general recursive form that can be parallelized to obtain efficient tree contraction algorithms, and present a derivation strategy for transforming general recursive functions into the parallelizable form. We illustrate our approach by deriving a novel parallel algorithm for the maximum connected-set sum problem on arbitrary trees, the tree version of the famous maximum segment sum problem.
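The maximum connected-set sum problem named above (find a connected set of vertices in a weighted tree with maximum total weight) has a clean sequential recursion, which is the kind of recursive function the paper's derivation starts from. A sketch, assuming the tree is given as a children map and a weight map (names hypothetical; this is the sequential DP, not the derived tree contraction algorithm):

```python
def max_connected_set_sum(children, weight, root):
    """Tree DP for the maximum connected-set sum problem.

    down[v] = best sum of a connected set lying in v's subtree and
    containing v; children with negative best are simply dropped.
    The answer is the maximum of down[v] over all vertices. O(n) time.
    """
    # iterative post-order so children are processed before parents
    order, stack = [], [root]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(children.get(v, []))
    down, best = {}, float('-inf')
    for v in reversed(order):
        down[v] = weight[v] + sum(max(0, down[c])
                                  for c in children.get(v, []))
        best = max(best, down[v])
    return best
```

On a path graph this specializes to the classic maximum segment sum (Kadane) recurrence, which is why the abstract calls it the tree version of that problem.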
Parallel Maximum Independent Set In Convex Bipartite Graphs
, 1996
Abstract

Cited by 1 (0 self)
A bipartite graph G = (V, W, E) is called convex if the vertices in W can be ordered in such a way that the elements of W adjacent to any vertex v ∈ V form an interval (i.e., a sequence of consecutively numbered vertices). Such a graph can be represented in a compact form that requires O(n) space, where n = max{|V|, |W|}. Given a convex bipartite graph G in the compact form, Dekel and Sahni designed an O(log^2 n)-time, n-processor EREW PRAM algorithm to compute a maximum matching in G. We show that the matching produced by their algorithm can be used to construct, optimally in parallel, a maximum set of independent vertices. Our algorithm runs in O(log n) time with n/log n processors on a CRCW PRAM. Keywords: bipartite graphs, convex graphs, independent set, PRAM algorithms. 1. Introduction. An independent set of a graph is a subset of its vertices such that no two vertices in the subset are adjacent. The problem of finding a maximum cardinality independent set (or, for short, the MIS prob...
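The connection this abstract exploits, that a maximum matching yields a maximum independent set in a bipartite graph, is König's theorem: the maximum independent set has size (number of vertices) minus (size of a maximum matching). A sequential illustration using augmenting-path matching (not the paper's parallel convex-bipartite algorithm; `max_independent_set_size` and its parameters are illustrative):

```python
def max_independent_set_size(n_left, n_right, edges):
    """Maximum independent set size in a bipartite graph.

    By König's theorem this equals #vertices - max matching size.
    edges: list of (left_vertex, right_vertex) pairs with
    left vertices 0..n_left-1 and right vertices 0..n_right-1.
    """
    adj = {u: [] for u in range(n_left)}
    for u, w in edges:
        adj[u].append(w)
    match_right = {}               # right vertex -> matched left vertex

    def augment(u, seen):
        # try to match u, possibly re-routing previously matched vertices
        for w in adj[u]:
            if w in seen:
                continue
            seen.add(w)
            if w not in match_right or augment(match_right[w], seen):
                match_right[w] = u
                return True
        return False

    matching = sum(augment(u, set()) for u in range(n_left))
    return n_left + n_right - matching
```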
© 1998 Springer-Verlag New York Inc. Fast Generation of Random Permutations via Networks Simulation
Abstract
We consider the problem of generating random permutations with uniform distribution. That is, we require that for an arbitrary permutation π of n elements, with probability 1/n! the machine halts with the ith output cell containing π(i), for 1 ≤ i ≤ n. We study this problem on two models of parallel computation: the CREW PRAM and the EREW PRAM. The main result of the paper is an algorithm for generating random permutations that runs in O(log log n) time and uses O(n^{1+o(1)}) processors on the CREW PRAM. This is the first o(log n)-time CREW PRAM algorithm for this problem. On the EREW PRAM we present a simple algorithm that generates a random permutation in time O(log n) using n processors and O(n) space. This algorithm outperforms each of the previously known algorithms for the exclusive-write PRAMs. The common and novel feature of both our algorithms is first to design a suitable random switching network generating a permutation and then to simulate this network on the PRAM model in a fast way.
Adaptive Inference for Graphical Models (dissertation, The University of Chicago, Division of the Physical Sciences)
Abstract
Many algorithms and applications involve repeatedly solving a variation of the same statistical inference problem. Adaptive inference is a technique where previous computations are leveraged to speed up the computations after modifying the model parameters. This approach is useful in situations where a slow-to-compute statistical model needs to be rerun after some minor manual changes, or in situations where the model is changing over time in minor ways; for example, while studying the effects of mutations on proteins, one often constructs models that change slowly as mutations are introduced. Another important application of adaptive inference is in situations where the model is being used iteratively; for example, in approximate inference we may want to decompose the problem into simpler inference subproblems that are solved repeatedly and iteratively using adaptive updates. In this thesis we explore both exact inference and iterative approximate inference approaches using adaptive updates. We first present algorithms for adaptive exact inference on general graphs that can be used to efficiently compute marginals and update MAP configurations under arbitrary changes to the input factor graph and its associated elimination tree. We then apply them to approximate inference using a framework called dual decomposition.
Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence. Fast Parallel and Adaptive Updates for Dual-Decomposition Solvers
Abstract
Dual-decomposition (DD) methods are quickly becoming important tools for estimating the minimum energy state of a graphical model. DD methods decompose a complex model into a collection of simpler subproblems that can be solved exactly (such as trees) and that in combination provide upper and lower bounds on the exact solution. Subproblem choice can play a major role: larger subproblems tend to improve the bound more per iteration, while smaller subproblems enable highly parallel solvers and can benefit from reusing past solutions when there are few changes between iterations. We propose an algorithm that can balance many of these aspects to speed up convergence. Our method uses a cluster tree data structure that has been proposed for adaptive exact inference tasks, and we apply it in this paper to dual-decomposition approximate inference. This approach allows us to process large subproblems to improve the bounds at each iteration, while allowing a high degree of parallelizability and taking advantage of subproblems with sparse updates. For both synthetic inputs and a real-world stereo matching problem, we demonstrate that our algorithm is able to achieve significant improvement in convergence time.
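The bound mechanism this abstract relies on can be seen in a toy setting: give each subproblem its own copy of a shared variable, penalize disagreement with a Lagrange multiplier, and update the multiplier by a subgradient step. A minimal sketch over one shared binary variable (illustrative only; not the paper's cluster-tree method, and `dual_decomposition` is a hypothetical name):

```python
def dual_decomposition(f1, f2, steps=50):
    """Toy dual-decomposition subgradient loop.

    Minimizes f1(x) + f2(x) over x in {0, 1} by splitting x into two
    copies x1, x2 and penalizing disagreement with multiplier lam.
    By weak duality, every dual value is a lower bound on the minimum.
    """
    lam = 0.0
    best_lower = float('-inf')
    for t in range(1, steps + 1):
        # each subproblem solved exactly on its own copy of x
        x1 = min((0, 1), key=lambda x: f1(x) + lam * x)
        x2 = min((0, 1), key=lambda x: f2(x) - lam * x)
        lower = f1(x1) + lam * x1 + f2(x2) - lam * x2
        best_lower = max(best_lower, lower)
        lam += (x1 - x2) / t       # subgradient step on the disagreement
    return best_lower
```

When the copies agree (x1 == x2), the dual bound is tight at that point; the paper's contribution is organizing many such subproblem updates so they run in parallel and reuse work across iterations.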