Results 1–10 of 21
Applying parallel computation algorithms in the design of serial algorithms
J. ACM, 1983
Cited by 246 (7 self)
Abstract. The goal of this paper is to point out that analyses of parallelism in computational problems have practical implications even when multiprocessor machines are not available. This is true because, in many cases, a good parallel algorithm for one problem may turn out to be useful for designing an efficient serial algorithm for another problem. A unified framework for cases like this is presented. Particular cases, which are discussed in this paper, provide motivation for examining parallelism in sorting, selection, minimum-spanning-tree, shortest-route, max-flow, and matrix multiplication problems, as well as in scheduling and locational problems.
Models of Machines and Computation for Mapping in Multicomputers
1993
Cited by 80 (1 self)
It is now more than a quarter of a century since researchers started publishing papers on mapping strategies for distributing computation across the resources of multiprocessor systems. There exists a large body of literature on the subject, but there is no commonly accepted framework whereby results in the field can be compared, nor is it always easy to assess the relevance of a new result to a particular problem. Furthermore, changes in parallel computing technology have made some of the earlier work less relevant to current multiprocessor systems. Versions of the mapping problem are classified, and research in the field is considered in terms of its relevance to the problem of programming currently available hardware in the form of a distributed-memory, multiple-instruction-stream, multiple-data-stream computer: a multicomputer.
Geometric Range Searching
1994
Cited by 50 (2 self)
In geometric range searching, algorithmic problems of the following type are considered: given an n-point set P in the plane, build a data structure so that, given a query triangle R, the number of points of P lying in R can be determined quickly. Problems of this type are of crucial importance in computational geometry, as they can be used as subroutines in many seemingly unrelated algorithms. We present a survey of results and main techniques in this area.
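To make the query type concrete, here is a minimal brute-force baseline for triangle range counting (function names are my own, not from the survey). It answers each query with an O(n) scan using orientation tests; the data structures surveyed in the paper exist precisely to beat this per-query cost.

```python
def sign(o, a, b):
    """Cross-product sign: positive if b lies to the left of the ray o->a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_triangle(p, tri):
    """True if point p lies inside (or on the boundary of) triangle tri."""
    d0 = sign(tri[0], tri[1], p)
    d1 = sign(tri[1], tri[2], p)
    d2 = sign(tri[2], tri[0], p)
    has_neg = d0 < 0 or d1 < 0 or d2 < 0
    has_pos = d0 > 0 or d1 > 0 or d2 > 0
    # p is inside iff the three orientation signs do not disagree.
    return not (has_neg and has_pos)

def triangle_count(P, tri):
    """Count points of P lying in query triangle tri by a linear scan."""
    return sum(in_triangle(p, tri) for p in P)

P = [(0.1, 0.1), (0.5, 0.2), (0.9, 0.9), (0.3, 0.6)]
R = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(triangle_count(P, R))  # number of points of P inside R
```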
Derandomization in Computational Geometry
1996
Cited by 18 (1 self)
We survey techniques for replacing randomized algorithms in computational geometry by deterministic ones with a similar asymptotic running time.

1 Randomized algorithms and derandomization

A rapid growth of knowledge about randomized algorithms stimulates research in derandomization, that is, replacing randomized algorithms by deterministic ones with as small a decrease in efficiency as possible. Related to the problem of derandomization is the question of reducing the number of random bits needed by a randomized algorithm while retaining its efficiency; derandomization can be viewed as the ultimate case. Randomized algorithms are also related to probabilistic proofs and constructions in combinatorics (which came first historically), whose development has similarly been accompanied by efforts to replace them with explicit, nonrandom constructions whenever possible. Derandomization of algorithms can be seen as part of an effort to map the power of randomness and explain its role. ...
Multiple-source shortest paths in embedded graphs
2012
Cited by 10 (6 self)
Let G be a directed graph with n vertices and nonnegative edge weights, embedded on a surface of genus g, and let f be an arbitrary face of G. We describe an algorithm to preprocess the graph in O(gn log n) time, so that the shortest-path distance from any vertex on the boundary of f to any other vertex in G can be retrieved in O(log n) time. Our result directly generalizes the O(n log n)-time algorithm of Klein [Multiple-source shortest paths in planar graphs. In Proc. 16th Ann. ACM-SIAM Symp. Discrete Algorithms, 2005] for multiple-source shortest paths in planar graphs. Intuitively, our preprocessing algorithm maintains a shortest-path tree as its source point moves continuously around the boundary of f. As an application of our algorithm, we describe algorithms to compute a shortest noncontractible or nonseparating cycle in embedded, undirected graphs in O(g² n log n) time.
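For intuition about what the preprocessing computes, here is the naive baseline it improves on: one Dijkstra run per source vertex on the boundary of f, tabulating all boundary-to-vertex distances. The graph encoding and names are illustrative assumptions; the paper's contribution is to replace this per-source cost with O(gn log n) preprocessing and O(log n) queries.

```python
import heapq

def dijkstra(adj, s):
    """Single-source shortest paths; adj maps u -> list of (v, weight >= 0)."""
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def multi_source_table(adj, boundary):
    """Brute-force MSSP: one Dijkstra per source on the boundary of face f."""
    return {s: dijkstra(adj, s) for s in boundary}

adj = {
    'a': [('b', 1.0), ('c', 4.0)],
    'b': [('c', 2.0), ('d', 5.0)],
    'c': [('d', 1.0)],
    'd': [],
}
table = multi_source_table(adj, ['a', 'b'])
print(table['a']['d'])  # 4.0 via a -> b -> c -> d
```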
Using Sparsification for Parametric Minimum Spanning Tree Problems
Nordic J. Computing, 1996
Cited by 8 (2 self)
Two applications of sparsification to parametric computing are given. The first is a fast algorithm for enumerating all distinct minimum spanning trees in a graph whose edge weights vary linearly with a parameter. The second is an asymptotically optimal algorithm for the minimum ratio spanning tree problem, as well as other search problems, on dense graphs.

1 Introduction

In the parametric minimum spanning tree problem, one is given an n-node, m-edge undirected graph G where each edge e has a linear weight function w_e(λ) = a_e + λ·b_e. Let Z(λ) denote the weight of the minimum spanning tree relative to the weights w_e(λ). It can be shown that Z(λ) is a piecewise-linear concave function of λ [Gus80]; the points at which the slope of Z changes are called breakpoints. We shall present two results regarding parametric minimum spanning trees. First, we show that Z(λ) can be constructed in O(min{nm log n, T_MST(2n, n) ...
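The parametric setting can be illustrated directly by evaluating Z(λ) pointwise with Kruskal's algorithm. This is only a sketch of the problem statement, not the paper's construction (which finds all breakpoints rather than sampling); the example graph and values are invented.

```python
def mst_weight(n, edges, lam):
    """Z(lam): MST weight under w_e(lam) = a + lam*b; edges are (u, v, a, b).
    Kruskal's algorithm with a union-find using path halving."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    total = 0.0
    for u, v, a, b in sorted(edges, key=lambda e: e[2] + lam * e[3]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += a + lam * b
    return total

# Triangle graph: which edges enter the MST depends on lam, so Z(lam) is
# piecewise linear, and concavity shows up as non-increasing slopes.
edges = [(0, 1, 1.0, 0.0), (1, 2, 3.0, -1.0), (0, 2, 2.0, 0.0)]
for lam in (0.0, 1.0, 2.0):
    print(lam, mst_weight(3, edges, lam))
```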
Parametric Problems on Graphs of Bounded Treewidth
1992
Cited by 7 (2 self)
We consider optimization problems on weighted graphs where vertex and edge weights are polynomial functions of a parameter. We show that, if a problem satisfies certain regularity properties and the underlying graph has bounded treewidth, the number of changes in the optimum solution is polynomially bounded. We also show that the description of the sequence of optimum solutions can be constructed in polynomial time and that certain parametric search problems can be solved in O(n log n) time, where n is the number of vertices in the graph.
Optimal dynamic remapping of parallel computations
IEEE Transactions on Computers, 1990
Cited by 6 (1 self)
A large class of computations is characterized by a sequence of phases, with phase changes occurring unpredictably. We consider the decision problem regarding the remapping of workload to processors in a parallel computation when (i) the utility of remapping and the future behavior of the workload are uncertain, and (ii) phases exhibit stable execution requirements during a given phase, but requirements may change radically between phases. For these problems a workload assignment generated for one phase may hinder performance during the next phase. This problem is treated formally for a probabilistic model of computation with at most two phases. We address the fundamental problem of balancing the expected remapping performance gain against the delay cost. Stochastic dynamic programming is used to show that the remapping decision policy minimizing the expected running time of the computation has an extremely simple structure: the optimal decision at any step is made by comparing the probability of remapping gain against a threshold. However, threshold calculation requires a priori estimation of the performance gain achieved by remapping. Because this gain may not be predictable, we examine the performance of a heuristic policy that does not require estimation of the gain. In most cases we find nearly optimal performance if remapping ...
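The threshold structure of such a policy can be caricatured in a few lines. All parameters below are invented for illustration; the paper derives its threshold from a stochastic dynamic program rather than the simple expected-value comparison sketched here.

```python
def should_remap(p_phase_change, gain, delay_cost):
    """Remap when the expected benefit p * gain exceeds the fixed delay cost,
    i.e. when the estimated probability of a phase change exceeds the
    threshold delay_cost / gain."""
    threshold = delay_cost / gain
    return p_phase_change > threshold

# With an (assumed) remapping gain of 10 time units and a delay cost of 2,
# the decision threshold on the phase-change probability is 0.2.
print(should_remap(0.5, 10.0, 2.0))  # True: phase change likely enough
print(should_remap(0.1, 10.0, 2.0))  # False: below threshold, keep mapping
```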
Algorithms for Joint Optimization of Stability and Diversity in Planning Combinatorial Libraries of Chimeric Proteins
Cited by 5 (3 self)
Abstract. In engineering protein variants by constructing and screening combinatorial libraries of chimeric proteins, two complementary and competing goals are desired: the new proteins must be similar enough to the evolutionarily selected wild-type proteins to be stably folded, and they must be different enough to display functional variation. We present here the first method, Staversity, to simultaneously optimize stability and diversity in selecting sets of breakpoint locations for site-directed recombination. Our goal is to uncover all “undominated” breakpoint sets, for which no other breakpoint set is better in both factors. Our first algorithm finds the undominated sets serving as the vertices of the lower envelope of the two-dimensional (stability and diversity) convex hull containing all possible breakpoint sets. Our second algorithm identifies additional breakpoint sets in the concavities that are either undominated or dominated only by undiscovered breakpoint sets within a distance bound computed by the algorithm. Both algorithms are efficient, requiring only ...
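The "undominated" condition is the standard Pareto condition in the (stability, diversity) plane, both scores to be maximized. A minimal sketch with invented candidate names and scores (this is the definition only, not the paper's envelope-based algorithms):

```python
def undominated(candidates):
    """Return the candidates whose (stability, diversity) score is not
    dominated: no other candidate is at least as good in both factors
    and different. Candidates are (name, (stability, diversity)) pairs."""
    def dominates(x, y):
        return x[0] >= y[0] and x[1] >= y[1] and x != y
    return [c for c in candidates
            if not any(dominates(d[1], c[1]) for d in candidates)]

candidates = [
    ("set_A", (0.9, 0.2)),   # most stable
    ("set_B", (0.6, 0.6)),   # balanced
    ("set_C", (0.2, 0.9)),   # most diverse
    ("set_D", (0.5, 0.5)),   # dominated by set_B in both factors
]
front = undominated(candidates)
print([name for name, _ in front])  # ['set_A', 'set_B', 'set_C']
```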
Algorithmic techniques for geometric optimization
In Computer Science Today: Recent Trends and Developments, Lecture Notes in Computer Science, 1995