Results 1–10 of 12
Maintenance of a Minimum Spanning Forest in a Dynamic Plane Graph
, 1992
Abstract

Cited by 68 (26 self)
We give an efficient algorithm for maintaining a minimum spanning forest of a plane graph subject to online modifications. The modifications supported include changes in the edge weights, and insertion and deletion of edges and vertices which are consistent with the given embedding. Our algorithm runs in O(log n) time per operation and O(n) space.
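To make the maintenance problem concrete, here is a brute-force sketch of the "cycle rule" that underlies dynamic MST algorithms: inserting an edge (u, v, w) into a spanning tree creates exactly one cycle, and evicting the heaviest edge on that cycle restores a minimum spanning tree. The helper names are hypothetical, and this O(n) path scan is only an illustration; the paper's O(log n)-per-operation bound relies on specialized plane-graph structures, not on this scan.

```python
def tree_path(adj, u, v):
    """Return the list of edges (a, b, w) on the unique tree path from u to v."""
    stack = [(u, None, [])]
    while stack:
        node, parent, path = stack.pop()
        if node == v:
            return path
        for nbr, w in adj[node]:
            if nbr != parent:
                stack.append((nbr, node, path + [(node, nbr, w)]))
    return []

def insert_edge(adj, u, v, w):
    """Insert edge (u, v, w) into the spanning tree; restore minimality by the cycle rule."""
    heaviest = max(tree_path(adj, u, v), key=lambda e: e[2])
    if w < heaviest[2]:
        a, b, hw = heaviest
        adj[a].remove((b, hw)); adj[b].remove((a, hw))  # evict heaviest cycle edge
        adj[u].append((v, w)); adj[v].append((u, w))    # adopt the new edge

# Path graph 0-1-2 with weights 5 and 1; adding edge (0, 2, 2) evicts edge (0, 1, 5).
adj = {0: [(1, 5)], 1: [(0, 5), (2, 1)], 2: [(1, 1)]}
insert_edge(adj, 0, 2, 2)
```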
Dynamic Graph Algorithms
, 1999
Abstract

Cited by 55 (0 self)
In many applications of graph algorithms, including communication networks, graphics, assembly planning, and VLSI design, graphs are subject to discrete changes, such as additions or deletions of edges or vertices. In the last decade there has been a growing interest in such dynamically changing graphs, and a whole body of algorithms and data structures for dynamic graphs has been discovered. This chapter is intended as an overview of this field. In a typical dynamic graph problem one would like to answer queries on graphs that are undergoing a sequence of updates, for instance, insertions and deletions of edges and vertices. The goal of a dynamic graph algorithm is to update efficiently the solution of a problem after dynamic changes, rather than having to recompute it from scratch each time. Given their powerful versatility, it is not surprising that dynamic algorithms and dynamic data structures are often more difficult to design and analyze than their static counterparts.
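To see what "update rather than recompute" buys, consider incremental connectivity with a union-find structure (a standard textbook data structure, not an algorithm from this survey): each edge insertion and each connectivity query takes near-constant amortized time, versus a full graph traversal per query if connectivity were recomputed from scratch.

```python
class UnionFind:
    """Incremental connectivity: supports edge insertions and connectivity queries."""

    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        """Called when edge (x, y) is inserted."""
        self.parent[self.find(x)] = self.find(y)

    def connected(self, x, y):
        """Query: are x and y in the same connected component?"""
        return self.find(x) == self.find(y)

uf = UnionFind(5)
uf.union(0, 1); uf.union(3, 4)
print(uf.connected(0, 1), uf.connected(1, 3))  # True False
```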
Parallel Real-Time Optimization: Beyond Speedup
 Parallel Processing Letters
, 1999
Abstract

Cited by 27 (25 self)
Traditionally, interest in parallel computation centered around the speedup provided by parallel algorithms over their sequential counterparts. In this paper, we ask a different type of question: Can parallel computers, due to their speed, do more than simply speed up the solution to a problem? We show that for real-time optimization problems, a parallel computer can obtain a solution that is better than that obtained by a sequential one. Specifically, a sequential and a parallel algorithm are exhibited for the problem of computing the best-possible approximation to the minimum-weight spanning tree of a connected, undirected and weighted graph whose vertices and edges are not all available at the outset, but instead arrive in real time. While the parallel algorithm succeeds in computing the exact minimum-weight spanning tree, the sequential algorithm can only manage to obtain an approximate solution. In the worst case, the ratio of the weight of the solution obtained sequentially ...
Offline Algorithms for Dynamic Minimum Spanning Tree Problems
, 1994
Abstract

Cited by 17 (9 self)
We describe an efficient algorithm for maintaining a minimum spanning tree (MST) in a graph subject to a sequence of edge weight modifications. The sequence of minimum spanning trees is computed offline, after the sequence of modifications is known. The algorithm performs O(log n) work per modification, where n is the number of vertices in the graph. We use our techniques to solve the offline geometric MST problem for a planar point set subject to insertions and deletions; our algorithm for this problem performs O(log^2 n) work per modification. No previous dynamic geometric MST algorithm was known.
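For contrast with the O(log n) bound above, here is a naive offline baseline: given the full modification sequence up front, simply recompute the MST with Kruskal's algorithm after each weight change, at a cost of O(m log m) per modification. The helper names are hypothetical; this sketch only makes the problem statement concrete and is not the paper's algorithm.

```python
def kruskal(n, edges):
    """edges: dict mapping (u, v) -> weight. Returns the MST weight."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    total = 0
    for (u, v), w in sorted(edges.items(), key=lambda e: e[1]):
        ru, rv = find(u), find(v)
        if ru != rv:          # edge joins two components: take it
            parent[ru] = rv
            total += w
    return total

def offline_mst(n, edges, modifications):
    """Yield the MST weight after each (edge, new_weight) modification."""
    for edge, new_w in modifications:
        edges[edge] = new_w
        yield kruskal(n, edges)   # full recomputation each time

# Triangle 0-1-2; raising the weight of edge (0, 1) forces it out of the MST.
edges = {(0, 1): 1, (1, 2): 2, (0, 2): 3}
print(list(offline_mst(3, edges, [((0, 1), 10)])))  # [5]
```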
Parallel Real-Time Numerical Computation: Beyond Speedup III
 International Journal of Computers and their Applications, Special Issue on High Performance Computing Systems
Abstract

Cited by 16 (15 self)
Parallel computers can do more than simply speed up sequential computations. They are capable of finding solutions that are far better in quality than those obtained by sequential computers. This fact is demonstrated by analyzing sequential and parallel solutions to numerical problems in a real-time paradigm. In this setting, numerical data required to solve a problem are received as input by a computer system at regular intervals. The computer must process its inputs as soon as they arrive. It must also produce its outputs at regular intervals, as soon as they are available. We show that for some real-time numerical problems a parallel computer can deliver a solution that is significantly more accurate than when computed by a sequential computer. Similar results were derived recently in the areas of real-time optimization and real-time cryptography. Key words and phrases: parallelism, real-time computation, numerical analysis. This research was supported by the Natural Sciences and Engineering Research Council of Canada.
Parallel Real-Time Computation: Sometimes Quantity Means Quality
 Computing and Informatics
, 2000
Abstract

Cited by 15 (14 self)
The primary purpose of parallel computation is the fast execution of computational tasks that are too slow to perform sequentially. As a consequence, interest in parallel computation to date has naturally focused on the speedup provided by parallel algorithms over their sequential counterparts. The thesis of this paper is that a second, equally important motivation for using parallel computers exists. Specifically, the following question is posed: Can parallel computers, thanks to their multiple processors, do more than simply speed up the solution to a problem? We show that within the paradigm of real-time computation, some classes of problems have the property that a solution to a problem in the class, when computed in parallel, is far superior in quality to the best one obtained on a sequential computer. What constitutes a better solution depends on the problem under consideration. Thus, 'better' means 'closer to optimal' for optimization problems, 'more secure' for cryptographic ...
Parallel Real-Time Cryptography: Beyond Speedup II
 Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications, Las Vegas
, 2000
Abstract

Cited by 9 (9 self)
The primary purpose of parallel computation is the fast execution of computational tasks that are too slow to perform sequentially. However, it was shown recently that a second, equally important motivation for using parallel computers exists: Within the paradigm of real-time computation, some classes of problems have the property that a solution to a problem in the class computed in parallel is better than the one obtained on a sequential computer. What constitutes a better solution depends on the problem under consideration. Thus, for optimization problems, 'better' means 'closer to optimal'. The present paper continues this line of inquiry by exploring another class enjoying the aforementioned property, namely, cryptographic problems in a real-time setting. In this class, 'better' means 'more secure'. A real-time cryptographic problem is presented for which the parallel solution is significantly better than a sequential one.
Nonlinearity, Maximization, and Parallel Real-Time Computation
 Proceedings of the 12th Conference on Parallel and Distributed Computing and Systems, Las Vegas
, 2000
Abstract

Cited by 5 (5 self)
This paper focuses on the improvement in the quality of computation provided by parallelism. The problem of interest is that of computing the maximum of a nonlinear feedback function in a real-time environment. We show that the solution obtained in parallel is asymptotically better than that computed sequentially. Key words and phrases: parallelism, real-time computation, nonlinear feedback function, maximization. This research was supported by the Natural Sciences and Engineering Research Council of Canada. The central motivation behind parallelism has always been the speeding up of sequential computations. Recently, another aspect of parallel computation was brought to light. It was shown that under some circumstances it is possible to obtain in parallel solutions to computational problems that are significantly better than any solutions computed sequentially. This phenomenon was demonstrated, in a real-time environment, for problems in combinatorial optimization ...
Optimal Algorithms to Find the Most Vital Edge of a Minimum Spanning Tree
, 1995
Abstract

Cited by 3 (0 self)
The problem of finding the most vital edge with respect to a minimum spanning tree of a given connected and weighted graph (with m edges and n vertices) is considered. New sequential and parallel algorithms (3 each) for the problem are proposed, and a lower bound of Ω(m) is established. We characterize the set of entering edges and show that the cardinality of this set is O(n). We show the connection between the most vital edge problem and the minimum spanning tree update problems and exploit this idea in developing one of the proposed sequential algorithms. Two of our sequential algorithms are optimal. One of our parallel algorithms is optimal if the underlying graph is dense, or planar. We also consider a related problem for weighted matroids. Keywords: data structures, design of algorithms, parallel algorithms, minimum spanning trees, most vital edge, complexity, matroids. Networks are ubiquitous in many scientific and technological applications. A few examples ...
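A brute-force sketch makes the most-vital-edge problem concrete (hypothetical helper names): delete each MST edge in turn, recompute the MST weight, and report the edge whose removal raises the weight most. The paper's optimal algorithms avoid this repeated recomputation; only MST edges need be tried, since removing a non-tree edge leaves the MST unchanged.

```python
def mst(edges, n, skip=None):
    """Kruskal's algorithm; returns (weight, tree_edges), weight None if disconnected."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    total, tree = 0, []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        if (u, v, w) == skip:
            continue
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
            tree.append((u, v, w))
    return (total, tree) if len(tree) == n - 1 else (None, tree)

def most_vital_edge(edges, n):
    """Edge whose deletion increases the MST weight the most (inf if it disconnects)."""
    base, tree = mst(edges, n)
    best_edge, best_rise = None, -1
    for e in tree:  # only MST edges can be most vital
        w, _ = mst(edges, n, skip=e)
        rise = float('inf') if w is None else w - base
        if rise > best_rise:
            best_edge, best_rise = e, rise
    return best_edge

edges = [(0, 1, 1), (1, 2, 2), (0, 2, 4), (2, 3, 3)]
print(most_vital_edge(edges, 4))  # (2, 3, 3): its removal disconnects the graph
```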
Lower and Upper Bounds for Incremental Algorithms
, 1992
Abstract

Cited by 3 (0 self)
An incremental algorithm (also called a dynamic update algorithm) updates the answer to some problem after an incremental change is made in the input. We examine methods for bounding the performance of such algorithms. First, quite general but relatively weak bounds are considered, along with a careful examination of the conditions under which they hold. Next, a more powerful proof method, the Incremental Relative Lower Bound, is presented, along with its application to a number of important problems. We then examine an alternative approach, delta-analysis, which had been proposed previously; we apply it to several new problems and show how it can be extended. For the specific problem of updating the transitive closure of an acyclic digraph, we present the first known incremental algorithm that is efficient in the delta-analysis sense. Finally, we criti...
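To illustrate the transitive-closure update problem mentioned above, here is a minimal set-based sketch (not the paper's efficient algorithm): on inserting edge (u, v) into an acyclic digraph, every vertex that reaches u now also reaches everything reachable from v. Delta-analysis-style algorithms aim to bound such work by the size of the change rather than the size of the whole graph.

```python
def insert_edge(reach, u, v):
    """reach[x] is the set of vertices reachable from x (including x itself).
    Update all reachability sets after inserting edge (u, v)."""
    if v in reach[u]:
        return  # edge already implied by the closure; nothing changes
    delta = reach[v] - reach[u]       # vertices newly reachable from u
    for x in reach:
        if u in reach[x]:             # x reaches u, so x now reaches delta too
            reach[x] |= delta

reach = {x: {x} for x in range(4)}    # empty digraph on 4 vertices
insert_edge(reach, 0, 1)
insert_edge(reach, 1, 2)
insert_edge(reach, 2, 3)
print(reach[0])  # {0, 1, 2, 3}
```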