Results 11 - 20 of 24
An Inverse-Ackermann Style Lower Bound for Online Minimum Spanning Tree Verification
- Combinatorica
Cited by 3 (2 self)
1 Introduction The minimum spanning tree (MST) problem has seen a flurry of activity lately, driven largely by the success of a new approach to the problem. The recent MST algorithms [20, 8, 29, 28], despite their superficial differences, are all based on the idea of progressively improving an approximately minimum solution, until the actual minimum spanning tree is found. It is still likely that this progressive improvement approach will bear fruit. However, the current
A Simpler Implementation and Analysis of Chazelle’s Soft Heaps
- In Proc. of the 19th ACM-SIAM Symposium on Discrete Algorithms
, 2009
Cited by 3 (0 self)
Chazelle (JACM 47(6), 2000) devised an approximate meldable priority queue data structure, called Soft Heaps, and used it to obtain the fastest known deterministic comparison-based algorithm for computing minimum spanning trees, as well as some new algorithms for selection and approximate sorting problems. If n elements are inserted into a collection of soft heaps, then up to εn of the elements still contained in these heaps, for a given error parameter ε, may be corrupted, i.e., have their keys artificially increased. In exchange for allowing these corruptions, each soft heap operation is performed in O(log(1/ε)) amortized time. Chazelle's soft heaps are derived from the binomial heap data structure, in which each priority queue is composed of a collection of binomial trees. We describe a simpler and more direct implementation of soft heaps in which each priority queue is composed of a collection of standard binary trees. Our implementation has the advantage that no clean-up operations similar to the ones used in Chazelle's implementation are required. We also present a concise and unified potential-based amortized analysis of the new implementation.
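To make the interface described in the abstract concrete (insert, meld, and extract-min, with at most εn corrupted keys), here is a minimal Python stand-in. It is not Chazelle's or Kaplan-Zwick's structure: it is backed by an exact binary heap, performs zero corruptions (trivially within the εn bound), and runs in O(log n) per operation rather than the O(log(1/ε)) amortized bound; all class and method names are illustrative.

```python
import heapq

class SoftHeapStandIn:
    """Exact-heap stand-in for the soft heap interface, with error
    parameter eps. A real soft heap deliberately corrupts up to
    eps * n keys to reach O(log(1/eps)) amortized time per operation;
    this toy corrupts nothing and pays O(log n) instead."""

    def __init__(self, eps):
        self.eps = eps
        self.heap = []       # exact keys; a real soft heap stores corrupted ones
        self.inserted = 0    # n, for checking the corruption budget eps * n

    def insert(self, key):
        heapq.heappush(self.heap, key)
        self.inserted += 1

    def meld(self, other):
        # Destructively merge another stand-in into this one.
        for key in other.heap:
            heapq.heappush(self.heap, key)
        self.inserted += other.inserted
        other.heap = []

    def extract_min(self):
        # In a real soft heap the returned key may be corrupted
        # (artificially increased); here it is always exact.
        return heapq.heappop(self.heap)

    def corrupted(self):
        return 0  # always within the allowed eps * self.inserted
```

A quick usage sketch: melding `{3, 1, 2}` with `{0}` and extracting twice yields 0, then 1.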
Improved Methods for Solving Traffic Flow Problems in Dynamic Networks
, 2002
Dynamic networks are pervasive, present in many transportation and non-transportation contexts. We present improved methods for solving two of the primary problems in dynamic networks: dynamic shortest paths and the Dynamic Network Loading Problem (DNLP). In each case we also propose a solution algorithm and an implementation of the algorithm. We first explore the one-to-all dynamic shortest path problem for discrete-time networks for all departure times. A new framework for the problem is proposed in which the problem is viewed as a series of static reoptimization problems. By posing the problem in this manner, we are able to reuse the information regarding the shortest path trees calculated for earlier departure times. The results of computational tests are provided, showing significant savings in computation time over traditional methods when the percentage of dynamic links is small. We next present a method for achieving an exact solution to a class of the continuous-time and space model formulation of the DNLP. The model of a link
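The reoptimization idea sketched above (reuse labels from an earlier departure time instead of recomputing from scratch) can be illustrated in Python. This is a generic sketch, not the paper's algorithm: it handles only the case where arc costs decrease between consecutive departure times, seeding a Dijkstra-style repair with the endpoints of the changed arcs, so that when few links are dynamic only a small part of the tree is touched. Function names and the graph encoding are assumptions.

```python
import heapq

def dijkstra(adj, src):
    """One-to-all shortest paths; adj[u] is a list of (v, weight)."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def reoptimize_decrease(adj, dist, changed):
    """Repair labels after some arc costs DECREASED.

    adj must already reflect the new weights; changed is a list of
    (u, v, new_weight). Only the heads of changed arcs seed the queue,
    so the repair explores only the region the changes can improve.
    Cost increases invalidate labels and need a different repair,
    which this sketch does not attempt."""
    dist = dict(dist)
    pq = []
    for u, v, w in changed:
        nd = dist.get(u, float("inf")) + w
        if nd < dist.get(v, float("inf")):
            dist[v] = nd
            heapq.heappush(pq, (nd, v))
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

Running `dijkstra` once for the first departure time and `reoptimize_decrease` for each later one mirrors the "series of static reoptimization problems" framing, at least for the decrease-only case.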
List Heuristic Scheduling Algorithms for Distributed Memory Systems with Improved Time Complexity
Abstract. We present a compile-time list heuristic scheduling algorithm called the Low Cost Critical Path algorithm (LCCP) for distributed memory systems. LCCP has low scheduling cost for both homogeneous and heterogeneous systems. In some recent papers, list heuristic scheduling algorithms keep their scheduling cost low by using a fixed-size heap and a FIFO, where the heap always holds a fixed number of tasks and the excess tasks are inserted into the FIFO. When the heap has empty slots, tasks are moved into it from the FIFO. The best known list scheduling algorithm based on this strategy requires two heap restoration operations, one after extraction and another after insertion. Our LCCP algorithm improves on this by using only one such operation for both the extraction and the insertion, which in theory reduces the scheduling cost without compromising scheduling performance. In our experiments we compare LCCP with other well-known list scheduling algorithms, and the results show that LCCP is the fastest among them.
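The heap-plus-FIFO strategy in the abstract, and the single-restoration improvement, map neatly onto Python's `heapq`: `heapreplace` pops the minimum and inserts a new item with one sift, where a `heappop` followed by a `heappush` would sift twice. This is a generic sketch of the strategy, not LCCP itself; the class and method names are illustrative.

```python
import heapq
from collections import deque

class BoundedReadyQueue:
    """Fixed-size heap with FIFO overflow, as in the list-scheduling
    strategy described above. next_task() fuses the extraction and the
    refill insertion into one heapq.heapreplace call, i.e. a single
    heap-restoration (sift) instead of a pop-sift plus a push-sift."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []       # holds (priority, task); min-priority first
        self.fifo = deque()  # overflow tasks, in arrival order

    def add(self, priority, task):
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, (priority, task))
        else:
            self.fifo.append((priority, task))

    def next_task(self):
        if self.fifo:
            # One sift: replace the root with the next overflow task.
            # Note heapreplace returns the old minimum even if the
            # incoming FIFO item has a smaller priority, which matches
            # the FIFO-overflow semantics of this strategy.
            return heapq.heapreplace(self.heap, self.fifo.popleft())
        return heapq.heappop(self.heap) if self.heap else None
```

With capacity 2 and tasks of priorities 3, 1, 2 added in that order, the third task waits in the FIFO and extraction still returns tasks in priority order: 1, 2, 3.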