Results 1–10 of 45
A general approximation technique for constrained forest problems
SIAM J. Comput., 1995
"... We present a general approximation technique for a large class of graph problems. Our technique mostly applies to problems of covering, at minimum cost, the vertices of a graph with trees, cycles, or paths satisfying certain requirements. In particular, many basic combinatorial optimization proble ..."
Abstract

Cited by 423 (21 self)
We present a general approximation technique for a large class of graph problems. Our technique mostly applies to problems of covering, at minimum cost, the vertices of a graph with trees, cycles, or paths satisfying certain requirements. In particular, many basic combinatorial optimization problems fit in this framework, including the shortest path, minimum-cost spanning tree, minimum-weight perfect matching, traveling salesman, and Steiner tree problems. Our technique produces approximation algorithms that run in O(n log n) time and come within a factor of 2 of optimal for most of these problems. For instance, we obtain a 2-approximation algorithm for the minimum-weight perfect matching problem under the triangle inequality. Our running time of O(n log n) compares favorably with the best strongly polynomial exact algorithms, which run in O(n^3) time for dense graphs. A similar result is obtained for the 2-matching problem and its variants. We also derive the first approximation algorithms for many NP-complete problems, including the non-fixed point-to-point connection problem, the exact path partitioning problem, and complex location-design problems. Moreover, for the prize-collecting traveling salesman and Steiner tree problems, we obtain 2-approximation algorithms, thereby improving the previously best-known performance guarantees of 2.5 and 3, respectively [Math. Programming, 59 (1993), pp. 413–420].
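The factor-2 guarantee for the traveling salesman problem under the triangle inequality can be illustrated with the classical MST-doubling argument, a simpler relative of the paper's primal-dual technique (this sketch is not the paper's method): a preorder walk of a minimum spanning tree, with repeated vertices shortcut, costs at most twice the optimal tour.

```python
import math

def mst_double_tour(points):
    """2-approximate metric TSP tour via MST doubling: build an MST with
    Prim's algorithm, then output a preorder walk of it. Shortcutting the
    walk keeps tour length <= 2 * MST weight <= 2 * OPT."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    # Prim's algorithm on the complete Euclidean graph, O(n^2).
    in_tree = [False] * n
    parent = [0] * n
    best = [math.inf] * n
    best[0] = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        for v in range(n):
            if not in_tree[v] and dist(u, v) < best[v]:
                best[v], parent[v] = dist(u, v), u
    children = [[] for _ in range(n)]
    for v in range(1, n):
        children[parent[v]].append(v)
    # Preorder walk of the MST; each vertex appears exactly once.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour
```

For the four corners of a unit square the walk already recovers the optimal tour of length 4, and in general the closed walk is at most twice the MST weight.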
A Data Structure for Dynamic Trees
1983
"... A data structure is proposed to maintain a collection of vertexdisjoint trees under a sequence of two kinds of operations: a link operation that combines two trees into one by adding an edge, and a cut operation that divides one tree into two by deleting an edge. Each operation requires O(log n) ti ..."
Abstract

Cited by 353 (21 self)
A data structure is proposed to maintain a collection of vertex-disjoint trees under a sequence of two kinds of operations: a link operation that combines two trees into one by adding an edge, and a cut operation that divides one tree into two by deleting an edge. Each operation requires O(log n) time. Using this data structure, new fast algorithms are obtained for the following problems: (1) computing nearest common ancestors; (2) solving various network flow problems, including finding maximum flows, blocking flows, and acyclic flows; (3) computing certain kinds of constrained minimum spanning trees; (4) implementing the network simplex algorithm for minimum-cost flows. The most significant application is (2); an O(mn log n)-time algorithm is obtained to find a maximum flow in a network of n vertices and m edges, beating by a factor of log n the fastest algorithm previously known for sparse graphs.
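As a point of reference, the link/cut interface can be sketched naively with plain parent pointers. This hypothetical `NaiveForest` supports the same operations but in O(depth) time each, where the paper's data structure achieves O(log n):

```python
class NaiveForest:
    """Pointer-based stand-in for the dynamic-trees interface: link, cut,
    and root queries on a forest, but O(depth) per operation rather than
    the O(log n) of the paper's data structure."""

    def __init__(self, n):
        self.parent = [None] * n  # parent[v] is None iff v is a tree root

    def root(self, v):
        # Follow parent pointers to the root of v's tree.
        while self.parent[v] is not None:
            v = self.parent[v]
        return v

    def link(self, v, w):
        # Combine two trees by making root v a child of w.
        assert self.parent[v] is None and self.root(w) != v
        self.parent[v] = w

    def cut(self, v):
        # Divide one tree into two by deleting the edge (v, parent[v]).
        assert self.parent[v] is not None
        self.parent[v] = None
```

The point of the paper is precisely that a cleverer representation makes all three operations logarithmic.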
Approximate string matching
ACM Computing Surveys, 1980
"... Approximate matching of strings is reviewed with the aim of surveying techniques suitable for finding an item in a database when there may be a spelling mistake or other error in the keyword. The methods found are classified as either equivalence or similarity problems. Equivalence problems are seen ..."
Abstract

Cited by 158 (0 self)
Approximate matching of strings is reviewed with the aim of surveying techniques suitable for finding an item in a database when there may be a spelling mistake or other error in the keyword. The methods found are classified as either equivalence or similarity problems. Equivalence problems are seen to be readily solved using canonical forms. For similarity problems, difference measures are surveyed, with a full description of the well-established dynamic programming method, relating this to the approach using probabilities and likelihoods. Searching for approximate matches in large sets using a difference function is seen to be still an open problem, though several promising ideas have been suggested. Approximate matching (error correction) during parsing is briefly reviewed.
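The well-established dynamic programming method for a difference measure (edit distance) can be sketched in a few lines, using a single rolling row of the DP table:

```python
def edit_distance(a, b):
    """Levenshtein distance by dynamic programming, O(len(a) * len(b)) time
    and O(len(b)) space: the classical difference measure for similarity
    problems."""
    m, n = len(a), len(b)
    d = list(range(n + 1))              # DP row: distance from a[:0] to b[:j]
    for i in range(1, m + 1):
        prev, d[0] = d[0], i            # prev holds the diagonal entry
        for j in range(1, n + 1):
            prev, d[j] = d[j], min(
                d[j] + 1,                        # delete a[i-1]
                d[j - 1] + 1,                    # insert b[j-1]
                prev + (a[i - 1] != b[j - 1]),   # substitute or match
            )
    return d[n]
```

For example, `edit_distance("kitten", "sitting")` is 3 (one substitution at each end plus one insertion).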
Nonlinearity of Davenport-Schinzel sequences and of generalized path compression schemes
Combinatorica, 1986
"... DavenportSchinzel sequences are sequences that do not contain forbidden subsequences of alternating symbols. They arise in the computation of the envelope of a set of functions. We show that the maximal length of a DavenportSchinzel sequence composed of n symbols is 6(noc(n»), where t1.(n)is the f ..."
Abstract

Cited by 116 (17 self)
Davenport-Schinzel sequences are sequences that do not contain forbidden subsequences of alternating symbols. They arise in the computation of the envelope of a set of functions. We show that the maximal length of a Davenport-Schinzel sequence composed of n symbols is Θ(nα(n)), where α(n) is the functional inverse of Ackermann's function and is thus very slowly increasing to infinity. This is achieved by establishing an equivalence between such sequences and generalized path compression schemes on rooted trees, and then by analyzing these schemes.
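The best-known instance of path compression, whose inverse-Ackermann behavior the paper's generalized schemes extend, is the union-find data structure; a standard sketch with path compression and union by rank:

```python
def make_dsu(n):
    """Union-find with path compression and union by rank: the classic
    path compression scheme, with amortized cost governed by the inverse
    Ackermann function."""
    parent = list(range(n))
    rank = [0] * n

    def find(x):
        root = x
        while parent[root] != root:
            root = parent[root]
        while parent[x] != root:       # path compression pass
            parent[x], x = root, parent[x]
        return root

    def union(x, y):
        rx, ry = find(x), find(y)
        if rx == ry:
            return
        if rank[rx] < rank[ry]:        # union by rank: attach shorter tree
            rx, ry = ry, rx
        parent[ry] = rx
        if rank[rx] == rank[ry]:
            rank[rx] += 1

    return find, union
```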
A THEORY OF ALTERNATING PATHS AND BLOSSOMS FOR PROVING CORRECTNESS OF THE O(√VE) GENERAL GRAPH MAXIMUM MATCHING ALGORITHM
1994
"... ..."
Improved shortest paths on the word RAM
In: 27th Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Comput. Sci., 2000
"... Thorup recently showed that singlesource shortestpaths problems in undirected networks with n vertices, m edges, and edge weights drawn from {0,...,2 w − 1} can be solved in O(n + m) time and space on a unitcost randomaccess machine with a word length of w bits. His algorithm works by traversin ..."
Abstract

Cited by 29 (0 self)
Thorup recently showed that single-source shortest-paths problems in undirected networks with n vertices, m edges, and edge weights drawn from {0, ..., 2^w − 1} can be solved in O(n + m) time and space on a unit-cost random-access machine with a word length of w bits. His algorithm works by traversing a so-called component tree. Two new related results are provided here. First, and most importantly, Thorup's approach is generalized from undirected to directed networks. The resulting time bound, O(n + m log w), is the best deterministic linear-space bound known for sparse networks unless w is superpolynomial in log n. As an application, all-pairs shortest-paths problems in directed networks with n vertices, m edges, and edge weights in {−2^w, ..., 2^w} can be solved in O(nm + n^2 log log n) time and O(n + m) space (not counting the output space). Second, it is shown that the component tree for an undirected network can be constructed in deterministic linear time and space with a simple algorithm, to be contrasted with a complicated and impractical solution suggested by Thorup. Another contribution of the present paper is a greatly simplified view of the principles underlying algorithms based on component trees.
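For scale, the comparison-based algorithm these word-RAM bounds are measured against is ordinary Dijkstra with a binary heap; a minimal sketch of that baseline (not Thorup's component-tree traversal):

```python
import heapq

def dijkstra(n, adj, s):
    """Comparison-based baseline, O((n + m) log n): Dijkstra's algorithm
    with a binary heap and lazy deletion. adj[u] is a list of (v, weight)
    pairs with non-negative weights."""
    dist = [float("inf")] * n
    dist[s] = 0
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                  # stale entry; u was already settled
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist
```

The word-RAM algorithms avoid the heap's log-factor by bucketing on the bit representation of the weights rather than comparing them.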
Fast Congruence Closure and Extensions
2006
"... Congruence closure algorithms for deduction in ground equational theories are ubiquitous in many (semi)decision procedures used for verification and automated deduction. In many of these applications one needs an incremental algorithm that is moreover capable of recovering, among the thousands of i ..."
Abstract

Cited by 26 (1 self)
Congruence closure algorithms for deduction in ground equational theories are ubiquitous in many (semi)decision procedures used for verification and automated deduction. In many of these applications one needs an incremental algorithm that is moreover capable of recovering, among the thousands of input equations, the small subset that explains the equivalence of a given pair of terms. In this paper we present an algorithm satisfying all these requirements. First, building on ideas from abstract congruence closure algorithms [Kapur (1997, RTA); Bachmair & Tiwari (2000, CADE)], we present a very simple and clean incremental congruence closure algorithm and show that it runs in the best known time, O(n log n). After that, we introduce a proof-producing union-find data structure that is then used for extending our congruence closure algorithm, without increasing the overall O(n log n) time, in order to produce a k-step explanation for a given equation in almost optimal time (quasi-linear in k). Finally, we show that the previous algorithms can be smoothly extended, while still obtaining the same asymptotic time bounds, in order to support the interpreted function symbols successor and predecessor, which have been shown to be very useful in applications such as microprocessor verification.
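The idea of a proof-producing union-find can be sketched as a "proof forest" that records, for every union, the input equation that caused it; `explain` then reads the tags off the path between the two elements. This is a simplified, hypothetical version taking O(path length) per explanation, not the paper's almost-optimal structure:

```python
class ExplainingUnionFind:
    """Union-find that, besides merging classes, can report which input
    equations (tags) merged two given elements. Simplified sketch: the
    proof forest stores one tagged edge per union."""

    def __init__(self, n):
        self.rep = list(range(n))   # ordinary union-find parent array
        self.proof = [None] * n     # proof-forest edge: (parent, tag) or None

    def find(self, x):
        while self.rep[x] != x:
            self.rep[x] = self.rep[self.rep[x]]  # path halving
            x = self.rep[x]
        return x

    def _reroot(self, x):
        # Reverse the proof-forest path from x to its root, making x the root.
        edge = self.proof[x]
        self.proof[x] = None
        while edge is not None:
            parent, tag = edge
            edge, self.proof[parent] = self.proof[parent], (x, tag)
            x = parent

    def union(self, a, b, tag):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        self._reroot(a)             # a becomes a proof-forest root
        self.proof[a] = (b, tag)    # remember which equation merged them
        self.rep[ra] = rb

    def explain(self, a, b):
        """Return the tags of the unions that connect a and b."""
        assert self.find(a) == self.find(b)
        self._reroot(a)             # now the proof path from b ends at a
        tags = []
        while b != a:
            parent, tag = self.proof[b]
            tags.append(tag)
            b = parent
        return tags
```

After `union(0, 1, "e1")`, `union(2, 3, "e2")`, and `union(1, 2, "e3")`, the call `explain(0, 3)` returns exactly the three tags, while `explain(2, 3)` returns only `"e2"`.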
The Travelling Salesman Problem and Minimum Matching in the Unit Square
1983
"... We show that the cost (length) Of the shortest traveling salesman tour through n points in the unit square is, in the worst case, aopt v/n + o (x/n), where 1.075 atsPopt < = 1.414. The cost of the minimum matching of n points in the unit square is shown to be, in the worst case, a opt 4 + O(4), ..."
Abstract

Cited by 16 (0 self)
We show that the cost (length) of the shortest traveling salesman tour through n points in the unit square is, in the worst case, α_TSP √n + o(√n), where 1.075 ≤ α_TSP ≤ 1.414. The cost of the minimum matching of n points in the unit square is shown to be, in the worst case, α_mat √n + o(√n), where 0.537 ≤ α_mat ≤ 0.707. Furthermore, for each of these two problems there is an almost-linear-time heuristic algorithm whose worst-case cost is, neglecting lower-order terms, as low as possible.
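The flavor of such almost-linear-time heuristics can be illustrated with the classical strip (boustrophedon) tour, which achieves the O(√n) growth rate, though not the optimal constant; this is a generic sketch, not the paper's own construction:

```python
import math

def strip_tour(points):
    """Strip heuristic for points in the unit square: split [0,1) into
    about sqrt(n) vertical strips, sort each strip by y in alternating
    direction, and concatenate. Tour length is O(sqrt(n)) in the worst
    case; runtime is O(n log n)."""
    n = len(points)
    k = max(1, math.isqrt(n))                    # number of vertical strips
    strips = [[] for _ in range(k)]
    for p in points:
        strips[min(k - 1, int(p[0] * k))].append(p)
    tour = []
    for i, strip in enumerate(strips):
        # Alternate sweep direction so strip ends meet at the same height.
        strip.sort(key=lambda p: p[1], reverse=(i % 2 == 1))
        tour.extend(strip)
    return tour
```

Within a strip every step moves at most 1/k horizontally, and each strip is swept monotonically in y, so the total path length is roughly n/k + k ≈ 2√n.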
Efficient Algorithms for the Domination Problems on Interval and Circular-Arc Graphs
SIAM J. Comput., 1998
"... Abstract. This paper first presents a unified approach to design efficient algorithms for the weighted domination problem and its three variants, i.e., the weighted independent, connected, and total domination problems, on interval graphs. Given an interval model with endpoints sorted, these algorit ..."
Abstract

Cited by 14 (1 self)
This paper first presents a unified approach to designing efficient algorithms for the weighted domination problem and its three variants, i.e., the weighted independent, connected, and total domination problems, on interval graphs. Given an interval model with endpoints sorted, these algorithms run in O(n) or O(n log log n) time, where n is the number of vertices. The results are then extended to solve the same problems on circular-arc graphs in O(n + m) time, where m is the number of edges of the input graph.
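For intuition about why interval structure admits such fast algorithms, here is the classical greedy for the simpler unweighted minimum dominating set on an interval model: repeatedly take the undominated interval with the smallest right endpoint and dominate it by its farthest-reaching neighbor. This is an O(n^2) illustration of the structure, not the paper's unified approach:

```python
def min_dominating_set(intervals):
    """Greedy minimum dominating set on an interval graph (unweighted).
    intervals is a list of closed intervals (l, r); returns indices of a
    minimum set of intervals such that every interval intersects one."""
    n = len(intervals)
    order = sorted(range(n), key=lambda i: intervals[i][1])
    dominated = [False] * n
    chosen = []
    for i in order:                      # smallest right endpoint first
        if dominated[i]:
            continue
        li, ri = intervals[i]
        # Among intervals intersecting i (including i itself), pick the one
        # reaching farthest right: it dominates every interval i's best
        # dominator could, since all undominated intervals end at >= ri.
        best = max((j for j in range(n)
                    if intervals[j][0] <= ri and intervals[j][1] >= li),
                   key=lambda j: intervals[j][1])
        chosen.append(best)
        lb, rb = intervals[best]
        for j in range(n):
            if intervals[j][0] <= rb and intervals[j][1] >= lb:
                dominated[j] = True
    return chosen
```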
Procedure Placement Using Temporal-Ordering Information: Dealing with Code Size Expansion
Journal of Embedded Computing
"... In a directmapped instruction cache, all instructions that have the same memory address modulo the cache size, share a common and unique cache slot. Instruction cache conflicts can be partially handled at linked time by procedure placement. Pettis and Hansen give in [1] an algorithm that reorders p ..."
Abstract

Cited by 12 (0 self)
In a direct-mapped instruction cache, all instructions that have the same memory address modulo the cache size share a common and unique cache slot. Instruction cache conflicts can be partially handled at link time by procedure placement. Pettis and Hansen give in [1] an algorithm that reorders procedures in memory by aggregating them in a greedy fashion. The Gloy and Smith algorithm [2] greatly decreases the number of conflict misses but increases the code size by allowing gaps between procedures. The latter contains two main stages: the cache-placement phase assigns modulo addresses to minimize cache conflicts; the memory-placement phase assigns final memory addresses under the modulo placement constraints and minimizes the code size expansion. In this paper: (1) we prove the NP-completeness of the cache-placement problem; (2) we provide an optimal algorithm for the memory-placement problem with complexity O(n min(n, L) α(n)) (n is the number of procedures, L the cache size, and α the inverse Ackermann function, which is lower than 4 in practice); (3) we take final program size into consideration during the cache-placement phase. Our modifications to the Gloy and Smith algorithm give on average a code size expansion of 8% over the original program size, while the initial algorithm gave an expansion of 177%. The cache miss reduction, at 35%, is nearly the same as for the Gloy and Smith solution.
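The modulo-conflict notion driving the cache-placement phase can be made concrete with a small counter (hypothetical helper names, not the paper's code): given each procedure's assigned starting line and size, count the cache lines claimed by more than one procedure in a direct-mapped cache.

```python
def conflict_lines(placement, sizes, cache_lines):
    """Count cache lines occupied by more than one procedure in a
    direct-mapped cache: a line is an address modulo the cache size.
    placement[p] is procedure p's starting line; sizes[p] its length
    in lines. (Illustrative metric, not the Gloy-Smith objective.)"""
    occupants = {}
    for p, start in placement.items():
        for off in range(sizes[p]):
            line = (start + off) % cache_lines
            occupants.setdefault(line, set()).add(p)
    return sum(1 for procs in occupants.values() if len(procs) > 1)
```

With a 4-line cache, placing a 3-line procedure at line 0 and a 2-line procedure at line 2 overlaps on exactly one line; the cache-placement phase searches for modulo addresses minimizing such overlap weighted by temporal-ordering information.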