Results 1–10 of 39
Dissemination Of Information In Interconnection Networks (Broadcasting & Gossiping)
, 1996
Abstract

Cited by 99 (7 self)
this article follows the aims stated above. The first section introduces this research area. The basic definitions are given and the fundamental, simple observations concerning the relations among the complexity measures defined are carefully explained. This section is
Experimental Results from an Automatic Test Case Generator
 ACM Transactions on Software Engineering and Methodology
, 1993
Abstract

Cited by 49 (7 self)
Constraint-based testing is a novel way of generating test data to detect specific types of common programming faults. The conditions under which faults will be detected are encoded as mathematical systems of constraints in terms of program symbols. A set of tools, collectively called Godzilla, has been implemented that automatically generates constraint systems and solves them to create test cases for use by the Mothra testing system. Experimental results from using Godzilla show that the technique can produce test data that is very close in terms of mutation-adequacy to test data that is produced manually, and at substantially reduced cost. Additionally, these experiments have suggested a new procedure for unit testing, where test cases are viewed as throwaway items rather than scarce resources.
1 INTRODUCTION
This paper describes experimental results that are based on a new technique for generating test data. This technique, called constraint-based testing (CBT), uses the source code to automatically generate test data that attempts to satisfy the mutation-adequacy criteria. Elsewhere, we describe the technique [9, 12], and the details and algorithms of the implementation [29]; here we describe a set of experiments that measure CBT.
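The core idea of the abstract can be illustrated with a toy sketch (hypothetical code, not the Godzilla implementation): to kill a mutant, test data must satisfy a constraint under which the original and mutated expressions differ, and a solver searches for a satisfying assignment.

```python
# Illustrative sketch of constraint-based test generation (an assumed
# example, not the Godzilla tool): to detect a mutant, the test input
# must make the original and mutated predicates disagree. We encode
# that as a constraint and search a small integer domain for a
# satisfying assignment.

def original(x):
    return x > 0          # predicate under test

def mutant(x):
    return x >= 0         # relational-operator mutation of the predicate

def find_killing_input(p, m, domain):
    """Return an input on which p and m disagree, i.e. a test case that
    detects (kills) the mutant, or None if the constraint has no
    solution in the searched domain."""
    for x in domain:
        if p(x) != m(x):  # the constraint: outputs must differ
            return x
    return None

test_input = find_killing_input(original, mutant, range(-10, 11))
print(test_input)  # 0 is the only input where x > 0 and x >= 0 disagree
```

A real constraint-based generator reasons symbolically over systems of such constraints rather than enumerating a domain, but the fault-detection condition is the same.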
Static caching for incremental computation
 ACM Trans. Program. Lang. Syst
, 1998
Abstract

Cited by 47 (19 self)
A systematic approach is given for deriving incremental programs that exploit caching. The cache-and-prune method presented in the article consists of three stages: (I) the original program is extended to cache the results of all its intermediate subcomputations as well as the final result, (II) the extended program is incrementalized so that computation on a new input can use all intermediate results on an old input, and (III) unused results cached by the extended program and maintained by the incremental program are pruned away, leaving a pruned extended program that caches only useful intermediate results and a pruned incremental program that uses and maintains only the useful results. All three stages utilize static analyses and semantics-preserving transformations. Stages I and III are simple, clean, and fully automatable. The overall method has a kind of optimality with respect to the techniques used in Stage II. The method can be applied straightforwardly to provide a systematic approach to program improvement via caching.
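The three stages can be sketched on a tiny example (an assumed illustration, not the paper's transformation system): maintaining the running average of a list under appends.

```python
# A minimal sketch of the cache-and-prune idea for the average of a
# list (hypothetical example; the paper derives such programs by
# static transformation).

# Stage I: extend the program to return intermediate results along with
# the final result. Here the cached intermediates are the sum and length.
def average_ext(xs):
    s = sum(xs)
    n = len(xs)
    return (s / n, s, n)        # (result, cached sum, cached length)

# Stage II: incrementalize. Given the cached results for the old input,
# compute the result for xs + [y] without re-traversing xs.
def average_inc(cache, y):
    _, s, n = cache
    s, n = s + y, n + 1
    return (s / n, s, n)

# Stage III (pruning) would discard any cached value the incremental
# version never reads; here both cached values are used, so nothing
# is pruned.

cache = average_ext([1, 2, 3])
print(cache[0])                  # 2.0
cache = average_inc(cache, 6)
print(cache[0])                  # 3.0, computed in O(1) from the cache
```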
A Simple Approximation Algorithm for the Weighted Matching Problem
 Information Processing Letters
, 2003
Abstract

Cited by 28 (3 self)
We present a linear time approximation algorithm with a performance ratio of 1/2 for finding a maximum weight matching in an arbitrary graph. Such a result is already known and is due to Preis [7].
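For intuition, a sketch of the classical greedy algorithm, which also achieves the 1/2 performance ratio (it needs an O(m log m) sort, so it is not the linear-time algorithm of the paper; this is an illustration, not the authors' method):

```python
# Greedy 1/2-approximation for maximum weight matching (illustrative
# sketch): scan edges in decreasing weight order and take an edge
# whenever both endpoints are still free.

def greedy_matching(edges):
    """edges: list of (weight, u, v) tuples. Returns a matching whose
    total weight is at least half that of a maximum weight matching."""
    matched = set()
    matching = []
    for w, u, v in sorted(edges, reverse=True):
        if u not in matched and v not in matched:
            matching.append((w, u, v))
            matched.update((u, v))
    return matching

# On the path a-b-c-d with weights 2, 3, 2, greedy takes only (3, b, c)
# for total weight 3, while the optimum takes (2, a, b) and (2, c, d)
# for weight 4 -- illustrating why the ratio is 1/2 and not better.
m = greedy_matching([(2, 'a', 'b'), (3, 'b', 'c'), (2, 'c', 'd')])
print(sum(w for w, _, _ in m))  # 3
```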
Dynamic programming via static incrementalization
 In Proceedings of the 8th European Symposium on Programming
, 1999
Abstract

Cited by 26 (12 self)
Dynamic programming is an important algorithm design technique. It is used for solving problems whose solutions involve recursively solving subproblems that share sub-subproblems. While a straightforward recursive program solves common sub-subproblems repeatedly and often takes exponential time, a dynamic programming algorithm solves every sub-subproblem just once, saves the result, reuses it when the sub-subproblem is encountered again, and takes polynomial time. This paper describes a systematic method for transforming programs written as straightforward recursions into programs that use dynamic programming. The method extends the original program to cache all possibly computed values, incrementalizes the extended program with respect to an input increment to use and maintain all cached results, prunes out cached results that are not used in the incremental computation, and uses the resulting incremental program to form an optimized new program. Incrementalization statically exploits semantics of both control structures and data structures and maintains as invariants equalities characterizing cached results. The principle underlying incrementalization is general for achieving drastic program speedups. Compared with previous methods that perform memoization or tabulation, the method based on incrementalization is more powerful and systematic. It has been implemented and applied to numerous problems and succeeded on all of them.
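The speedup the abstract describes can be seen on a standard example (a hedged illustration using runtime memoization, not the paper's static transformation): the longest-common-subsequence recursion shares sub-subproblems, so the naive version is exponential while caching every (i, j) result makes it polynomial.

```python
# Naive LCS recursion takes exponential time because subproblems
# (i, j) recur; caching each result once makes it O(len(a) * len(b)).
# This uses runtime memoization to illustrate the effect that the
# paper's incrementalization method achieves by static transformation.

from functools import lru_cache

def lcs_length(a, b):
    @lru_cache(maxsize=None)          # cache every computed (i, j) result
    def rec(i, j):
        if i == len(a) or j == len(b):
            return 0
        if a[i] == b[j]:
            return 1 + rec(i + 1, j + 1)
        return max(rec(i + 1, j), rec(i, j + 1))
    return rec(0, 0)

print(lcs_length("ABCBDAB", "BDCABA"))  # 4 ("BCBA" is one such subsequence)
```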
A Linear Time Approximation Algorithm for Weighted Matchings in Graphs
, 2003
Abstract

Cited by 17 (3 self)
Approximation algorithms have so far mainly been studied for problems that are not known to have polynomial time algorithms for solving them exactly. Here we propose an approximation algorithm for the weighted matching problem in graphs, which can be solved in polynomial time. The weighted matching problem is to find a matching in an edge weighted graph that has maximum weight. The first polynomial time algorithm for this problem was given by Edmonds in 1965. The fastest known algorithm for the weighted matching problem has a running time of O(nm + n² log n). Many real world problems require graphs of such large size that this running time is too costly. Therefore there is considerable need for faster approximation algorithms for the weighted matching problem. We present a linear time approximation algorithm for the weighted matching problem with a performance ratio arbitrarily close to 2/3.
Elimination Of Infrequent Variables Improves Average Case Performance Of Satisfiability Algorithms
 SIAM J. Comput
, 1991
Abstract

Cited by 16 (5 self)
We consider preprocessing a random instance I of CNF Satisfiability in order to remove infrequent variables (those which appear once or twice in an instance) from I. The model used to generate random instances is the popular random clause-size model with parameters n, the number of clauses, r, the number of Boolean variables from which clauses are composed, and p, the probability that a variable appears in a clause as a positive (or negative) literal. It is shown that exhaustive search over such preprocessed instances runs in polynomial average time over a significantly larger parameter space than has been shown for any other algorithm under the random clause-size model when n = r^ε, ε < 1, and pr < √(εr ln(r)). Specifically, the results are that random instances of Satisfiability are "easy" in the average case if n = r^ε, 2/3 > ε > 0, and pr < (ln(n)/4)^(1/3) r^(2/3−ε); or n = r^ε, 1 > ε ≥ 2/3, and pr < (1 − ε − δ) ln(n)/ε for any δ > 0...
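The preprocessing step can be sketched in simplified form (an assumed illustration restricted to variables that occur exactly once; the paper also handles variables occurring twice): such a variable can be assigned to satisfy its clause, so the clause is removed before exhaustive search.

```python
# Illustrative sketch of infrequent-variable elimination, simplified to
# variables occurring in exactly one clause: setting such a variable's
# literal true satisfies that clause without affecting any other, so
# the clause can be removed while preserving satisfiability. Clauses
# are frozensets of nonzero ints; -v denotes the negation of v.

from collections import Counter

def eliminate_singletons(clauses):
    """Repeatedly remove clauses containing a variable that occurs in
    only one clause of the instance."""
    clauses = list(clauses)
    changed = True
    while changed:
        changed = False
        counts = Counter(abs(lit) for c in clauses for lit in c)
        for c in clauses:
            if any(counts[abs(lit)] == 1 for lit in c):
                clauses.remove(c)   # assign that variable to satisfy c
                changed = True
                break               # recount occurrences from scratch
    return clauses

# Removing one singleton clause can create new singletons, so the
# elimination cascades: here the whole instance melts away.
remaining = eliminate_singletons(
    [frozenset({1, 2}), frozenset({-1, 3}), frozenset({4})])
print(remaining)  # []
```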
Garbage Collection and Other Optimizations
, 1987
Abstract

Cited by 14 (0 self)
Existing techniques for garbage collection and machine code optimizations can interfere with each other. The inability to fully optimize code in a garbage-collected system is a hidden cost of garbage collection. One solution to this problem is proposed; an inexpensive protocol that permits most optimizations and garbage collection to coexist. A second approach to this problem and a separate problem in its own right is to reduce the need for garbage collection. This requires analysis of storage lifetime. Inferring storage lifetime is difficult in a language with nested and recursive data structures, but it is precisely these languages in which garbage collection is most useful. An improved analysis for "storage containment" is described. Containment information can be represented in a directed graph. The derivation of this graph falls into a monotone dataflow analysis framework; in addition, the derivation has the Church-Rosser property. The graphs produced in the analysis of a valuea...
A linear-time approximation algorithm for weighted matchings in graphs
 ACM TRANSACTIONS ON ALGORITHMS
, 2005
Abstract

Cited by 14 (0 self)
Approximation algorithms have so far mainly been studied for problems that are not known to have polynomial time algorithms for solving them exactly. Here we propose an approximation algorithm for the weighted matching problem in graphs, which can be solved in polynomial time. The weighted matching problem is to find a matching in an edge weighted graph that has maximum weight. The first polynomial-time algorithm for this problem was given by Edmonds in 1965. The fastest known algorithm for the weighted matching problem has a running time of O(nm + n² log n). Many real world problems require graphs of such large size that this running time is too costly. Therefore, there is considerable need for faster approximation algorithms for the weighted matching problem. We present a linear-time approximation algorithm for the weighted matching problem with a performance ratio arbitrarily close to 2/3. This improves the previously best performance ratio of 1/2. Our algorithm is not only of theoretical interest, but, because it is easy to implement and the constants involved are quite small, it is also useful in practice.