Results 1–10 of 28
The NP-completeness column: an ongoing guide
Journal of Algorithms, 1985
Abstract

Cited by 218 (0 self)
This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., New York, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder) presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time solvability, etc.) or open problems they would like publicized, should ...
Quantum complexities of ordered searching, sorting, and element distinctness
2001
Optimal and Efficient Clock Synchronization Under Drifting Clocks (Extended Abstract)
In Proceedings of the 18th Annual ACM Symposium on Principles of Distributed Computing, 1999
Abstract

Cited by 28 (1 self)
We consider the classical problem of clock synchronization in distributed systems. Previously, this problem was solved optimally and efficiently only in the case when all individual clocks are non-drifting, i.e., only for systems where all clocks advance at the rate of real time. In this paper, we present a new algorithm for systems with drifting clocks, which is the first optimal algorithm to solve the problem efficiently: clock drift bounds and message latency bounds may be arbitrary, and the computational complexity depends on the communication pattern of the system in a way that is bounded by a polynomial in the network size for most systems. More specifically, the complexity is polynomial in the maximal number of messages known to be sent but not received, the relative system speed, and timestamp s...
A Lower Bound for Parallel String Matching
SIAM J. Comput., 1993
Abstract

Cited by 25 (13 self)
This talk presents the derivation of an Ω(log log m) lower bound on the number of rounds necessary for finding occurrences of a pattern string P[1..m] in a text string T[1..2m] in parallel using m comparisons in each round. The parallel complexity of the string matching problem using p processors for general alphabets follows.
1. Introduction. Better and better parallel algorithms have been designed for string matching. All are on the CRCW-PRAM with the weakest form of simultaneous-write conflict resolution: all processors that write into the same memory location must write the same value. The best CREW-PRAM algorithms are those obtained from the CRCW algorithms at a logarithmic loss of efficiency. Optimal algorithms have been designed: O(log m) time in [8, 17] and O(log log m) time in [4]. (An optimal algorithm is one with pt = O(n), where t is the time and p is the number of processors used.) Recently, Vishkin [18] developed an optimal O(log m) time algorithm. Unlike...
Two lower bounds for branching programs
1986
Abstract

Cited by 19 (1 self)
The first result concerns branching programs of width (log n)^O(1). We give an Ω(n log n / log log n) lower bound for the size of such branching programs computing almost any symmetric Boolean function, and in particular the following explicit function: "the sum of the input variables is a quadratic residue mod p", where p is any given prime between n^(1/4) and n^(1/3). This strengthens previous nonlinear lower bounds obtained by Chandra, Furst, Lipton and by Pudlák. We mention that by iterating our method the result can be further strengthened to Ω(n log n). The second result is a c^n lower bound for read-once-only branching programs computing an explicit Boolean function. For n = (v choose 2), the function computes the parity of the number of triangles in a graph on v vertices. This improves previous exp(c√n) lower bounds for other graph functions by Wegener and Žák. The result implies a linear lower bound on the space complexity of this Boolean function on "eraser machines", i.e., machines that erase each input bit immediately after having read it.
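The explicit symmetric function behind the first bound is easy to state concretely. The sketch below is my own illustration (hypothetical function names, not code from the paper); it evaluates "the sum of the input bits is a quadratic residue mod p" via Euler's criterion:

```python
def is_quadratic_residue(a: int, p: int) -> bool:
    """Euler's criterion: for an odd prime p, a nonzero a is a quadratic
    residue mod p iff a^((p-1)/2) = 1 (mod p)."""
    a %= p
    if a == 0:
        return False  # convention here: 0 is not counted as a residue
    return pow(a, (p - 1) // 2, p) == 1

def qr_sum_function(bits: list[int], p: int) -> int:
    """The paper's explicit symmetric Boolean function: output 1 iff the
    sum of the 0/1 input variables is a quadratic residue mod p."""
    return 1 if is_quadratic_residue(sum(bits), p) else 0
```

Being symmetric, the function depends only on the number of 1s in the input, yet (for a suitable prime p) it is hard for bounded-width branching programs.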
A timespace tradeoff for element distinctness
SIAM Journal on Computing, 1987
Abstract

Cited by 16 (0 self)
Abstract. In "A time-space tradeoff for sorting on non-oblivious machines", Borodin et al. [J. Comput. System Sci., 22 (1981), pp. 351–364] proved that sorting n elements requires T·S = Ω(n²), where T = time and S = space, on a comparison-based branching program. Although element distinctness and sorting are equivalent problems on a computation tree, the stated tradeoff result does not immediately follow for element distinctness, or indeed for any decision problem. In this paper, we are able to show that T·S = Ω(n^(3/2) √(log n)) for deciding element distinctness (or the sign of a permutation).
Key words. time-space tradeoffs, computational complexity, time lower bounds, space lower bounds
AMS(MOS) subject classification. 68Q25
1. Introduction. Time-space
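For concreteness, the decision problem bounded above can be sketched as follows. This is my own illustration, not code from the paper: the straightforward comparison-based approach sorts first, spending T = O(n log n) comparisons and S = O(n) space, well above the T·S = Ω(n^(3/2) √(log n)) tradeoff curve.

```python
def element_distinct(xs):
    """Decide element distinctness with comparisons only: sort, then
    check that no two adjacent elements of the sorted order are equal."""
    ys = sorted(xs)  # O(n log n) comparisons, O(n) extra space
    return all(ys[i] < ys[i + 1] for i in range(len(ys) - 1))
```

The tradeoff says that any comparison branching program that spends much less space must pay for it in time, and vice versa.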
Optimal Time-Space Trade-Offs for Sorting
In Proc. 39th IEEE Sympos. Found. Comput. Sci., 1998
Abstract

Cited by 12 (0 self)
We study the fundamental problem of sorting in a sequential model of computation, and in particular consider the time-space tradeoff (product of time and space) for this problem. Beame has ...
Communication-space tradeoffs for unrestricted protocols
SIAM Journal on Computing, 1994
Abstract

Cited by 10 (0 self)
This paper introduces communicating branching programs and develops a general technique for demonstrating communication-space tradeoffs for pairs of communicating branching programs. This technique is then used to prove communication-space tradeoffs for any pair of communicating branching programs that hashes according to a universal family of hash functions. Other tradeoffs follow from this result. As an example, any pair of communicating Boolean branching programs that computes matrix-vector products over GF(2) requires communication-space product Ω(n²), provided the space used is o(n / log n). These are the first examples of communication-space tradeoffs in a completely general model of communicating processes.
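The matrix-vector product over GF(2) used in the example is simple to state. The sketch below is my own illustration of the function being computed (not code from the paper): each output bit is the parity of the bitwise AND of a matrix row with the input vector.

```python
def matvec_gf2(A, x):
    """Matrix-vector product over GF(2): A is a list of 0/1 rows, x a 0/1
    vector; output bit i is the inner product of row i and x, mod 2."""
    return [sum(a & b for a, b in zip(row, x)) % 2 for row in A]
```

The tradeoff concerns two communicating parties that must jointly evaluate all n output bits of this function while using little workspace.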
Efficient String Algorithmics
1992
Abstract

Cited by 9 (6 self)
Problems involving strings arise in many areas of computer science and have numerous practical applications. We consider several problems from a theoretical perspective and provide efficient algorithms and lower bounds for these problems in sequential and parallel models of computation. In the sequential setting, we present new algorithms for the string matching problem improving the previous bounds on the number of comparisons performed by such algorithms. In parallel computation, we present tight algorithms and lower bounds for the string matching problem, for finding the periods of a string, for detecting squares and for finding initial palindromes.
Saving Comparisons in the Crochemore-Perrin String Matching Algorithm
In Proc. of 1st European Symp. on Algorithms, 1992
Abstract

Cited by 9 (1 self)
Crochemore and Perrin discovered an elegant linear-time, constant-space string matching algorithm that makes at most 2n − m symbol comparisons. This paper shows how to modify their algorithm to use fewer comparisons. Given any fixed ε > 0, the modified algorithm takes linear time, uses constant space and makes at most n + ⌊((1+ε)/2)(n − m)⌋ comparisons. If O(log m) space is available, then the algorithm makes at most n + ⌊(n − m)/2⌋ comparisons. The pattern preprocessing step also takes linear time and uses constant space. These are the first string matching algorithms that make fewer than 2n − m comparisons and use sublinear space.
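The stated comparison bounds can be evaluated for concrete text length n, pattern length m, and ε. The helpers below are my own hypothetical illustration of the formulas quoted in the abstract, not code from the paper:

```python
import math

def classic_bound(n, m):
    """Original Crochemore-Perrin bound: at most 2n - m comparisons."""
    return 2 * n - m

def constant_space_bound(n, m, eps):
    """Modified algorithm, constant space: n + floor((1+eps)/2 * (n-m))."""
    return n + math.floor((1 + eps) / 2 * (n - m))

def log_space_bound(n, m):
    """With O(log m) space: n + floor((n-m)/2) comparisons."""
    return n + (n - m) // 2
```

For example, with n = 100 and m = 10 the classic bound allows 190 comparisons, while the O(log m)-space variant needs at most 145, showing the saving the paper claims.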