Results 1-10 of 61
Programming Parallel Algorithms, 1996
"... In the past 20 years there has been treftlendous progress in developing and analyzing parallel algorithftls. Researchers have developed efficient parallel algorithms to solve most problems for which efficient sequential solutions are known. Although some ofthese algorithms are efficient only in a th ..."
Abstract

Cited by 193 (9 self)
In the past 20 years there has been tremendous progress in developing and analyzing parallel algorithms. Researchers have developed efficient parallel algorithms to solve most problems for which efficient sequential solutions are known. Although some of these algorithms are efficient only in a theoretical framework, many are quite efficient in practice or have key ideas that have been used in efficient implementations. This research on parallel algorithms has not only improved our general understanding of parallelism but in several cases has led to improvements in sequential algorithms. Unfortunately there has been less success in developing good languages for programming parallel algorithms, particularly languages that are well suited for teaching and prototyping algorithms. There has been a large gap between languages ...
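The kind of language the abstract has in mind builds algorithms out of data-parallel primitives such as scan (prefix sum). As a hedged illustration, independent of the paper, here is a minimal Python sketch of the work-efficient scan; the names and structure are ours, written sequentially for clarity, with each pass over the array standing for one parallel step and recursion depth O(log n).

```python
# A minimal sketch (ours, not from the paper) of the work-efficient
# parallel prefix-sum (scan) primitive that data-parallel languages of
# this kind expose.

def scan(xs):
    """Exclusive prefix sum: scan([3, 1, 7]) -> [0, 3, 4]."""
    if len(xs) == 1:
        return [0]
    # Contract: sum adjacent pairs (all pairs in parallel).
    pairs = [xs[i] + xs[i + 1] for i in range(0, len(xs) - 1, 2)]
    if len(xs) % 2 == 1:
        pairs.append(xs[-1])
    sub = scan(pairs)  # recurse on an array of half the size
    # Expand: even positions copy, odd positions add (again parallel).
    return [sub[i // 2] if i % 2 == 0 else sub[i // 2] + xs[i - 1]
            for i in range(len(xs))]

print(scan([3, 1, 7, 0, 4, 1, 6, 3]))  # [0, 3, 4, 11, 11, 15, 16, 22]
```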
The NP-completeness column: an ongoing guide
Journal of Algorithms, 1985
"... This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NPcompleteness. The presentation is modeled on that used by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NPCompleteness,’ ’ W. H. Freeman & Co ..."
Abstract

Cited by 188 (0 self)
This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., New York, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder) presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time solvability, etc.) or open problems they would like publicized should ...
On Hiding Information from an Oracle, 1989
"... : We consider the problem of computing with encrypted data. Player A wishes to know the value f(x) for some x but lacks the power to compute it. Player B has the power to compute f and is willing to send f(y) to A if she sends him y, for any y. Informally, an encryption scheme for the problem f is a ..."
Abstract

Cited by 129 (15 self)
We consider the problem of computing with encrypted data. Player A wishes to know the value f(x) for some x but lacks the power to compute it. Player B has the power to compute f and is willing to send f(y) to A if she sends him y, for any y. Informally, an encryption scheme for the problem f is a method by which A, using her inferior resources, can transform the cleartext instance x into an encrypted instance y, obtain f(y) from B, and infer f(x) from f(y) in such a way that B cannot infer x from y. When such an encryption scheme exists, we say that f is encryptable. The framework defined in this paper enables us to prove precise statements about what an encrypted instance hides and what it leaks, in an information-theoretic sense. Our definitions are cast in the language of probability theory and do not involve assumptions such as the intractability of factoring or the existence of one-way functions. We use our framework to describe encryption schemes for some well-known function ...
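As an illustration of such a scheme (our example, in the spirit of the framework rather than one taken verbatim from the paper), consider f(x) = x^(-1) mod p: A can blind x with a random factor so that the instance B sees is uniformly distributed and hides x information-theoretically. A minimal Python sketch, assuming a public prime p shared by A and B:

```python
# Illustrative sketch (ours): an encryption scheme for f(x) = x^(-1) mod p.
import secrets

p = 2**127 - 1  # public prime modulus, assumed shared by A and B

def oracle_B(y):
    """B's powerful computation: the modular inverse of y."""
    return pow(y, -1, p)

def player_A(x):
    r = secrets.randbelow(p - 1) + 1  # random blinding factor in 1..p-1
    y = (x * r) % p                   # encrypted instance sent to B
    fy = oracle_B(y)                  # f(y) = (x*r)^(-1) mod p
    return (fy * r) % p               # recover f(x) = x^(-1) mod p

x = 123456789
assert player_A(x) == pow(x, -1, p)  # A learns f(x); B saw only r*x mod p
```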
Some Connections between Bounded Query Classes and Non-Uniform Complexity
In Proceedings of the 5th Structure in Complexity Theory Conference, 1990
"... This paper is dedicated to the memory of Ronald V. Book, 19371997. ..."
Abstract

Cited by 71 (23 self)
This paper is dedicated to the memory of Ronald V. Book, 1937-1997.
New Collapse Consequences of NP Having Small Circuits, 1995
"... . We show that if a selfreducible set has polynomialsize circuits, then it is low for the probabilistic class ZPP(NP). As a consequence we get a deeper collapse of the polynomialtime hierarchy PH to ZPP(NP) under the assumption that NP has polynomialsize circuits. This improves on the wellknown ..."
Abstract

Cited by 57 (8 self)
We show that if a self-reducible set has polynomial-size circuits, then it is low for the probabilistic class ZPP(NP). As a consequence we get a deeper collapse of the polynomial-time hierarchy PH to ZPP(NP) under the assumption that NP has polynomial-size circuits. This improves on the well-known result of Karp, Lipton, and Sipser (1980) stating a collapse of PH to its second level Σ^P_2 under the same assumption. As a further consequence, we derive new collapse consequences under the assumption that complexity classes like UP, FewP, and C=P have polynomial-size circuits. Finally, we investigate the circuit-size complexity of several language classes. In particular, we show that for every fixed polynomial s, there is a set in ZPP(NP) which does not have O(s(n))-size circuits.
Key words. polynomial-size circuits, advice classes, lowness, randomized computation
AMS subject classifications. 03D10, 03D15, 68Q10, 68Q15
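In symbols, the improvement over Karp-Lipton-Sipser described above (our restatement; the containment uses the standard fact that ZPP^NP ⊆ NP^NP = Σ^P_2, so the new collapse is indeed deeper):

```latex
% Karp--Lipton--Sipser (1980): NP \subseteq P/poly  =>  PH = \Sigma^P_2
% This paper:                  NP \subseteq P/poly  =>  PH = ZPP^{NP}
\[
  \mathrm{NP} \subseteq \mathrm{P/poly}
  \;\Longrightarrow\;
  \mathrm{PH} \;=\; \mathrm{ZPP}^{\mathrm{NP}} \;\subseteq\; \Sigma^{P}_{2}
\]
```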
Simulating Boolean Circuits on a DNA Computer, 1997
"... We demonstrate that DNA computers can simulate Boolean circuits with a small overhead. Boolean circuits embody the notion of massively parallel signal processing and are jrequen,tly encountered in many parallel algorithms. Many important problems such as sorting, integer arithmetic, and matrix mult ..."
Abstract

Cited by 55 (9 self)
We demonstrate that DNA computers can simulate Boolean circuits with a small overhead. Boolean circuits embody the notion of massively parallel signal processing and are frequently encountered in many parallel algorithms. Many important problems such as sorting, integer arithmetic, and matrix multiplication are known to be computable by small-size Boolean circuits much faster than by ordinary sequential digital computers. This paper shows that DNA chemistry allows one to simulate large semi-unbounded fan-in Boolean circuits with a logarithmic slowdown in computation time. Also, for the class NC¹, the slowdown can be reduced to a constant. In this algorithm we have encoded the inputs, the Boolean AND gates, and the OR gates as DNA oligonucleotide sequences. We operate on the gates and the inputs by standard molecular techniques of sequence-specific annealing, ligation, separation by size, amplification, sequence-specific cleavage, and detection by size. Additional steps of amplification are not necessary for NC¹ circuits. Preliminary biochemical experiments on a small test circuit have produced encouraging results. Further confirmatory experiments are in progress.
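For orientation: a semi-unbounded fan-in circuit has fan-in-2 AND gates and unbounded fan-in OR gates and is evaluated level by level; in the DNA simulation each level costs one round of molecular operations, so lab time tracks circuit depth rather than size. A minimal Python sketch of that level-by-level evaluation (ours, not the paper's molecular protocol):

```python
# Each level maps gate name -> (kind, names of inputs from earlier levels).
circuit = [
    {"g1": ("OR",  ["x0", "x1", "x2"]),   # unbounded fan-in OR
     "g2": ("AND", ["x1", "x3"])},        # fan-in-2 AND
    {"out": ("AND", ["g1", "g2"])},
]

def evaluate(circuit, inputs):
    values = dict(inputs)
    for level in circuit:              # sequential here, parallel in DNA
        for name, (kind, args) in level.items():
            bits = [values[a] for a in args]
            values[name] = all(bits) if kind == "AND" else any(bits)
    return values

print(evaluate(circuit, {"x0": 0, "x1": 1, "x2": 0, "x3": 1})["out"])  # True
```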
On Polynomial-Time Bounded Truth-Table Reducibility of NP Sets to Sparse Sets, 1991
"... We prove that if P ≠ NP, then there exists a set in NP that is not polynomial time bounded truthtable reducible (in short, p btt reducible) to any sparse set. In other words, we prove that no sparse p btt hard set exists for NP unless P = NP. By using the technique proving this result, we in ..."
Abstract

Cited by 44 (3 self)
We prove that if P ≠ NP, then there exists a set in NP that is not polynomial-time bounded truth-table reducible (in short, ≤^p_btt-reducible) to any sparse set. In other words, we prove that no sparse ≤^p_btt-hard set exists for NP unless P = NP. Using the technique behind this result, we investigate the intractability of several number-theoretic decision problems, i.e., decision problems defined naturally from number-theoretic problems. We show that if such a number-theoretic decision problem is not in P, then it is not ≤^p_btt-reducible to any sparse set.
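For readers unfamiliar with the reducibility: a ≤^p_btt reduction fixes a constant k, generates k queries from the input nonadaptively (all before any oracle answer is seen), and combines the oracle's k answers with a polynomial-time truth table. A minimal Python sketch of this shape (illustrative names, not from the paper):

```python
def btt_reduce(x, queries, evaluator, B):
    qs = queries(x)                    # k queries, computed nonadaptively
    answers = [q in B for q in qs]     # k membership tests against B
    return evaluator(x, answers)       # truth table applied to the answers

# Toy 1-btt reduction: EVEN = {x : |x| is even} reduces to B = {"yes"}.
B = {"yes"}
even = lambda x: btt_reduce(
    x,
    queries=lambda s: ["yes" if len(s) % 2 == 0 else "no"],
    evaluator=lambda s, ans: ans[0],
    B=B)
print(even("abcd"), even("abc"))   # True False
```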
Time-Space Tradeoffs for Branching Programs, 1999
"... We obtain the first nontrivial timespace tradeoff lower bound for functions f : {0, 1}^n → {0, 1} on general branching programs by exhibiting a Boolean function f that requires exponential size to be computed by any branching program of length (1 + ε)n, for some constant ε > 0 ..."
Abstract

Cited by 44 (2 self)
We obtain the first nontrivial time-space tradeoff lower bound for functions f : {0, 1}^n → {0, 1} on general branching programs by exhibiting a Boolean function f that requires exponential size to be computed by any branching program of length (1 + ε)n, for some constant ε > 0. We also give the first separation result between the syntactic and semantic read-k models [BRS93] for k > 1 by showing that polynomial-size semantic read-twice branching programs can compute functions that require exponential size on any syntactic read-k branching program. We also show...
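For orientation: a branching program is a DAG whose nodes are labelled with input variables; its length models time, the logarithm of its size models space, and in a syntactic read-k program no variable occurs more than k times on any path. A minimal Python evaluator (ours, for illustration):

```python
# node -> (variable index, target on 0, target on 1); "0"/"1" are sinks.
program = {
    "s":  (0, "n1", "n2"),
    "n1": (1, "0",  "1"),
    "n2": (1, "1",  "0"),
}  # computes x0 XOR x1; each variable read once per path (read-once)

def evaluate(program, x, start="s"):
    node = start
    while node not in ("0", "1"):
        var, lo, hi = program[node]   # one step = one variable test
        node = hi if x[var] else lo
    return node == "1"

for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(bits, evaluate(program, bits))
```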
Fast parallel matrix and gcd computations
In Proc. of the 23rd Annual Symposium on Foundations of Computer Science (FOCS'82), 1982
"... Parallel algorithms to compute the determinant and characteristic polynomial of matrices and the gcd of polynomials are presented. The rank of matrices and solutions of arbitrary systems of linear equations are computed by parallel Las Vegas algorithms. All algorithms work over arbitrary fields. The ..."
Abstract

Cited by 41 (1 self)
Parallel algorithms to compute the determinant and characteristic polynomial of matrices and the gcd of polynomials are presented. The rank of matrices and solutions of arbitrary systems of linear equations are computed by parallel Las Vegas algorithms. All algorithms work over arbitrary fields. They run in parallel time O(log² n) (where n is the number of inputs) and use a polynomial number of processors.
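The Las Vegas ingredient is the usual one: draw a random evaluation point, check a deterministic certificate, and retry on failure, so the answer is always correct and only the running time is random. A minimal Python sketch of that pattern (ours, not the paper's algorithm), certifying that a small matrix of polynomials is nonsingular at some point:

```python
import random
from fractions import Fraction

def det(m):
    """Determinant by cofactor expansion (fine for a small sketch)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def certify_nonsingular(poly_matrix, trials=100):
    """Entries are functions t -> field element. Returns a point t with
    det(M(t)) != 0, certifying det(M) is not the zero polynomial, or
    None if every trial fails -- it never returns a wrong answer."""
    for _ in range(trials):
        t = Fraction(random.randrange(1_000_000))
        if det([[entry(t) for entry in row] for row in poly_matrix]) != 0:
            return t
    return None

# M(t) = [[1, t], [t, 1]] has det 1 - t^2, zero only at t = +-1.
M = [[lambda t: Fraction(1), lambda t: t],
     [lambda t: t,           lambda t: Fraction(1)]]
print(certify_nonsingular(M))
```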
Horizons of Parallel Computation
Journal of Parallel and Distributed Computing, 1993
"... This paper considers the ultimate impact of fundamental physical limitationsnotably, speed of light and device sizeon parallel computing machines. Although we fully expect an innovative and very gradual evolution to the limiting situation, we take here the provocative view of exploring the ..."
Abstract

Cited by 39 (3 self)
This paper considers the ultimate impact of fundamental physical limitations, notably the speed of light and device size, on parallel computing machines. Although we fully expect an innovative and very gradual evolution to the limiting situation, we take here the provocative view of exploring the consequences of the accomplished attainment of the physical bounds. The main result is that scalability holds only for neighborly interconnections, such as the square mesh, of bounded-size synchronous modules, presumably of the area-universal type. We also discuss the ultimate infeasibility of latency-hiding, the violation of intuitive maximal speedups, and the emerging novel processor-time tradeoffs.
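The speed-of-light constraint is easy to quantify: the distance a signal can cover in one clock cycle shrinks linearly with clock rate, which is why only physically neighboring modules (a mesh-like interconnection) can communicate in O(1) cycles at high rates. A back-of-envelope Python check (ours, not from the paper):

```python
C = 299_792_458  # speed of light in m/s

for ghz in (1, 10, 100):
    reach_mm = C / (ghz * 1e9) * 1000  # metres per cycle -> millimetres
    print(f"{ghz:>3} GHz clock: at most {reach_mm:6.1f} mm per cycle")
# 1 GHz ~ 300 mm, 10 GHz ~ 30 mm, 100 GHz ~ 3 mm
```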