Results 1-10 of 139
Pseudo-Random Generation from One-Way Functions
Proc. 20th STOC, 1988
"... Pseudorandom generators are fundamental to many theoretical and applied aspects of computing. We show howto construct a pseudorandom generator from any oneway function. Since it is easy to construct a oneway function from a pseudorandom generator, this result shows that there is a pseudorandom gene ..."
Abstract

Cited by 725 (21 self)
 Add to MetaCart
Abstract: Pseudorandom generators are fundamental to many theoretical and applied aspects of computing. We show how to construct a pseudorandom generator from any one-way function. Since it is easy to construct a one-way function from a pseudorandom generator, this result shows that there is a pseudorandom generator iff there is a one-way function.
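A minimal sketch of the "easy direction" mentioned above, assuming some length-doubling pseudorandom generator G (the stand-in generator here is purely illustrative, not cryptographic): viewed simply as a function, G is one-way, because an efficient inverter could be turned into a distinguisher between G's outputs and uniform strings.

    import hashlib, os

    def G(seed: bytes) -> bytes:
        # Toy length-doubling "generator": an illustrative stand-in only,
        # NOT a real PRG; it merely fixes the types for the argument below.
        h0 = hashlib.sha256(seed + b"\x00").digest()
        h1 = hashlib.sha256(seed + b"\x01").digest()
        return (h0 + h1)[: 2 * len(seed)]

    def distinguisher_from_inverter(invert, candidate: bytes) -> bool:
        # If `invert` finds preimages of G's outputs, this tells G's outputs
        # apart from uniform: a uniform 2n-bit string almost never has a
        # preimage under G, while a genuine output always does.
        guess = invert(candidate)
        return guess is not None and G(guess) == candidate

    seed = os.urandom(16)
    out = G(seed)                      # 32 bytes stretched from 16
    assert len(out) == 2 * len(seed)
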
Proof verification and hardness of approximation problems
In Proc. 33rd Ann. IEEE Symp. on Found. of Comp. Sci., 1992
"... We show that every language in NP has a probablistic verifier that checks membership proofs for it using logarithmic number of random bits and by examining a constant number of bits in the proof. If a string is in the language, then there exists a proof such that the verifier accepts with probabilit ..."
Abstract

Cited by 718 (45 self)
 Add to MetaCart
Abstract: We show that every language in NP has a probabilistic verifier that checks membership proofs for it using a logarithmic number of random bits and by examining a constant number of bits in the proof. If a string is in the language, then there exists a proof such that the verifier accepts with probability 1 (i.e., for every choice of its random string). For strings not in the language, the verifier rejects every provided "proof" with probability at least 1/2. Our result builds upon and improves a recent result of Arora and Safra [6], whose verifiers examine a non-constant number of bits in the proof (though this number is a very slowly growing function of the input length). As a consequence we prove that no MAX SNP-hard problem has a polynomial-time approximation scheme, unless NP = P. The class MAX SNP was defined by Papadimitriou and Yannakakis [82], and hard problems for this class include vertex cover, maximum satisfiability, maximum cut, metric TSP, Steiner trees, and shortest superstring. We also improve upon the clique hardness results of Feige, Goldwasser, Lovász, Safra and Szegedy [42] and Arora and Safra [6], and show that there exists a positive ε such that approximating the maximum clique size in an N-vertex graph to within a factor of N^ε is NP-hard.
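The 1/2 rejection probability quoted above is all that strong soundness requires: independent repetition drives the error down exponentially while only multiplying the (logarithmic) randomness and (constant) query budgets. A hedged sketch of that amplification step, with the single-run verifier treated as an assumed black box:

    from typing import Callable

    def amplify(verify_once: Callable[[str, Callable[[int], int]], bool],
                x: str, proof_bit: Callable[[int], int], k: int = 20) -> bool:
        # Accept iff all k independent runs of the single-run verifier accept.
        #   Completeness: a correct proof passes each run with probability 1,
        #   hence passes overall with probability 1.
        #   Soundness: a bogus proof survives one run with probability <= 1/2,
        #   so it survives all k runs with probability <= (1/2) ** k.
        return all(verify_once(x, proof_bit) for _ in range(k))
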
Simple Constructions of Almost k-wise Independent Random Variables
1992
"... We present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is (2 + o(1))(log log n + k/2 + log k + log 1 ɛ), where ɛ is the statistical difference between the dist ..."
Abstract

Cited by 270 (41 self)
 Add to MetaCart
Abstract: We present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is (2 + o(1))(log log n + k/2 + log k + log(1/ε)), where ε is the statistical difference between the distribution induced on any k bit locations and the uniform distribution. This is asymptotically comparable to the construction recently presented by Naor and Naor (our size bound is better as long as ε < 1/(k log n)). An additional advantage of our constructions is their simplicity.
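A quick numerical reading of the seed-length bound above, ignoring the o(1) term and taking base-2 logarithms (the abstract does not fix the base or constants; the figures here are illustrative only):

    import math

    def approx_seed_bits(n: int, k: int, eps: float) -> float:
        # ~ 2 * (log log n + k/2 + log k + log(1/eps)) bits
        return 2 * (math.log2(math.log2(n)) + k / 2
                    + math.log2(k) + math.log2(1 / eps))

    # Example: n = 2**20 variables, k = 10, eps = 2**-10 gives roughly
    # 2 * (4.3 + 5 + 3.3 + 10), i.e. about 45 bits, versus n = 2**20 bits
    # for fully independent coins.
    print(round(approx_seed_bits(2**20, 10, 2**-10)))
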
Small-Bias Probability Spaces: Efficient Constructions and Applications
SIAM J. Comput., 1993
"... We show how to efficiently construct a small probability space on n binary random variables such that for every subset, its parity is either zero or one with "almost" equal probability. They are called fflbiased random variables. The number of random bits needed to generate the random variables is ..."
Abstract

Cited by 258 (15 self)
 Add to MetaCart
Abstract: We show how to efficiently construct a small probability space on n binary random variables such that for every subset, its parity is either zero or one with "almost" equal probability. These are called ε-biased random variables. The number of random bits needed to generate the random variables is O(log n + log(1/ε)). Thus, if ε is polynomially small, then the size of the sample space is also polynomial. Random variables that are ε-biased can be used to construct "almost" k-wise independent random variables, where ε is a function of k. These probability spaces have various applications: 1. Derandomization of algorithms: many randomized algorithms that require only k-wise independence of their random bits (where k is bounded by O(log n)) can be derandomized by using ε-biased random variables. 2. Reducing the number of random bits required by certain randomized algorithms, e.g., verification of matrix multiplication. 3. Exhaustive testing of combinatorial circuits ...
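Application 2 above refers to randomized verification of matrix products; a minimal sketch of that underlying check (Freivalds' test), written here with fully independent random bits. The paper's point is that an ε-biased sample space can stand in for these bits, shrinking the number of random bits used.

    import random

    def mat_vec(M, v):
        # Multiply a matrix (list of rows) by a vector over the integers.
        return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

    def freivalds(A, B, C, trials: int = 20) -> bool:
        # Check whether A*B == C for n x n integer matrices.  Each trial draws
        # r in {0,1}^n and compares A(Br) with Cr; if A*B != C, a single trial
        # detects it with probability >= 1/2, so a wrong C survives all trials
        # with probability <= 2**-trials.
        n = len(A)
        for _ in range(trials):
            r = [random.randint(0, 1) for _ in range(n)]
            if mat_vec(A, mat_vec(B, r)) != mat_vec(C, r):
                return False
        return True

    A = [[1, 2], [3, 4]]; B = [[0, 1], [1, 0]]
    C_good = [[2, 1], [4, 3]]        # equals A*B
    C_bad  = [[2, 1], [4, 4]]        # one wrong entry
    print(freivalds(A, B, C_good), freivalds(A, B, C_bad))
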
Randomness is Linear in Space
Journal of Computer and System Sciences, 1993
"... We show that any randomized algorithm that runs in space S and time T and uses poly(S) random bits can be simulated using only O(S) random bits in space S and time T poly(S). A deterministic simulation in space S follows. Of independent interest is our main technical tool: a procedure which extracts ..."
Abstract

Cited by 229 (20 self)
 Add to MetaCart
Abstract: We show that any randomized algorithm that runs in space S and time T and uses poly(S) random bits can be simulated using only O(S) random bits in space S and time T · poly(S). A deterministic simulation in space S follows. Of independent interest is our main technical tool: a procedure which extracts randomness from a defective random source using a small additional number of truly random bits.
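A hedged sketch of why the deterministic simulation follows once only O(S) random bits are needed (the standard enumeration argument, not the paper's extraction procedure): a deterministic machine can cycle through all seeds and take a majority vote, and the seed counter itself occupies only O(S) space.

    from typing import Callable

    def derandomize_by_enumeration(randomized_alg: Callable[[str, int], bool],
                                   x: str, num_random_bits: int) -> bool:
        # `randomized_alg` is an assumed black box that reads its coins from
        # the integer seed.  Enumerating all 2**num_random_bits seeds takes
        # exponential time but only O(num_random_bits) extra space for the
        # counter and the running tally.
        accept = 0
        for seed in range(2 ** num_random_bits):
            if randomized_alg(x, seed):
                accept += 1
        return 2 * accept > 2 ** num_random_bits   # majority vote
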
Generalized Privacy Amplification
IEEE Transactions on Information Theory, 1995
"... This paper provides a general treatment of privacy amplification by public discussion, a concept introduced by Bennett, Brassard and Robert [1] for a special scenario. The results have applications to unconditionallysecure secretkey agreement protocols, quantum cryptography and to a nonasymptotic ..."
Abstract

Cited by 215 (18 self)
 Add to MetaCart
Abstract: This paper provides a general treatment of privacy amplification by public discussion, a concept introduced by Bennett, Brassard and Robert [1] for a special scenario. The results have applications to unconditionally secure secret-key agreement protocols, quantum cryptography, and to a non-asymptotic and constructive treatment of the secrecy capacity of wiretap and broadcast channels, even for a considerably strengthened definition of secrecy capacity.
I. Introduction. This paper is concerned with unconditionally secure secret-key agreement by two communicating parties Alice and Bob who both know a random variable W, for instance a random n-bit string, about which an eavesdropper Eve has incomplete information characterized by the random variable V jointly distributed with W according to P_VW. This distribution may be partially under Eve's control. Alice and Bob know nothing about P_VW, except that it satisfies a certain constraint. We present protocols by which Alice and Bob can use ...
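A minimal sketch of the privacy-amplification step in this setting, assuming (as is standard for this technique, though the specific family and parameters here are illustrative) that the publicly announced compression function is drawn from a 2-universal hash family: Alice and Bob both apply it to W and keep a short output as the key.

    import secrets

    P = (1 << 127) - 1          # a Mersenne prime larger than any W used here

    def choose_hash():
        # Publicly announced description of a Carter-Wegman universal hash.
        a = 1 + secrets.randbelow(P - 1)
        b = secrets.randbelow(P)
        return a, b

    def amplify(w: int, a: int, b: int, out_bits: int = 64) -> int:
        # h_{a,b}(w) truncated to out_bits bits.  Shrinking out_bits below the
        # min-entropy of W given Eve's view is what makes Eve's expected
        # information about the key negligible (the quantitative statement is
        # the paper's).
        return ((a * w + b) % P) % (1 << out_bits)

    # Both parties hold the same W, so they derive the same key.
    w = secrets.randbits(100)
    a, b = choose_hash()
    assert amplify(w, a, b) == amplify(w, a, b)
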
Free Bits, PCPs and Non-Approximability - Towards Tight Results
1996
"... This paper continues the investigation of the connection between proof systems and approximation. The emphasis is on proving tight nonapproximability results via consideration of measures like the "free bit complexity" and the "amortized free bit complexity" of proof systems. ..."
Abstract

Cited by 208 (40 self)
 Add to MetaCart
Abstract: This paper continues the investigation of the connection between proof systems and approximation. The emphasis is on proving tight non-approximability results via consideration of measures like the "free bit complexity" and the "amortized free bit complexity" of proof systems.
On Lattices, Learning with Errors, Random Linear Codes, and Cryptography
In STOC, 2005
"... Our main result is a reduction from worstcase lattice problems such as SVP and SIVP to a certain learning problem. This learning problem is a natural extension of the ‘learning from parity with error’ problem to higher moduli. It can also be viewed as the problem of decoding from a random linear co ..."
Abstract

Cited by 197 (3 self)
 Add to MetaCart
Abstract: Our main result is a reduction from worst-case lattice problems such as SVP and SIVP to a certain learning problem. This learning problem is a natural extension of the ‘learning from parity with error’ problem to higher moduli. It can also be viewed as the problem of decoding from a random linear code. This, we believe, gives a strong indication that these problems are hard. Our reduction, however, is quantum. Hence, an efficient solution to the learning problem implies a quantum algorithm for SVP and SIVP. A main open question is whether this reduction can be made classical. We also present a (classical) public-key cryptosystem whose security is based on the hardness of the learning problem. By the main result, its security is also based on the worst-case quantum hardness of SVP and SIVP. Previous lattice-based public-key cryptosystems, such as the one by Ajtai and Dwork, were based only on unique-SVP, a special case of SVP. The new cryptosystem is much more efficient than previous cryptosystems: the public key is of size Õ(n^2) and encrypting a message increases its size by a factor of Õ(n) (in previous cryptosystems these values are Õ(n^4) and Õ(n^2), respectively). In fact, under the assumption that all parties share a random bit string of length Õ(n^2), the size of the public key can be reduced to Õ(n).
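A toy, intentionally insecure sketch of a Regev-style scheme built on the learning problem described above, with made-up miniature parameters, a crude error distribution, and bit-by-bit encryption; it only shows the shape of key generation, encryption as a random subset sum of noisy samples, and decryption by rounding.

    import random

    n, m, q = 16, 200, 3329        # toy dimensions and modulus (assumptions)

    def keygen():
        s = [random.randrange(q) for _ in range(n)]                    # secret
        A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
        e = [random.choice([-1, 0, 1]) for _ in range(m)]              # small errors
        b = [(sum(a * sj for a, sj in zip(row, s)) + ei) % q
             for row, ei in zip(A, e)]
        return (A, b), s                                               # public, secret

    def encrypt(pk, bit):
        A, b = pk
        S = [i for i in range(m) if random.random() < 0.5]             # random subset
        u = [sum(A[i][j] for i in S) % q for j in range(n)]
        v = (sum(b[i] for i in S) + bit * (q // 2)) % q
        return u, v

    def decrypt(sk, ct):
        u, v = ct
        d = (v - sum(uj * sj for uj, sj in zip(u, sk))) % q
        return 1 if q // 4 < d < 3 * q // 4 else 0      # round to 0 or q/2

    pk, sk = keygen()
    assert all(decrypt(sk, encrypt(pk, bit)) == bit for bit in (0, 1, 1, 0))
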
Experimental Quantum Cryptography
Journal of Cryptology, 1992
"... We describe results from an apparatus and protocol designed to implement quantum key distribution, by which two users, who share no secret information initially: 1) exchange a random quantum transmission, consisting of very faint flashes of polarized light; 2) by subsequent public discussion of the ..."
Abstract

Cited by 195 (20 self)
 Add to MetaCart
Abstract: We describe results from an apparatus and protocol designed to implement quantum key distribution, by which two users, who share no secret information initially: 1) exchange a random quantum transmission, consisting of very faint flashes of polarized light; 2) by subsequent public discussion of the sent and received versions of this transmission, estimate the extent of eavesdropping that might have taken place on it; and finally 3) if this estimate is small enough, distill from the sent and received versions a smaller body of shared random information, which is certifiably secret in the sense that any third party's expected information on it is an exponentially small fraction of one bit. Because the system depends on the uncertainty principle of quantum physics, instead of the usual mathematical assumptions such as the difficulty of factoring, it remains secure against an adversary with unlimited computing power. A preliminary version of this paper was presented at Eurocrypt '90, May 21 ...
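A toy classical simulation of the sifting part of the protocol described above (within steps 1 and 2), with the quantum channel replaced by bookkeeping, no eavesdropper or noise, and error estimation and secret distillation omitted; the basis labels and pulse count are illustrative assumptions.

    import random

    def bb84_sift(n_pulses: int = 1000):
        alice_bits  = [random.randint(0, 1) for _ in range(n_pulses)]
        alice_bases = [random.choice("+x") for _ in range(n_pulses)]   # polarization bases
        bob_bases   = [random.choice("+x") for _ in range(n_pulses)]
        # Matching bases: Bob reads Alice's bit.  Mismatched bases: the outcome
        # is uniformly random (the quantum behaviour, modelled classically).
        bob_bits = [b if ab == bb else random.randint(0, 1)
                    for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
        # Public discussion of bases only: keep the positions where they agree.
        keep = [i for i in range(n_pulses) if alice_bases[i] == bob_bases[i]]
        return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]

    key_a, key_b = bb84_sift()
    assert key_a == key_b            # no noise or eavesdropping in this toy model
    print("sifted key length:", len(key_a), "of 1000 pulses")
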
Design and Analysis of Practical Public-Key Encryption Schemes Secure against Adaptive Chosen Ciphertext Attack
SIAM Journal on Computing, 2001
"... A new public key encryption scheme, along with several variants, is proposed and analyzed. The scheme and its variants are quite practical, and are proved secure against adaptive chosen ciphertext attack under standard intractability assumptions. These appear to be the first publickey encryption sc ..."
Abstract

Cited by 189 (11 self)
 Add to MetaCart
Abstract: A new public-key encryption scheme, along with several variants, is proposed and analyzed. The scheme and its variants are quite practical and are proved secure against adaptive chosen ciphertext attack under standard intractability assumptions. These appear to be the first public-key encryption schemes in the literature that are simultaneously practical and provably secure.