Results 1–10 of 83
Entity Authentication and Key Distribution
, 1993
Abstract

Cited by 461 (12 self)
Entity authentication and key distribution are central cryptographic problems in distributed computing, but up until now they have lacked even a meaningful definition. One consequence is that incorrect and inefficient protocols have proliferated. This paper provides the first treatment of these problems in the complexity-theoretic framework of modern cryptography. Addressed in detail are two problems of the symmetric, two-party setting: mutual authentication and authenticated key exchange. For each we present a definition, protocol, and proof that the protocol meets its goal, under the (minimal) assumption of a pseudorandom function. When this assumption is appropriately instantiated, the protocols given are practical and efficient.
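The PRF-based challenge-response flow the abstract alludes to can be sketched as follows. This is an illustrative three-flow exchange instantiated with HMAC-SHA256 as the pseudorandom function; the message formats and labels are assumptions, not the paper's exact protocol:

```python
import hmac, hashlib, os

def prf(key: bytes, data: bytes) -> bytes:
    # HMAC-SHA256 standing in for the pseudorandom function
    return hmac.new(key, data, hashlib.sha256).digest()

k = os.urandom(32)  # pre-shared symmetric key

# Flow 1, A -> B: a fresh challenge nonce
r_a = os.urandom(16)

# Flow 2, B -> A: B's nonce plus F_k("B" || r_a || r_b)
r_b = os.urandom(16)
tag_b = prf(k, b"B" + r_a + r_b)

# A verifies: B knows k and answered the fresh challenge r_a
assert hmac.compare_digest(tag_b, prf(k, b"B" + r_a + r_b))

# Flow 3, A -> B: F_k("A" || r_b); B verifies it the same way,
# completing mutual authentication
tag_a = prf(k, b"A" + r_b)
assert hmac.compare_digest(tag_a, prf(k, b"A" + r_b))

# For authenticated key exchange, a session key can be derived
# from the same PRF under a distinct label
session_key = prf(k, b"session" + r_a + r_b)
```

The nonces guarantee freshness (no replay), and the distinct labels keep the tags in each direction from being confused with each other or with the derived key.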
On the (im)possibility of obfuscating programs
 Lecture Notes in Computer Science
, 2001
Abstract

Cited by 187 (10 self)
Informally, an obfuscator O is an (efficient, probabilistic) “compiler” that takes as input a program (or circuit) P and produces a new program O(P) that has the same functionality as P yet is “unintelligible” in some sense. Obfuscators, if they exist, would have a wide variety of cryptographic and complexity-theoretic applications, ranging from software protection to homomorphic encryption to complexity-theoretic analogues of Rice’s theorem. Most of these applications are based on an interpretation of the “unintelligibility” condition in obfuscation as meaning that O(P) is a “virtual black box,” in the sense that anything one can efficiently compute given O(P), one could also efficiently compute given oracle access to P. In this work, we initiate a theoretical investigation of obfuscation. Our main result is that, even under very weak formalizations of the above intuition, obfuscation is impossible. We prove this by constructing a family of efficient programs P that are unobfuscatable in the sense that (a) given any efficient program P′ that computes the same function as a program P ∈ P, the “source code” P can be efficiently reconstructed, yet (b) given oracle access to a (randomly selected) program P ∈ P, no efficient algorithm can reconstruct P (or even distinguish a certain bit in the code from random) except with negligible probability. We extend our impossibility result in a number of ways, including even obfuscators that (a) are not necessarily computable in polynomial time, (b) only approximately preserve the functionality, and (c) only need to work for very restricted models of computation (TC⁰). We also rule out several potential applications of obfuscators, by constructing “unobfuscatable” signature schemes, encryption schemes, and pseudorandom function families.
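As a toy illustration of what "same functionality, hidden internals" means for a single special case (not a counter to the paper's impossibility result, which concerns general-purpose obfuscation), a point function can be hidden heuristically by storing only a hash of the accepted input; the secret value here is made up:

```python
import hashlib

SECRET = b"hunter2"

def point_fn(x: bytes) -> bool:
    # Original program P: its source reveals SECRET outright
    return x == SECRET

_digest = hashlib.sha256(SECRET).digest()

def obfuscated_point_fn(x: bytes) -> bool:
    # O(P): identical functionality, but the source carries only a hash
    return hashlib.sha256(x).digest() == _digest

# Functionality is preserved on every probe
for probe in (b"hunter2", b"guess", b""):
    assert point_fn(probe) == obfuscated_point_fn(probe)
```

Recovering SECRET from the second version requires inverting the hash, which is exactly the "virtual black box" behavior one would want in general and which the paper proves cannot be achieved for all programs.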
Limits on the Provable Consequences of One-way Permutations
, 1989
Abstract

Cited by 162 (0 self)
We present strong evidence that the implication, "if one-way permutations exist, then secure secret key agreement is possible," is not provable by standard techniques. Since both sides of this implication are widely believed true in real life, to show that the implication is false requires a new model. We consider a world where all parties have access to a black box for a randomly selected permutation. Being totally random, this permutation will be strongly one-way in a provable, information-theoretic way. We show that, if P = NP, no protocol for secret key agreement is secure in such a setting. Thus, to prove that a secret key agreement protocol which uses a one-way permutation as a black box is secure is as hard as proving P ≠ NP. We also obtain, as a corollary, that there is an oracle relative to which the implication is false, i.e., there is a one-way permutation, yet secret exchange is impossible. Thus, no technique which relativizes can prove that secret exchange can be based on any one-way permutation. Our results present a general framework for proving statements of the form, "Cryptographic application X is not likely possible based solely on complexity assumption Y."
Resettable Zero-Knowledge
 In 32nd STOC
, 1999
Abstract

Cited by 71 (7 self)
We introduce the notion of Resettable Zero-Knowledge (rZK), a new security measure for cryptographic protocols which strengthens the classical notion of zero-knowledge. In essence, an rZK protocol is one that remains zero-knowledge even if an adversary can interact with the prover many times, each time resetting the prover to its initial state and forcing him to use the same random tape.
Notions of Reducibility between Cryptographic Primitives
, 2004
Abstract

Cited by 63 (7 self)
Starting with the seminal paper of Impagliazzo and Rudich [18], there has been a large body of work showing that various cryptographic primitives cannot be reduced to each other via "black-box" reductions.
Tiny Families of Functions with Random Properties: A Quality-Size Tradeoff for Hashing
, 2003
Abstract

Cited by 52 (11 self)
We present three explicit constructions of hash functions, which exhibit a tradeoff between the size of the family (and hence the number of random bits needed to generate a member of the family) and the quality (or error parameter) of the pseudorandom property it achieves. Unlike previous constructions, most notably universal hashing, the size of our families is essentially independent of the size of the domain on which the functions operate.
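For contrast, here is the baseline the abstract refers to: sampling a member of a Carter-Wegman universal family costs a number of random bits that grows with the domain (the prime P below). Parameters are illustrative:

```python
import random

P = (1 << 61) - 1  # a Mersenne prime bounding the hash domain

def sample_universal_hash(m: int, rng: random.Random):
    # Picking (a, b) costs about 2 * log2(P) random bits, which grows
    # with the domain size -- the cost the paper's tiny families avoid
    a = rng.randrange(1, P)
    b = rng.randrange(P)
    return lambda x: ((a * x + b) % P) % m

rng = random.Random(0)
h = sample_universal_hash(1024, rng)
assert all(0 <= h(x) < 1024 for x in range(100))
```

A family whose description length is independent of the domain would need far fewer random bits per member, at the cost of a weaker (but often sufficient) error parameter.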
Bounded-concurrent secure two-party computation without setup assumptions
 STOC 2003
, 2003
Provably Secure Steganography
 in Advances in Cryptology: CRYPTO 2002
, 2002
Abstract

Cited by 46 (2 self)
Informally, steganography is the process of sending a secret message from Alice to Bob in such a way that an eavesdropper (who listens to all communications) cannot even tell that a secret message is being sent. In this work, we initiate the study of steganography from a complexity-theoretic point of view. We introduce definitions based on computational indistinguishability and we prove that the existence of one-way functions implies the existence of secure steganographic protocols. Keywords: Steganography, Cryptography, Provable Security
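A minimal sketch of the flavor of construction such a result enables: embed one bit per covertext by rejection-sampling covers until a shared PRF maps the cover to the desired bit. The covertext sampler and parameters below are stand-ins, not the paper's exact scheme:

```python
import hmac, hashlib, random

def prf_bit(key: bytes, cover: bytes) -> int:
    # Lowest bit of a PRF (HMAC-SHA256) evaluated on the covertext
    return hmac.new(key, cover, hashlib.sha256).digest()[0] & 1

def draw_cover(rng: random.Random) -> bytes:
    # Stand-in for sampling the channel's covertext distribution
    return rng.randbytes(8)

def encode(key: bytes, bits: list, rng: random.Random, max_tries: int = 64) -> list:
    covers = []
    for b in bits:
        for _ in range(max_tries):  # rejection-sample until the PRF bit matches
            c = draw_cover(rng)
            if prf_bit(key, c) == b:
                covers.append(c)
                break
    return covers

def decode(key: bytes, covers: list) -> list:
    return [prf_bit(key, c) for c in covers]

key = b"k" * 32
msg = [1, 0, 1, 1, 0]
assert decode(key, encode(key, msg, random.Random(1))) == msg
```

Because every transmitted cover is drawn from the channel's own distribution, an eavesdropper without the key sees traffic computationally indistinguishable from innocent communication.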
Keyword search and oblivious pseudorandom functions
, 2005
Abstract

Cited by 45 (4 self)
We study the problem of privacy-preserving access to a database. Particularly, we consider the problem of privacy-preserving keyword search (KS), where records in the database are accessed according to their associated keywords and where we care for the privacy of both the client and the server. We provide efficient solutions for various settings of KS, based either on specific assumptions or on general primitives (mainly oblivious transfer). Our general solutions rely on a new connection between KS and the oblivious evaluation of pseudorandom functions (OPRFs). We therefore study both the definition and construction of OPRFs and, as a corollary, give improved constructions of OPRFs that may be of independent interest.
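The KS-to-OPRF connection can be sketched as follows. For brevity the client evaluates the PRF directly where the real protocol would run an oblivious evaluation, and using one PRF output as both lookup tag and encryption pad is a simplification (a real scheme would derive them under distinct labels); all names and values are illustrative:

```python
import hmac, hashlib

def prf(key: bytes, w: bytes) -> bytes:
    return hmac.new(key, w, hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Server: index each 32-byte record under F_k(keyword)
k = b"server-prf-key".ljust(32, b"0")
records = {b"alice": b"record-A".ljust(32), b"bob": b"record-B".ljust(32)}
table = {hashlib.sha256(prf(k, w)).digest(): xor(prf(k, w), rec)
         for w, rec in records.items()}

# Client: in the real protocol the client learns F_k(w) via an oblivious
# PRF evaluation, so the server never sees w; the direct call below only
# shows the data flow.
t = prf(k, b"alice")
payload = table[hashlib.sha256(t).digest()]
assert xor(t, payload) == b"record-A".ljust(32)
```

The oblivious evaluation is what gives two-sided privacy: the server never learns the queried keyword, and the client learns F_k(w) for only the keywords it queries.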
The complexity of online memory checking
 In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science
, 2005
Abstract

Cited by 33 (3 self)
We consider the problem of storing a large file on a remote and unreliable server. To verify that the file has not been corrupted, a user could store a small private (randomized) “fingerprint” on his own computer. This is the setting for the well-studied authentication problem in cryptography, and the required fingerprint size is well understood. We study the problem of sublinear authentication: suppose the user would like to encode and store the file in a way that allows him to verify that it has not been corrupted, but without reading the entire file. If the user only wants to read q bits of the file, how large does the size s of the private fingerprint need to be? We define this problem formally, and show a tight lower bound on the relationship between s and q when the adversary is not computationally bounded, namely: s × q = Ω(n), where n is the file size. This is an easier case of the online memory checking problem, introduced by Blum et al. in 1991, and hence the same (tight) lower bound applies also to that problem. It was previously shown that when the adversary is computationally bounded, under the assumption that one-way functions exist, it is possible to construct much better online memory checkers. The same is also true for sublinear authentication schemes. We show that the existence of one-way functions is also a necessary condition: even slightly breaking the s × q = Ω(n) lower bound in a computational setting implies the existence of one-way functions.
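The stated bound translates directly into a back-of-the-envelope sizing rule, with the hidden constant in Ω(n) taken as 1 purely for illustration:

```python
def min_fingerprint_bits(n: int, q: int) -> int:
    # s * q >= n up to a constant (taken as 1 here), so s >= ceil(n / q)
    return -(-n // q)

n = 1 << 30                               # a 2^30-bit file
assert min_fingerprint_bits(n, 1024) == 1 << 20
assert min_fingerprint_bits(n, n) == 1    # reading everything: tiny fingerprint
```

Against an unbounded adversary, halving the number of bits read per verification doubles the required private storage; only computational assumptions (one-way functions) break this tradeoff.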