Results 1–10 of 95
On the (im)possibility of obfuscating programs
 Lecture Notes in Computer Science
, 2001
Abstract
Cited by 187 (10 self)
Informally, an obfuscator O is an (efficient, probabilistic) “compiler” that takes as input a program (or circuit) P and produces a new program O(P) that has the same functionality as P yet is “unintelligible” in some sense. Obfuscators, if they exist, would have a wide variety of cryptographic and complexity-theoretic applications, ranging from software protection to homomorphic encryption to complexity-theoretic analogues of Rice’s theorem. Most of these applications are based on an interpretation of the “unintelligibility” condition in obfuscation as meaning that O(P) is a “virtual black box,” in the sense that anything one can efficiently compute given O(P), one could also efficiently compute given oracle access to P. In this work, we initiate a theoretical investigation of obfuscation. Our main result is that, even under very weak formalizations of the above intuition, obfuscation is impossible. We prove this by constructing a family of efficient programs P that are unobfuscatable in the sense that (a) given any efficient program P′ that computes the same function as a program P ∈ P, the “source code” P can be efficiently reconstructed, yet (b) given oracle access to a (randomly selected) program P ∈ P, no efficient algorithm can reconstruct P (or even distinguish a certain bit in the code from random) except with negligible probability. We extend our impossibility result in a number of ways, including to obfuscators that (a) are not necessarily computable in polynomial time, (b) only approximately preserve the functionality, and (c) only need to work for very restricted models of computation (TC0). We also rule out several potential applications of obfuscators, by constructing “unobfuscatable” signature schemes, encryption schemes, and pseudorandom function families.
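The impossibility result hinges on the gap between having a program's code and having only oracle access to it. A minimal sketch of that gap in Python (a toy in the spirit of the paper's two-program construction, not its actual scheme; `make_family`, `K` and the point-function shape are illustrative assumptions):

```python
import secrets

K = 16  # bit-length of the toy secrets

def make_family():
    """Sample a member of a toy 'unobfuscatable' pair (after Barak et al.)."""
    alpha = secrets.randbits(K)
    beta = 1 + secrets.randbits(K - 1)   # nonzero, so beta != the default output 0

    def C(x):                 # point function: outputs beta only at alpha
        return beta if x == alpha else 0

    def D(program):           # tester: takes a *program* as input, not a string
        return program(alpha) == beta

    return C, D, alpha, beta

C, D, alpha, beta = make_family()

# Given the code of any equivalent program C' (modeled here as a callable),
# the tester extracts the hidden predicate, however C' has been rewritten:
C_rewritten = lambda x: C(x)       # stands in for an "obfuscated" version of C
assert D(C_rewritten)

# With oracle access only, polynomially many random queries almost never hit
# alpha, so the oracle is indistinguishable from the all-zero function:
hits = sum(C(secrets.randbits(K)) != 0 for _ in range(1000))
```

The point of the sketch: D cannot be run on an oracle, only on code, so any O(C) that preserves functionality leaks what the oracle hides.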
Making mix nets robust for electronic voting by randomized partial checking
 In USENIX Security Symposium
, 2002
Public-key cryptosystems from the worst-case shortest vector problem
, 2008
Abstract
Cited by 82 (17 self)
We construct public-key cryptosystems that are secure assuming the worst-case hardness of approximating the length of a shortest nonzero vector in an n-dimensional lattice to within a small poly(n) factor. Prior cryptosystems with worst-case connections were based either on the shortest vector problem for a special class of lattices (Ajtai and Dwork, STOC 1997; Regev, J. ACM 2004), or on the conjectured hardness of lattice problems for quantum algorithms (Regev, STOC 2005). Our main technical innovation is a reduction from certain variants of the shortest vector problem to corresponding versions of the “learning with errors” (LWE) problem; previously, only a quantum reduction of this kind was known. In addition, we construct new cryptosystems based on the search version of LWE, including a very natural chosen-ciphertext-secure system that has a much simpler description and a tighter underlying worst-case approximation factor than prior constructions.
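The LWE problem at the heart of this reduction is easy to state concretely. A minimal sketch of sample generation (toy parameters; real schemes use a much larger dimension and a discrete Gaussian error distribution, for which `ERR` is a crude stand-in):

```python
import random

n, q = 8, 97          # toy dimension and modulus
ERR = [-1, 0, 1]      # stand-in for the discrete Gaussian error distribution

def keygen():
    """Secret s drawn uniformly from Z_q^n."""
    return [random.randrange(q) for _ in range(n)]

def lwe_sample(s):
    """One LWE sample (a, b) with b = <a, s> + e (mod q)."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice(ERR)
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

s = keygen()
samples = [lwe_sample(s) for _ in range(20)]

# Decision-LWE: distinguish such (a, b) pairs from uniformly random pairs.
# Search-LWE: recover s from many samples. The paper reduces worst-case
# lattice problems to these tasks via a *classical* reduction.
```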
Lossy Trapdoor Functions and Their Applications
 ELECTRONIC COLLOQUIUM ON COMPUTATIONAL COMPLEXITY, REPORT NO. 80 (2007)
, 2007
Abstract
Cited by 78 (16 self)
We propose a new general primitive called lossy trapdoor functions (lossy TDFs), and realize it under a variety of different number-theoretic assumptions, including the hardness of the decisional Diffie-Hellman (DDH) problem and the worst-case hardness of standard lattice problems. Using lossy TDFs, we develop a new approach for constructing many important cryptographic primitives, including standard trapdoor functions, CCA-secure cryptosystems, collision-resistant hash functions, and more. All of our constructions are simple, efficient, and black-box. Taken all together, these results resolve some long-standing open problems in cryptography. They give the first known (injective) trapdoor functions based on problems not directly related to integer factorization, and provide the first known CCA-secure cryptosystem based solely on worst-case lattice assumptions.
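The defining property, "lossiness", can be illustrated with linear maps over a small field: the injective branch is a full-rank matrix, the lossy branch a low-rank one whose image is tiny. A minimal sketch (the paper's actual branches come from DDH or lattices and carry a trapdoor for inverting the injective mode, omitted here):

```python
q = 5                        # toy modulus
A_inj  = [[1, 2], [3, 4]]    # invertible mod 5 (det = -2 = 3, nonzero)
A_loss = [[1, 2], [2, 4]]    # rank 1 mod 5: second row = 2 * first row

def f(A, x):
    """Evaluate the linear map x -> A x over Z_q."""
    return tuple(sum(A[i][j] * x[j] for j in range(2)) % q for i in range(2))

domain = [(x0, x1) for x0 in range(q) for x1 in range(q)]
img_inj  = {f(A_inj, x)  for x in domain}
img_loss = {f(A_loss, x) for x in domain}

# Injective mode preserves all information; lossy mode collapses the
# 25-element domain onto only 5 images, destroying most of the input.
print(len(img_inj), len(img_loss))   # 25 5
```

The two modes must also be computationally indistinguishable from their public descriptions, which is where the number-theoretic assumptions come in.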
Direct Chosen Ciphertext Security from Identity-Based Techniques
 In ACM Conference on Computer and Communications Security
, 2005
Abstract
Cited by 70 (7 self)
We describe a new encryption technique that is secure in the standard model against adaptive chosen-ciphertext (CCA2) attacks. We base our method on two very efficient Identity-Based Encryption (IBE) schemes without random oracles, due to Boneh and Boyen, and to Waters.
Efficient non-interactive proof systems for bilinear groups
 In EUROCRYPT 2008, volume 4965 of LNCS
, 2008
Abstract
Cited by 70 (5 self)
Non-interactive zero-knowledge proofs and non-interactive witness-indistinguishable proofs have played a significant role in the theory of cryptography. However, lack of efficiency has prevented them from being used in practice. One of the roots of this inefficiency is that non-interactive zero-knowledge proofs have been constructed for general NP-complete languages such as Circuit Satisfiability, causing an expensive blowup in the size of the statement when reducing it to a circuit. The contribution of this paper is a general methodology for constructing very simple and efficient non-interactive zero-knowledge proofs and non-interactive witness-indistinguishable proofs that work directly for groups with a bilinear map, without needing a reduction to Circuit Satisfiability. Groups with bilinear maps have enjoyed tremendous success in the field of cryptography in recent years and have been used to construct a plethora of protocols. This paper provides non-interactive witness-indistinguishable proofs and non-interactive zero-knowledge proofs that can be used in connection with these protocols. Our goal is to spread the use of non-interactive cryptographic proofs from mainly theoretical purposes to the large class of practical cryptographic protocols based on bilinear groups.
Non-Interactive CryptoComputing for NC1
 In 40th Annual Symposium on Foundations of Computer Science
, 1999
Abstract
Cited by 70 (0 self)
The area of "computing with encrypted data" has been studied by numerous authors in the past twenty years, since it is fundamental to understanding the properties of encryption and it has many practical applications. The related fundamental area of "secure function evaluation" has been studied since the mid-1980s. In its basic two-party case, two parties (Alice and Bob) evaluate a known circuit over private inputs (or a private input and a private circuit). Much attention has been paid to the important issue of minimizing rounds of computation in this model, namely the number of communication rounds in which Alice and Bob must engage to evaluate a circuit on encrypted data securely. Round reduction in these areas has been recognized as an open problem and has remained open for a number of years. In this paper we give a one-round, and thus round-optimal, protocol for secure evaluation of circuits, which runs in polynomial time for NC1 circuits.
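The one-round pattern, "send a ciphertext, get back a ciphertext of the result", is easiest to see with an additively homomorphic scheme. A minimal sketch using textbook Paillier with deliberately tiny, insecure parameters (this is a generic illustration of computing on encrypted data, not the specific NC1 protocol of the paper):

```python
from math import gcd
import random

# Textbook Paillier with toy parameters (illustration only, not secure)
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                            # valid because g = n + 1

def enc(m):
    """Encrypt m in Z_n: c = (1+n)^m * r^n mod n^2."""
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def dec(c):
    """Decrypt: L(c^lam mod n^2) * mu mod n, where L(u) = (u-1)/n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Bob evaluates the affine map x -> 3x + 4 on Alice's ciphertext without
# ever seeing x, in a single message each way:
c = enc(5)
c_affine = pow(c, 3, n2) * enc(4) % n2   # homomorphically Enc(3*5 + 4)
print(dec(c_affine))                     # 19
```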
Constant-Round Coin-Tossing with a Man in the Middle or Realizing the Shared Random String Model
 In 43rd FOCS
, 2002
Abstract
Cited by 69 (4 self)
We construct the first constant-round non-malleable commitment scheme and the first constant-round non-malleable zero-knowledge argument system, as defined by Dolev, Dwork and Naor. Previous constructions either used a non-constant number of rounds, or were only secure under stronger setup assumptions. An example of such an assumption is the shared random string model, where we assume all parties have access to a reference string that was chosen uniformly at random by a trusted dealer. We obtain these results by defining an adequate notion of non-malleable coin-tossing, and presenting a constant-round protocol that satisfies it. This protocol allows us to transform protocols that are non-malleable in (a modified notion of) the shared random string model into protocols that are non-malleable in the plain model (without any trusted dealer or setup assumptions). Observing that known constructions of non-interactive non-malleable zero-knowledge argument systems in the shared random string model are in fact non-malleable in the modified model, and combining them with our coin-tossing protocol, we obtain the results mentioned above. The techniques we use are different from those used in previous constructions of non-malleable protocols. In particular, our protocol uses diagonalization and a non-black-box proof of security (in a sense similar to Barak’s zero-knowledge argument).
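The baseline object being hardened here is the classic commit-then-reveal coin toss. A minimal sketch of that stand-alone protocol (hash-based commitments as an assumption; the paper's actual contribution, resisting a man in the middle who relays between two such sessions, requires far more machinery than is shown):

```python
import hashlib
import secrets

def commit(bit):
    """Commit to a bit with a random opening value."""
    r = secrets.token_bytes(16)
    digest = hashlib.sha256(bytes([bit]) + r).hexdigest()
    return digest, r

def verify(digest, bit, r):
    return hashlib.sha256(bytes([bit]) + r).hexdigest() == digest

# Blum-style coin toss between A and B:
a_bit = secrets.randbits(1)
com, opening = commit(a_bit)        # 1. A -> B: commitment to A's bit
b_bit = secrets.randbits(1)         # 2. B -> A: B's bit, in the clear
assert verify(com, a_bit, opening)  # 3. A -> B: opening; B checks it
coin = a_bit ^ b_bit                # shared unbiased coin (if both follow it)
```

A man in the middle running two copies of this protocol can correlate the two outcomes; ruling that out is exactly the non-malleability the paper constructs.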
Efficient Mutual Data Authentication Using Manually Authenticated Strings. Cryptology ePrint Archive, Report 2005/424
, 2005
Abstract
Cited by 65 (7 self)
Solutions for an easy and secure setup of a wireless connection between two devices are urgently needed for WLAN, Wireless USB, Bluetooth and similar standards for short-range wireless communication. All such key exchange protocols employ data authentication as an unavoidable subtask. As a solution, we propose an asymptotically optimal protocol family for data authentication that uses short, manually authenticated, out-of-band messages. Compared to previous articles by Vaudenay and Pasini, the results of this paper are more general and based on weaker security assumptions. In addition to providing security proofs for our protocols, we also focus on implementation details and propose practically secure and efficient sub-primitives for applications.
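The user-facing step in this class of protocols is comparing a short authentication string (SAS) derived from the in-band exchange. A minimal sketch of the derivation (illustrative only; the real protocols additionally *commit* to the nonces before revealing them, so an attacker cannot grind for a colliding transcript, and `sas` and its inputs are assumed names):

```python
import hashlib
import secrets

def sas(transcript, nonce_a, nonce_b, digits=6):
    """Short authentication string both devices display for manual comparison."""
    h = hashlib.sha256(nonce_a + nonce_b + transcript).digest()
    return int.from_bytes(h[:4], "big") % 10**digits

# Devices exchange key-agreement data over the attacker-controlled radio
# channel, then each derives a short decimal string from that transcript
# plus fresh nonces; the user compares the two displays out-of-band.
transcript = b"pkA|pkB|session-data"
na, nb = secrets.token_bytes(16), secrets.token_bytes(16)
print(f"{sas(transcript, na, nb):06d}")
```

If an attacker substituted either public key, the two devices hold different transcripts and their strings match only with probability about 10^-6.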
Reliable MIX Cascade Networks through Reputation
 Financial Cryptography. Springer-Verlag, LNCS 2357
, 2002
Abstract
Cited by 58 (17 self)
We describe a MIX cascade protocol and a reputation system that together increase the reliability of a network of MIX cascades. In our protocol, MIX nodes periodically generate a communal random seed that, along with their reputations, determines the cascade configuration.
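The "communal seed determines the configuration" step can be sketched concretely. A hypothetical illustration, not the paper's protocol (which would also commit to the random values before they are revealed, so no single node can bias the seed; `cascade_from_seed` and the 0.5 reputation cutoff are invented for the example):

```python
import hashlib

def cascade_from_seed(reveals, reputations, length=3):
    """Derive a communal seed from all nodes' revealed randomness, then pick
    the cascade deterministically among sufficiently reputable nodes."""
    seed = hashlib.sha256(b"".join(sorted(reveals.values()))).digest()
    eligible = [node for node, rep in reputations.items() if rep >= 0.5]
    # Seed-keyed hash gives every node the same pseudorandom ordering:
    ranked = sorted(eligible,
                    key=lambda node: hashlib.sha256(seed + node.encode()).digest())
    return ranked[:length]

reveals = {"n1": b"aa", "n2": b"bb", "n3": b"cc", "n4": b"dd"}
reputations = {"n1": 0.9, "n2": 0.7, "n3": 0.2, "n4": 0.8}
print(cascade_from_seed(reveals, reputations))
```

Because every honest node computes the same seed from the same reveals, all of them agree on the cascade without further coordination.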