Results 1–10 of 334
On the (im)possibility of obfuscating programs
 Lecture Notes in Computer Science
, 2001
Abstract

Cited by 341 (24 self)
Informally, an obfuscator O is an (efficient, probabilistic) “compiler” that takes as input a program (or circuit) P and produces a new program O(P) that has the same functionality as P yet is “unintelligible” in some sense. Obfuscators, if they exist, would have a wide variety of cryptographic and complexity-theoretic applications, ranging from software protection to homomorphic encryption to complexity-theoretic analogues of Rice’s theorem. Most of these applications are based on an interpretation of the “unintelligibility” condition in obfuscation as meaning that O(P) is a “virtual black box,” in the sense that anything one can efficiently compute given O(P), one could also efficiently compute given oracle access to P. In this work, we initiate a theoretical investigation of obfuscation. Our main result is that, even under very weak formalizations of the above intuition, obfuscation is impossible. We prove this by constructing a family of efficient programs P that are unobfuscatable in the sense that (a) given any efficient program P′ that computes the same function as a program P ∈ P, the “source code” P can be efficiently reconstructed, yet (b) given oracle access to a (randomly selected) program P ∈ P, no efficient algorithm can reconstruct P (or even distinguish a certain bit in the code from random) except with negligible probability. We extend our impossibility result in a number of ways, including even obfuscators that (a) are not necessarily computable in polynomial time, (b) only approximately preserve the functionality, and (c) only need to work for very restricted models of computation (TC⁰). We also rule out several potential applications of obfuscators, by constructing “unobfuscatable” signature schemes, encryption schemes, and pseudorandom function families.
An Information-Theoretic Model for Steganography
, 1998
Abstract

Cited by 268 (3 self)
An information-theoretic model for steganography with passive adversaries is proposed. The adversary's task of distinguishing between an innocent cover message C and a modified message S containing a secret part is interpreted as a hypothesis testing problem. The security of a steganographic system is quantified in terms of the relative entropy (or discrimination) between P_C and P_S. Several secure steganographic schemes are presented in this model; one of them is a universal information hiding scheme based on universal data compression techniques that requires no knowledge of the covertext statistics.
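The security measure the abstract describes is easy to illustrate. The following is a minimal sketch (not taken from the paper): it computes the relative entropy D(P_C ‖ P_S) between a covertext distribution P_C and a stegotext distribution P_S, both represented here as hypothetical toy distributions over a two-symbol alphabet.

```python
import math

def relative_entropy(p_c, p_s):
    """D(P_C || P_S) in bits; distributions given as dicts over the same support."""
    return sum(p * math.log2(p / p_s[x]) for x, p in p_c.items() if p > 0)

# Toy covertext distribution and a stegotext distribution that slightly skews it.
P_C = {"a": 0.5, "b": 0.5}
P_S = {"a": 0.6, "b": 0.4}

d = relative_entropy(P_C, P_S)
# In this model a stegosystem is called epsilon-secure when D(P_C || P_S) <= epsilon,
# and perfectly secure when the divergence is zero (S and C are indistinguishable).
```

A hypothesis-testing adversary's distinguishing power is bounded in terms of this quantity, which is why a divergence of zero yields perfect security.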
Sequences of Games: A Tool for Taming Complexity in Security Proofs
, 2004
Abstract

Cited by 169 (0 self)
This paper is a brief tutorial on a technique for structuring security proofs as sequences of games.
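The bookkeeping behind such a game-hopping proof can be sketched in a few lines. This is a hypothetical illustration, not code from the tutorial: the adversary's advantage in distinguishing the first game from the last is bounded by the sum of the per-transition gaps, via the triangle inequality.

```python
def total_advantage(hop_bounds):
    """Upper-bound |Pr[W_0] - Pr[W_n]| given bounds on each |Pr[W_i] - Pr[W_{i+1}]|.

    Each entry bounds one game transition, e.g. a PRF-to-random-function swap,
    a failure-event bound, or a purely conceptual "bridging" rewrite (gap 0).
    """
    return sum(hop_bounds)

# Hypothetical three-hop proof: two reduction-based hops and one bridging step.
bound = total_advantage([2 ** -40, 2 ** -60, 0.0])
```

Each individual hop is then argued separately, which is precisely the complexity-taming benefit the abstract refers to.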
Privacy Preserving Keyword Searches on Remote Encrypted Data
, 2004
Abstract

Cited by 147 (0 self)
We consider the following problem: a user wants to store his files in an encrypted form on a remote file server S.
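A minimal sketch of the kind of scheme this problem statement calls for (this is an illustrative toy, not the paper's construction): the user tags each encrypted file with keyword tags computed under a keyed PRF (HMAC stands in here), and to search reveals only the tag for one keyword, so the server can match without learning the keyword.

```python
import hashlib
import hmac
import os

key = os.urandom(32)  # the user's secret key; never sent to the server

def keyword_tag(key, keyword):
    """Deterministic keyword tag under a keyed PRF (HMAC-SHA256 as a stand-in)."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).hexdigest()

# Server-side index: file id -> set of keyword tags (the file ciphertexts
# themselves are stored separately and omitted from this sketch).
index = {
    "file1": {keyword_tag(key, w) for w in ["crypto", "search"]},
    "file2": {keyword_tag(key, w) for w in ["budget"]},
}

def search(index, trapdoor):
    """Run by the server: match the revealed tag against the stored index."""
    return [fid for fid, tags in index.items() if trapdoor in tags]

matches = search(index, keyword_tag(key, "crypto"))  # user computes the trapdoor
```

Note that deterministic tags leak search patterns across queries; real schemes in this line of work work harder to limit exactly such leakage.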
Black-Box Concurrent Zero-Knowledge Requires (Almost) Logarithmically Many Rounds
 SIAM Journal on Computing
, 2002
Abstract

Cited by 111 (9 self)
We show that any concurrent zero-knowledge protocol for a nontrivial language (i.e., for a language outside BPP), whose security is proven via black-box simulation, must use at least Ω̃(log n) rounds of interaction. This result achieves a substantial improvement over previous lower bounds, and is the first bound to rule out the possibility of constant-round concurrent zero-knowledge when proven via black-box simulation. Furthermore, the bound is polynomially related to the number of rounds in the best known concurrent zero-knowledge protocol for languages in NP (which is established via black-box simulation).
SIGMA: the ‘SIGn-and-MAc’ Approach to Authenticated Diffie-Hellman and its Use in the IKE Protocols
 Full version: http://www.ee.technion.ac.il/~hugo/sigma.html
Abstract

Cited by 100 (8 self)
We present the SIGMA family of key-exchange protocols and the “SIGn-and-MAc” approach to authenticated Diffie-Hellman underlying its design. The SIGMA protocols provide perfect forward secrecy via a Diffie-Hellman exchange authenticated with digital signatures, and are specifically designed to ensure sound cryptographic key exchange while providing a variety of features and trade-offs required in practical scenarios (such as optional identity protection and a reduced number of protocol rounds). As a consequence, the SIGMA protocols are very well suited for use in actual applications and for standardized key exchange. In particular, SIGMA serves as the cryptographic basis for the signature-based modes of the standardized Internet Key Exchange (IKE) protocol (versions 1 and 2). This paper describes the design rationale behind the SIGMA approach and protocols, and points out many subtleties surrounding the design of secure key-exchange protocols in general, and identity-protecting protocols in particular. We motivate the design of SIGMA by comparing it to other protocols, most notably the STS protocol and its variants. In particular, it is shown how SIGMA solves some of the security shortcomings found in previous protocols.
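The sign-and-MAC idea can be sketched schematically. This is a toy illustration of the message flow only, under loud assumptions: a tiny demo Diffie-Hellman group (real deployments use a standard group), and an HMAC placeholder where the protocol uses a genuine digital signature. The essential point it shows is that each party signs the DH values and MACs its own identity under a key derived from g^xy.

```python
import hashlib
import hmac
import secrets

p, g = 2 ** 61 - 1, 2  # demo-only group parameters, NOT a secure DH group

def sign(signing_key, data):
    # Placeholder standing in for a digital signature (the real protocol
    # uses e.g. RSA or DSA signatures, verifiable with a public key).
    return hmac.new(signing_key, data, hashlib.sha256).digest()

# Message 1, A -> B: g^x
x = secrets.randbelow(p - 2) + 1
gx = pow(g, x, p)

# Message 2, B -> A: g^y, SIG_B(g^x, g^y), MAC_{Km}("B")
y = secrets.randbelow(p - 2) + 1
gy = pow(g, y, p)
shared_b = pow(gx, y, p)                                   # B computes g^xy
km_b = hashlib.sha256(b"mac" + shared_b.to_bytes(16, "big")).digest()
sig_b = sign(b"B-signing-key", gx.to_bytes(16, "big") + gy.to_bytes(16, "big"))
mac_b = hmac.new(km_b, b"B", hashlib.sha256).digest()      # binds B's identity

# Message 3, A -> B: SIG_A(g^y, g^x), MAC_{Km}("A")  (mirror of message 2)
shared_a = pow(gy, x, p)                                   # A computes g^xy
km_a = hashlib.sha256(b"mac" + shared_a.to_bytes(16, "big")).digest()
```

Because identities travel only inside the MACs rather than inside the signatures, a party's identity need not be revealed before the DH exchange, which is how the design accommodates the identity-protection feature mentioned above.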
A Framework for Password-Based Authenticated Key Exchange
 in Cryptology — Eurocrypt 2003, LNCS
, 2003
Abstract

Cited by 81 (2 self)
In this paper we present a general framework for password-based authenticated key exchange protocols, in the common reference string model. Our protocol is actually an abstraction of the key exchange protocol of Katz et al. and is based on the recently introduced notion of smooth projective hashing by Cramer and Shoup. We gain a number of benefits from this abstraction. First, we obtain a modular protocol that can be described using just three high-level cryptographic tools. This allows a simple and intuitive understanding of its security.
A leakage-resilient mode of operation
 In EUROCRYPT
, 2009
Abstract

Cited by 79 (5 self)
A weak pseudorandom function (wPRF) is a pseudorandom function with a relaxed security requirement, where one only requires the output to be pseudorandom when queried on random (and not adversarially chosen) inputs. We show that, unlike standard PRFs, wPRFs are secure against memory attacks, that is, they remain secure even if a bounded amount of information about the secret key is leaked to the adversary. As an application of this result we propose a simple mode of operation which – when instantiated with any wPRF – gives a leakage-resilient stream cipher. Such a cipher is secure against any side-channel attack, as long as the amount of information leaked per round is bounded, but can overall be arbitrarily large. This construction is simpler than the only previous one (Dziembowski-Pietrzak, FOCS ’08) as it uses only a single primitive (a wPRF) in a straightforward manner.
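The shape of such a mode of operation can be sketched as follows. This is a simplified illustration under stated assumptions, not the paper's construction: HMAC stands in for the wPRF, the per-round inputs model public randomness, and two keys are used in alternation so that each round touches only one of them (the intuition behind bounding per-round leakage).

```python
import hashlib
import hmac
import os

def wprf(key, x):
    """Stand-in weak PRF: HMAC-SHA256, 32-byte output."""
    return hmac.new(key, x, hashlib.sha256).digest()

def stream(key0, key1, public_randomness, rounds):
    """Toy leakage-resilient-style stream cipher sketch.

    Each round evaluates the wPRF under the currently active key on a fresh
    public random input; half of the output refreshes that key, the other
    half is the keystream block for the round.
    """
    keys = [key0, key1]
    out = []
    for i in range(rounds):
        block = wprf(keys[i % 2], public_randomness[i])
        keys[i % 2], x_i = block[:16], block[16:]  # key update | keystream
        out.append(x_i)
    return out

keystream = stream(os.urandom(16), os.urandom(16),
                   [os.urandom(16) for _ in range(4)], 4)
```

Because the key not in use during a round is untouched, information leaked about the active key in one round tells the adversary nothing new about the other half of the evolving state, which is the flavor of argument the abstract summarizes.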
A General Composition Theorem for Secure Reactive Systems
 In TCC 2004
, 2004
Abstract

Cited by 78 (8 self)
We consider compositional properties of reactive systems that are secure in a cryptographic sense. We follow the well-known simulatability approach of modern cryptography, i.e., the specification is an ideal system and a real system should in some sense simulate this ideal one. We show that if a system consists of a polynomial number of arbitrary ideal subsystems such that each of them has a secure implementation in the sense of black-box simulatability, then one can securely replace all ideal subsystems with their respective secure counterparts without destroying the black-box simulatability relation. We further prove our theorem for universal simulatability by showing that black-box simulatability implies universal simulatability under reasonable assumptions. We show all our results with concrete security.