Results 1 - 6 of 6
Zero knowledge in the random oracle model, revisited
In: Advances in Cryptology — Asiacrypt 2009, LNCS, 2009
Abstract

Cited by 4 (0 self)
We revisit previous formulations of zero knowledge in the random oracle model due to Bellare and Rogaway (CCS ’93) and Pass (Crypto ’03), and present a hierarchy for zero knowledge that includes both of these formulations. The hierarchy relates to the programmability of the random oracle, previously studied by Nielsen (Crypto ’02).
– We establish a subtle separation between the Bellare-Rogaway formulation and a weaker formulation, which yields a finer distinction than the separation in Nielsen’s work.
– We show that zero knowledge according to each of these formulations is not preserved under sequential composition. We introduce stronger definitions wherein the adversary may receive auxiliary input that depends on the random oracle (as in Unruh (Crypto ’07)) and establish closure under sequential composition for these definitions. We also present round-optimal protocols for NP satisfying the stronger requirements.
– Motivated by our study of zero knowledge, we introduce a new definition of proof of knowledge in the random oracle model that accounts for oracle-dependent auxiliary input. We show that two rounds of interaction are necessary and sufficient to achieve zero-knowledge proofs of knowledge according to this new definition, whereas one round of interaction is sufficient in previous definitions.
– Extending our work on zero knowledge, we present a hierarchy for circuit obfuscation in the random oracle model, the weakest notion being that achieved in the work of Lynn, Prabhakaran and Sahai (Eurocrypt ’04). We show that the stronger notions capture precisely the class of circuits that is efficiently and exactly learnable under membership queries.
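The "programmability" this hierarchy revolves around can be illustrated with a toy, deliberately insecure sketch (the tiny group, parameter choices, and function names below are hypothetical, chosen only for illustration): a Schnorr-style prover made non-interactive via Fiat-Shamir queries the oracle honestly, while a simulator with no witness picks its response first and then *programs* the oracle's answer so that verification still passes.

```python
import secrets

p, g = 467, 2                      # toy group parameters (illustrative only)
x = secrets.randbelow(p - 1)       # the witness
h = pow(g, x, p)                   # public statement: h = g^x mod p

oracle = {}                        # the "random oracle" as a lazily sampled table
def H(a, program=None):
    if program is not None:        # programming: fix H's answer at point a
        oracle[a] = program
    if a not in oracle:
        oracle[a] = secrets.randbelow(p - 1)
    return oracle[a]

def prove(x):                      # honest Fiat-Shamir/Schnorr prover
    r = secrets.randbelow(p - 1)
    a = pow(g, r, p)               # commitment
    c = H(a)                       # challenge from the oracle
    z = (r + c * x) % (p - 1)      # response
    return a, z

def simulate():                    # simulator: no witness, programs the oracle
    z = secrets.randbelow(p - 1)   # choose the response first
    c = secrets.randbelow(p - 1)   # choose the challenge too
    a = (pow(g, z, p) * pow(pow(h, c, p), -1, p)) % p  # solve for the commitment
    H(a, program=c)                # set H(a) := c "after the fact"
    return a, z

def verify(a, z):
    return pow(g, z, p) == (a * pow(h, H(a), p)) % p

assert verify(*prove(x))           # the real proof verifies
assert verify(*simulate())         # the simulated proof verifies too
```

A non-programmable oracle would refuse the `H(a, program=c)` step, which is exactly the kind of distinction the hierarchy in the abstract formalizes. (A real simulator must also handle the case where `a` was queried before programming; the toy ignores collisions.)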
Security of Encryption Schemes in Weakened Random Oracle Models
2010
Abstract
Liskov proposed several weakened versions of the random oracle model, called weakened random oracle models (WROMs), to capture the vulnerability of ideal compression functions, which are expected to have the standard security properties of hash functions, i.e., collision resistance, second-preimage resistance, and one-wayness. The WROMs offer additional oracles that break these properties of the random oracle. In this paper, we investigate whether public-key encryption schemes in the random oracle model essentially require the standard security of hash functions, using the WROMs. In particular, we deal with four WROMs associated with the standard security properties of hash functions: the standard, collision-tractable, second-preimage-tractable, and first-preimage-tractable ones (ROM, CT-ROM, SPT-ROM, and FPT-ROM, respectively), introduced by Numayama et al. for digital signature schemes in the WROMs. We obtain the following results: (1) OAEP is secure in all four models. (2) The encryption schemes obtained by the Fujisaki-Okamoto conversion (FO) are secure in the SPT-ROM. However, some encryption schemes with FO are insecure in the FPT-ROM. (3) We consider two artificial variants, wFO and dFO, of FO to separate the WROMs in the context of encryption schemes. The encryption schemes with wFO (respectively, dFO) are secure in …
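The extra oracles the WROMs provide can be sketched in miniature (this is my own toy illustration, not the paper's formal model: the oracle's range is shrunk to 8 bits so collisions are abundant, and the names are hypothetical). A CT-ROM-style collision oracle simply hands out a colliding pair for the ideal hash:

```python
import secrets

table = {}                  # lazily sampled "ideal" hash H with an 8-bit range
def H(m):
    if m not in table:
        table[m] = secrets.randbelow(256)
    return table[m]

def collision_oracle():
    # CT-ROM-style helper: return some pair m != m' with H(m) = H(m').
    # A toy realization: query fresh inputs until two outputs collide
    # (guaranteed within 257 queries by the pigeonhole principle).
    seen = {}               # output value -> first input that produced it
    i = 0
    while True:
        m = f"msg-{i}"
        y = H(m)
        if y in seen:
            return seen[y], m
        seen[y] = m
        i += 1

m1, m2 = collision_oracle()
assert m1 != m2 and H(m1) == H(m2)   # the pair breaks collision resistance
```

The SPT-ROM and FPT-ROM oracles are analogous, except they take a target input or output and return a second preimage or a first preimage for it.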
Counterexamples to Hardness Amplification Beyond Negligible
2012
Abstract
If we have a problem that is mildly hard, can we create a problem that is significantly harder? A natural approach to hardness amplification is the “direct product”: instead of asking an attacker to solve a single instance of a problem, we ask the attacker to solve several independently generated ones. Interestingly, proving that the direct product amplifies hardness is often highly non-trivial, and in some cases may be false. For example, it is known that the direct product (i.e., “parallel repetition”) of general interactive games may not amplify hardness at all. On the other hand, positive results show that the direct product does amplify hardness for many basic primitives such as one-way functions/relations, weakly verifiable puzzles, and signatures. Even when positive direct product theorems are shown to hold for some primitive, the parameters are surprisingly weaker than what we may have expected. For example, if we start with a weak one-way function that no poly-time attacker can break with probability > 1/2, then the direct product provably amplifies hardness to some negligible probability. Naturally, we would expect that we can amplify hardness exponentially, all the way to 2^{-n} probability, or at least to some fixed/known negligible function such as n^{-log n} in the security parameter n, just by taking sufficiently many instances of the weak primitive. Although it is known that such parameters cannot be proven via black-box reductions, they may seem like reasonable conjectures, and, to the best of our knowledge, are widely believed to hold. In fact, a conjecture along these lines was introduced in a survey of Goldreich, Nisan and Wigderson (ECCC ’95). In this work, we show that such conjectures are false by providing simple but surprising counterexamples. In particular, we construct weakly secure signatures and one-way functions, for which standard hardness amplification results are known to hold, but for which hardness does not amplify beyond just negligible. That is, for any negligible function ε(n), we instantiate these primitives so that the direct product can always be broken with probability ε(n), no matter how many copies we take.
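The direct-product construction itself is simple to write down (a minimal sketch; the stand-in function `f` below is my own hypothetical "weak" one-way function, not one of the paper's counterexamples): to invert F, an attacker must invert every coordinate at once, which is the intuition behind amplification.

```python
import hashlib
import os

def f(x: bytes) -> bytes:
    # Stand-in for a weak one-way function: a truncated hash, so that
    # inverting any single instance is "mildly" hard at best.
    return hashlib.sha256(x).digest()[:4]

def direct_product(xs):
    # F(x_1, ..., x_k) = (f(x_1), ..., f(x_k)): the k-wise direct product.
    return tuple(f(x) for x in xs)

k = 8
xs = [os.urandom(16) for _ in range(k)]   # k independent instances
ys = direct_product(xs)
assert len(ys) == k and all(len(y) == 4 for y in ys)
```

The abstract's point is precisely that this intuition has limits: for suitably crafted weak primitives, no choice of k pushes the attacker's success probability below a prescribed negligible ε(n).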
On the Power of Nonuniformity in Proofs of Security
Abstract
Non-uniform proofs of security are common in cryptography, but traditional black-box separations consider only uniform security reductions. In this paper, we initiate a formal study of the power and limits of non-uniform black-box proofs of security. We first show that a known protocol (based on the existence of one-way permutations) has a non-uniform proof of security, but cannot be proven secure through a uniform security reduction. Therefore, non-uniform proofs of security are indeed provably more powerful than uniform ones. We complement this result by showing that many known black-box separations in the uniform regime do extend to the non-uniform regime. We prove our results by providing general techniques for extending certain types of black-box separations to handle non-uniformity.
Randomness-Dependent Message Security
2012
Abstract
Traditional definitions of the security of encryption schemes assume that the messages encrypted are chosen independently of the randomness used by the encryption scheme. Recent works, implicitly by Myers and Shelat (FOCS ’09) and Bellare et al. (AsiaCrypt ’09), and explicitly by Hemmenway and Ostrovsky (ECCC ’10), consider randomness-dependent message (RDM) security of encryption schemes, where the message to be encrypted may be selected as a function (referred to as the RDM function) of the randomness used to encrypt this particular message, or other messages, but in a circular way. We carry out a systematic study of this notion. Our main results demonstrate the following:
• Full RDM security, where the RDM function may be an arbitrary polynomial-size circuit, is not possible.
• Any secure encryption scheme can be slightly modified, by just performing some preprocessing on the randomness, to satisfy bounded-RDM security, where the RDM function is restricted to be a circuit of a priori bounded polynomial size. The scheme, however, …
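Why randomness-dependent messages are dangerous can be seen in a toy example of my own (not from the paper): a one-time pad is perfectly secure for messages chosen independently of the pad, yet the trivial RDM function m = r makes every ciphertext the all-zero string, which any distinguisher notices immediately.

```python
import os

def otp_encrypt(m: bytes, r: bytes) -> bytes:
    # One-time-pad-style encryption using the encryption randomness r as the pad.
    return bytes(a ^ b for a, b in zip(m, r))

r = os.urandom(16)

# RDM adversary: chooses the message as a function of the randomness (m = r).
c_rdm = otp_encrypt(r, r)
assert c_rdm == bytes(16)      # always all zeros: trivially distinguishable

# Ordinary, randomness-independent message: the ciphertext is uniform.
c_ok = otp_encrypt(b"\x00" * 16, r)
assert c_ok == r
```

This is the flavor of argument behind the first bullet; the paper's impossibility result handles arbitrary polynomial-size RDM functions against arbitrary schemes, which is far more general than this sketch.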
Revised version submitted to IEEE Trans. Computers.
Milder Definitions of Computational Approximability: The Case of Zero-Knowledge Protocols
Abstract
Many cryptographic primitives, such as pseudorandom generators, encryption schemes, and zero-knowledge proofs, center around the notion of approximability. For instance, a pseudorandom generator is an expanding function which, on a random seed, approximates the uniform distribution. In this paper, we classify different notions of computational approximability in the literature, and provide several new types of approximability. More specifically, we identify two hierarchies of computational approximability. The first hierarchy ranges from strong approximability, which is the most common type in cryptography, to weak approximability, as defined by Dwork et al. (FOCS 1999); we define semi-strong, mild, and semi-weak types as well. The second hierarchy, termed K-approximability, is inspired by the ε-approximability of Dwork et al. (STOC 1998). K-approximability has the same levels as the first hierarchy, ranging from strong K-approximability to weak K-approximability. While both hierarchies are general and can be used to define various cryptographic constructs with different levels of security, they are best illustrated in the context of zero-knowledge protocols. Assuming the existence of (trapdoor) one-way permutations, and exploiting the random oracle model, we present a separation between two definitions of zero knowledge: one based on strong K-approximability, and the other based on semi-strong K-approximability. In particular, we present a protocol which is zero knowledge only …
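The sense in which a pseudorandom generator "approximates" the uniform distribution can be made concrete with a toy computation (my own illustration, with a made-up 2-bit-to-3-bit "generator"): an expanding function can never be statistically close to uniform, which is exactly why the definitions above resort to *computational* approximability, i.e., indistinguishability by efficient distinguishers rather than small statistical distance.

```python
from collections import Counter

def G(seed: int) -> int:
    # A made-up expanding map from 2-bit seeds to 3-bit outputs.
    return ((seed * 3) ^ (seed << 1)) & 0b111

# Exact output distribution over all 4 seeds, versus uniform on 8 values.
outputs = Counter(G(s) for s in range(4))
uniform = 1 / 8
tv = 0.5 * sum(abs(outputs.get(y, 0) / 4 - uniform) for y in range(8))

# At most 4 of the 8 outputs are reachable, so the statistical
# (total-variation) distance from uniform is large and nonzero.
assert tv > 0
```

Here `tv` comes out to 0.5; no efficient-distinguisher caveat helps a statistical test, and the hierarchy of approximability notions in the abstract is about exactly which computational relaxations of this distance one adopts.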