Results 1–10 of 25
Careful with composition: Limitations of the indifferentiability framework
 EUROCRYPT 2011, volume 6632 of LNCS
, 2011
Abstract

Cited by 11 (1 self)
We exhibit a hash-based storage auditing scheme which is provably secure in the random-oracle model (ROM), but easily broken when one instead uses typical indifferentiable hash constructions. This contradicts the widely accepted belief that the indifferentiability composition theorem applies to any cryptosystem. We characterize the uncovered limitation of the indifferentiability framework by showing that the formalizations used thus far implicitly exclude security notions captured by experiments that have multiple, disjoint adversarial stages. Examples include deterministic public-key encryption (PKE), password-based cryptography, hash function non-malleability, key-dependent message security, and more. We formalize a stronger notion, reset indifferentiability, that enables an indifferentiability-style composition theorem covering such multi-stage security notions, but then show that practical hash constructions cannot be reset indifferentiable. We discuss how these limitations also affect the universal composability framework. We finish by showing the chosen-distribution attack security (which requires a multi-stage game) of some important public-key encryption schemes built using a hash construction paradigm introduced by Dodis, Ristenpart, and Shrimpton.
MessageLocked Encryption and Secure Deduplication
, 2012
Abstract

Cited by 5 (1 self)
We formalize a new cryptographic primitive, Message-Locked Encryption (MLE), where the key under which encryption and decryption are performed is itself derived from the message. MLE provides a way to achieve secure deduplication (space-efficient secure outsourced storage), a goal currently targeted by numerous cloud-storage providers. We provide definitions both for privacy and for a form of integrity that we call tag consistency. Based on this foundation, we make both practical and theoretical contributions. On the practical side, we provide ROM security analyses of a natural family of MLE schemes that includes deployed schemes. On the theoretical side the challenge is standard-model solutions, and we make connections with deterministic encryption, hash functions secure on correlated inputs, and the sample-then-extract paradigm to deliver schemes under different assumptions and for different classes of message sources. Our work shows that MLE is a primitive of both practical and theoretical interest.
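The core MLE idea (key derived from the message, so identical plaintexts deduplicate) can be illustrated with a minimal sketch. This is the folklore "convergent encryption" pattern, not any of the paper's analyzed schemes; the SHA-256-based keystream is a purely illustrative stand-in for a real cipher and offers no real-world security.

```python
import hashlib

def _stream(key: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256(key || counter). Illustration only, not a real cipher.
    blocks = (hashlib.sha256(key + i.to_bytes(8, "big")).digest()
              for i in range((length + 31) // 32))
    return b"".join(blocks)[:length]

def mle_encrypt(message: bytes):
    key = hashlib.sha256(message).digest()          # K <- H(M): key from the message
    ct = bytes(m ^ s for m, s in zip(message, _stream(key, len(message))))
    tag = hashlib.sha256(ct).digest()               # T <- H(C): enables tag consistency
    return key, ct, tag

def mle_decrypt(key: bytes, ct: bytes) -> bytes:
    # XOR with the same keystream recovers the plaintext
    return bytes(c ^ s for c, s in zip(ct, _stream(key, len(ct))))
```

Because the key is a deterministic function of the message, two users uploading the same file produce the same ciphertext and tag, which is exactly what lets the server deduplicate without seeing the plaintext.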
Efficient Garbling from a FixedKey Blockcipher
, 2013
Abstract

Cited by 4 (0 self)
Abstract. We advocate schemes based on fixed-key AES as the best route to highly efficient circuit garbling. We provide such schemes making only one AES call per garbled-gate evaluation. On the theoretical side, we justify the security of these methods in the random-permutation model, where parties have access to a public random permutation. On the practical side, we provide the JustGarble system, which implements our schemes. JustGarble evaluates moderate-sized garbled circuits at an
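The basic trick behind fixed-key garbling is to build a hash from a single call to one public, fixed permutation, e.g. H(x) = π(x) ⊕ x. A minimal sketch follows; the 4-round Feistel network is an illustrative stand-in for fixed-key AES (it is a genuine permutation, but nothing here is claimed secure or taken from the paper's actual gate construction).

```python
import hashlib

def fixed_perm(block: bytes) -> bytes:
    # Stand-in for a fixed-key blockcipher: a 4-round Feistel permutation on
    # 32-byte blocks with public round functions. Illustration only, not AES.
    assert len(block) == 32
    left, right = block[:16], block[16:]
    for r in range(4):
        f = hashlib.sha256(bytes([r]) + right).digest()[:16]
        left, right = right, bytes(a ^ b for a, b in zip(left, f))
    return left + right

def gate_hash(x: bytes) -> bytes:
    # One permutation call per garbled-gate evaluation: H(x) = pi(x) XOR x.
    # The feedforward XOR makes H non-invertible even though pi is a permutation.
    return bytes(a ^ b for a, b in zip(fixed_perm(x), x))
```

Because π is fixed, an implementation can expand the AES key schedule once and amortize it over every gate in the circuit, which is the source of the speedup the abstract describes.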
Multipropertypreserving Domain Extension Using Polynomialbased Modes of Operation
 Advances in Cryptology – EUROCRYPT ’10, LNCS
Abstract

Cited by 3 (0 self)
Abstract. In this paper, we propose a new double-piped mode of operation for multi-property-preserving domain extension of MACs (message authentication codes), PRFs (pseudorandom functions) and PROs (pseudorandom oracles). Our mode of operation performs twice as fast as the original double-piped mode of operation of Lucks [15] while providing comparable security. Our construction, which uses a class of polynomial-based compression functions proposed by Stam [22, 23], makes a single call to a 3n-bit to n-bit primitive at each iteration and uses a finalization function f2 at the last iteration, producing an n-bit hash function H[f1, f2] satisfying the following properties.
1. H[f1, f2] is unforgeable up to O(2^n/n) query complexity as long as f1 and f2 are unforgeable.
2. H[f1, f2] is pseudorandom up to O(2^n/n) query complexity as long as f1 is unforgeable and f2 is pseudorandom.
3. H[f1, f2] is indifferentiable from a random oracle up to O(2^{2n/3}) query complexity as long as f1 and f2 are public random functions.
To our knowledge, our result constitutes the first time O(2^n/n) unforgeability has been achieved using only an unforgeable primitive of n-bit output length. (Yasuda showed unforgeability of O(2^{5n/6}) for Lucks’ construction assuming an unforgeable primitive, but the analysis is suboptimal; in the appendix, we show how Yasuda’s bound can be improved to O(2^n).) In related work, we strengthen Stam’s collision-resistance analysis of polynomial-based compression functions (showing that unforgeability of the primitive suffices) and discuss how to implement our mode by replacing f1 with a 2n-bit key blockcipher in Davies-Meyer mode or by replacing f1 with the cascade of two 2n-bit to n-bit compression functions.
Blockcipher Based Hashing Revisited
 Fast Software Encryption – FSE ’09
, 2009
Abstract

Cited by 3 (0 self)
Abstract. We revisit the rate-1 blockcipher-based hash functions as first studied by Preneel, Govaerts and Vandewalle (Crypto ’93) and later extensively analysed by Black, Rogaway and Shrimpton (Crypto ’02). We analyse a further generalization where any pre- and post-processing is considered. This leads to a clearer understanding of the current classification of rate-1 blockcipher-based schemes as introduced by Preneel et al. and refined by Black et al. In addition, we also gain insight into chopped, overloaded and supercharged compression functions. In the latter category we propose two compression functions based on a single call to a blockcipher whose collision resistance exceeds the birthday bound on the cipher’s blocklength.
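The classical rate-1 pattern the abstract revisits can be sketched concretely. Davies-Meyer, h' = E_m(h) ⊕ h, is one of the secure PGV compression functions, iterated Merkle-Damgård style. The keyed Feistel below is a toy stand-in for a real blockcipher, and the zero padding omits Merkle-Damgård strengthening; this is an illustration of the shape of such schemes, not any construction from the paper.

```python
import hashlib

def toy_cipher(key: bytes, block: bytes) -> bytes:
    # Stand-in blockcipher E_k: 4-round keyed Feistel on 32-byte blocks.
    # Illustration only, not a secure cipher.
    left, right = block[:16], block[16:]
    for r in range(4):
        f = hashlib.sha256(bytes([r]) + key + right).digest()[:16]
        left, right = right, bytes(a ^ b for a, b in zip(left, f))
    return left + right

def davies_meyer(h: bytes, m: bytes) -> bytes:
    # Rate-1 PGV compression (Davies-Meyer): h' = E_m(h) XOR h,
    # one blockcipher call per message block.
    return bytes(a ^ b for a, b in zip(toy_cipher(m, h), h))

def md_hash(message: bytes, iv: bytes = bytes(32)) -> bytes:
    # Merkle-Damgard iteration; naive zero padding, no length strengthening.
    h = iv
    for i in range(0, len(message), 32):
        h = davies_meyer(h, message[i:i + 32].ljust(32, b"\x00"))
    return h
```

"Rate 1" refers to one cipher call per message block; the supercharged variants in the paper trade this shape for collision resistance beyond the birthday bound on the block length.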
Some observations on indifferentiability
 In: Information Security and Privacy. Lecture Notes in Computer Science
, 2010
Abstract

Cited by 2 (0 self)
Abstract. At Crypto 2005, Coron et al. introduced a formalism to study the presence or absence of structural flaws in iterated hash functions: If one cannot differentiate a hash function using ideal primitives from a random oracle, it is considered structurally sound, while the ability to differentiate it from a random oracle indicates a structural weakness. This model was devised as a tool to see subtle real-world weaknesses while in the random oracle world. In this paper we take a practical point of view. We show, using well-known examples like NMAC and the Mix-Compress-Mix (MCM) construction, how we can prove a hash construction secure and insecure at the same time in the indifferentiability setting. These constructions do not differ in their implementation but only on an abstract level. Naturally, this gives rise to the question of what to conclude for the implemented hash function. Our results cast doubt on the notion of “indifferentiability from a random oracle” being a mandatory, practically relevant criterion (as, e.g., proposed by Knudsen [16] for the SHA-3 competition) to separate good hash structures from bad ones.
A Modular Design for Hash Functions: Towards Making the Mix-Compress-Mix Approach Practical
, 2009
Abstract

Cited by 2 (0 self)
The design of cryptographic hash functions is a very complex and failure-prone process. For this reason, this paper puts forward a completely modular and fault-tolerant approach to the construction of a full-fledged hash function from an underlying simpler hash function H and a further primitive F (such as a block cipher), with the property that collision resistance of the construction relies only on H, whereas indifferentiability from a random oracle follows from F being ideal. In particular, the failure of one of the two components must not affect the security property implied by the other component. The Mix-Compress-Mix (MCM) approach by Ristenpart and Shrimpton (ASIACRYPT 2007) envelops the hash function H between two injective mixing steps, and can be interpreted as a first attempt at such a design. However, the proposed instantiation of the mixing steps, based on block ciphers, makes the resulting hash function impractical: First, it cannot be evaluated online, and second, it produces larger hash values than H, while only inheriting the collision-resistance guarantees for the shorter output. Additionally, it relies on a trapdoor one-way permutation, which seriously compromises the use of the resulting hash function for random oracle instantiation in certain scenarios. This paper presents the first efficient modular hash function with online evaluation and short output length. The core of our approach is a pair of novel blockcipher-based designs for the mixing steps of the MCM approach which rely on significantly weaker assumptions: The first mixing step is realized without any computational assumptions (besides the underlying cipher being ideal), whereas the second mixing step requires only a one-way permutation without a trapdoor, which we prove to be the minimal assumption for the construction of injective random oracles.
Impossibility Results for Indifferentiability with Resets
Abstract

Cited by 1 (0 self)
Abstract. The indifferentiability framework of Maurer, Renner, and Holenstein (MRH) has gained immense popularity in recent years and has proved to be a powerful way to argue security of cryptosystems that enjoy proofs in the random oracle model. Recently, however, Ristenpart, Shacham, and Shrimpton (RSS) showed that the composition theorem of MRH has a more limited scope than originally thought, and that extending its scope required the introduction of reset indifferentiability, a notion which no practical domain extenders satisfy with respect to random oracles. In light of the results of RSS, we set out to rigorously tackle the specifics of indifferentiability and reset indifferentiability by viewing the notions as special cases of a more general definition. Our contributions are twofold. Firstly, we provide the necessary formalism to refine the notion of indifferentiability regarding composition. By formalizing the definition of stage-minimal games we expose new notions lying in between regular indifferentiability (MRH) and reset indifferentiability (RSS). Secondly, we answer the open problem of RSS by showing that it is impossible to build any domain extender which is reset-indifferentiable from a random oracle. This result formally confirms the intuition that reset indifferentiability is too strong a notion to be satisfied by any hash function. As a consequence we look at the weaker notion of single-reset indifferentiability, yet there as well we demonstrate that there are no “meaningful” domain extenders which satisfy this notion. Not all is lost though, as we also view indifferentiability in a more general setting and point out the possibility for different variants of indifferentiability.
Digital Signatures with Minimal Overhead
Abstract

Cited by 1 (0 self)
In a digital signature scheme with message recovery, rather than transmitting the message m and its signature σ, a single enhanced signature τ is transmitted. The verifier is able to recover m from τ and at the same time verify its authenticity. The two most important parameters of such a scheme are its security and the overhead |τ| − |m|. A simple argument shows that for any scheme with “n bits security” |τ| − |m| ≥ n, i.e., the overhead is at least the security. The best previous constructions required an overhead of 2n. In this paper we show that the n-bit lower bound can basically be matched. Concretely, we propose a new simple RSA-based digital signature scheme that, for n = 80 bits security in the random oracle model, has an overhead of ≈ 90 bits. At the core of our security analysis is an almost tight upper bound for the expected number of edges of the densest “small” subgraph of a random Cayley graph, which may be of independent interest.
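The overhead lower bound mentioned above follows from a simple counting sketch (an informal rendering of the abstract's "simple argument", reading "n bits security" as forgery probability at most 2^{-n} per attempt, not the paper's exact proof):

```latex
\begin{align*}
  o &:= |\tau| - |m| \quad \text{(the overhead)} \\
  \#\{\text{valid enhanced signatures } \tau\}
    &\ge 2^{|m|}
    \quad \text{(each message has at least one valid } \tau\text{)} \\
  \Pr[\text{uniformly random } \tau \text{ is valid}]
    &\ge \frac{2^{|m|}}{2^{|\tau|}} = 2^{-o} \\
  \text{``$n$ bits security''} \;\Rightarrow\; 2^{-o} \le 2^{-n}
    &\;\Rightarrow\; o \ge n .
\end{align*}
```

The paper's contribution is showing this bound is nearly achievable: overhead ≈ 90 bits at the 80-bit security level, versus 2n = 160 bits for prior constructions.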