Results 11–20 of 65
Indifferentiable Security Analysis of Popular Hash Functions with Prefix-Free Padding
In ASIACRYPT 2006, LNCS, 2006
Cited by 13 (1 self)
Abstract. Understanding what construction strategy has a chance to be a good hash function is extremely important nowadays. In TCC '04, Maurer et al. [13] introduced the notion of indifferentiability as a generalization of the concept of the indistinguishability of two systems. In Crypto 2005, Coron et al. [5] suggested employing indifferentiability in the generic analysis of hash functions, and began by suggesting four constructions that eliminate all known generic attacks against iterative hash functions. In this paper we continue this initial suggestion: we give formal proofs of indifferentiability, and indifferentiability attacks, for prefix-free MD hash functions (for single-block-length (SBL) constructions and also some double-block-length (DBL) constructions) in the random oracle model and in the ideal cipher model. In particular, we observe that there are sixteen PGV hash functions (with prefix-free padding) that are indifferentiable from a random oracle in the ideal cipher model.
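For intuition, the prefix-free MD iteration analysed in this paper can be sketched as follows. This is a minimal illustration, not the paper's exact construction: SHA-256 over (chaining value || block) stands in for an ideal-cipher-based PGV compression function, the IV and block size are arbitrary choices for the sketch, and length-prepending is used as the prefix-free encoding.

```python
import hashlib

BLOCK = 64  # bytes per message block (assumed block size for this sketch)

def prefix_free_pad(msg: bytes) -> list:
    """Split msg into BLOCK-byte chunks using a simple prefix-free encoding:
    prepend the message length, so no padded message is a prefix of another."""
    encoded = len(msg).to_bytes(8, "big") + msg
    if len(encoded) % BLOCK:  # zero-pad the final block
        encoded += b"\x00" * (BLOCK - len(encoded) % BLOCK)
    return [encoded[i:i + BLOCK] for i in range(0, len(encoded), BLOCK)]

def md_iterate(msg: bytes) -> bytes:
    """Plain Merkle-Damgard iteration over the prefix-free-padded blocks.
    The compression step is only a stand-in for a PGV-style function."""
    h = b"\x00" * 32  # fixed IV (arbitrary for this sketch)
    for block in prefix_free_pad(msg):
        h = hashlib.sha256(h + block).digest()
    return h
```

Length-prepending is prefix-free because two messages of different lengths already differ in the first block, and equal-length messages produce equal-length block sequences, so neither padded message can be a proper prefix of the other.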
Hash Functions in the Dedicated-Key Setting: Design Choices and MPP Transforms
In ICALP '07, volume 4596 of LNCS, 2007
Cited by 13 (1 self)
In the dedicated-key setting, one starts with a compression function f: {0,1}^k × {0,1}^{n+d} → {0,1}^n and builds a family of hash functions H_f: K × M → {0,1}^n indexed by a key space K. This is different from the more traditional design approach used to build hash functions such as MD5 or SHA-1, in which compression functions and hash functions do not have dedicated key inputs. We explore the benefits and drawbacks of building hash functions in the dedicated-key setting (as compared to the more traditional approach), highlighting several unique features of the former. Should one choose to build hash functions in the dedicated-key setting, we suggest utilizing multi-property-preserving (MPP) domain extension transforms. We analyze seven existing dedicated-key transforms with regard to the MPP goal and propose two simple …
How to Build a Hash Function from any Collision-Resistant Function
, 2007
Cited by 11 (3 self)
Recent collision-finding attacks against hash functions such as MD5 and SHA-1 motivate the use of provably collision-resistant (CR) functions in their place. Finding a collision in a provably CR function implies the ability to solve some hard problem (e.g., factoring). Unfortunately, existing provably CR functions make poor replacements for hash functions, as they fail to deliver behaviors demanded by practical use. In particular, they are easily distinguished from a random oracle. We initiate an investigation into building hash functions from provably CR functions. As a method for achieving this, we present the Mix-Compress-Mix (MCM) construction; it envelopes any provably CR function H (with suitable regularity properties) between two injective "mixing" stages. The MCM construction simultaneously enjoys (1) provable collision resistance in the standard model, and (2) indifferentiability from a monolithic random oracle when the mixing stages themselves are indifferentiable from a random oracle that observes injectivity. We instantiate our new design approach by specifying a block-cipher-based construction that …
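A toy rendering of the Mix-Compress-Mix shape, with loud caveats: the mixing stages here are trivial XOR permutations (injective, but nothing like the indifferentiable stages MCM actually requires), and SHA-256 stands in for a provably CR function. It only illustrates why enveloping a CR function between injective stages preserves collision resistance.

```python
import hashlib

def mix_in(x: bytes) -> bytes:
    # Toy injective "mix" stage: XOR each byte with a fixed mask.
    # (A real MCM instantiation needs stages indifferentiable from an
    # injective random oracle; this is only a structural placeholder.)
    return bytes(b ^ 0x5C for b in x)

def compress(x: bytes) -> bytes:
    # Stand-in for a provably collision-resistant function H.
    return hashlib.sha256(x).digest()

def mix_out(x: bytes) -> bytes:
    return bytes(b ^ 0x36 for b in x)

def mcm(msg: bytes) -> bytes:
    """Mix-Compress-Mix: because mix_in and mix_out are injective, any
    collision in mcm forces a collision in compress itself."""
    return mix_out(compress(mix_in(msg)))
```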
Careful with Composition: Limitations of the Indifferentiability Framework
In EUROCRYPT 2011, volume 6632 of LNCS, 2011
Cited by 11 (1 self)
We exhibit a hash-based storage auditing scheme which is provably secure in the random-oracle model (ROM), but easily broken when one instead uses typical indifferentiable hash constructions. This contradicts the widely accepted belief that the indifferentiability composition theorem applies to any cryptosystem. We characterize the uncovered limitation of the indifferentiability framework by showing that the formalizations used thus far implicitly exclude security notions captured by experiments that have multiple, disjoint adversarial stages. Examples include deterministic public-key encryption (PKE), password-based cryptography, hash function non-malleability, key-dependent message security, and more. We formalize a stronger notion, reset indifferentiability, that enables an indifferentiability-style composition theorem covering such multi-stage security notions, but then show that practical hash constructions cannot be reset indifferentiable. We discuss how these limitations also affect the universal composability framework. We finish by showing the chosen-distribution-attack security (which requires a multi-stage game) of some important public-key encryption schemes built using a hash-construction paradigm introduced by Dodis, Ristenpart, and Shrimpton.
How Risky is the Random-Oracle Model?
Cited by 11 (0 self)
Abstract. RSA-FDH and many other schemes secure in the Random-Oracle Model (ROM) require a hash function with output size larger than standard sizes. We show that the random-oracle instantiations proposed in the literature for such cases are weaker than a random oracle, including the proposals by Bellare and Rogaway from 1993 and 1996, and the ones implicit in the IEEE P1363 and PKCS standards: for instance, we obtain a practical preimage attack on BR93 for 1024-bit digests (with complexity less than 2^30). Next, we study the security impact of hash function defects for ROM signatures. As an extreme case, we note that any hash collision would suffice to disclose the master key in the ID-based cryptosystem by Boneh et al. from FOCS '07, and the secret key in the Rabin-Williams signature for which Bernstein proved tight security at EUROCRYPT '08. We also remark that collisions can be found as a precomputation for any instantiation of the ROM, and this violates the security definition of the scheme in the standard model. Hence, this gives an example of a natural scheme that is proven secure in the ROM but that is insecure for any instantiation by a single function. Interestingly, for both of these schemes, a slight modification can prevent these attacks, while preserving the ROM security result. We give evidence that in the case of RSA and Rabin/Rabin-Williams, an appropriate PSS padding is more robust than all other known paddings.
Amplifying Collision Resistance: A Complexity-Theoretic Treatment
In Advances in Cryptology, CRYPTO 2007, volume 4622 of LNCS
Cited by 9 (1 self)
Abstract. We initiate a complexity-theoretic treatment of hardness amplification for collision-resistant hash functions, namely the transformation of weakly collision-resistant hash functions into strongly collision-resistant ones in the standard model of computation. We measure the level of collision resistance by the maximum probability, over the choice of the key, with which an efficient adversary can find a collision. The goal is to obtain constructions with short output, short keys, a small loss in the adversarial complexity tolerated, and a good trade-off between compression ratio and computational complexity. We provide an analysis of several simple constructions, and show that many of the parameters achieved by our constructions are almost optimal in some sense.
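For intuition about the trade-offs the paper studies, one classical way to trade output length for collision resistance is the concatenation combiner: a collision in the combined function requires a simultaneous collision in both underlying keyed functions. The sketch below is illustrative only and not necessarily one of the paper's constructions; the two keyed functions are toy stand-ins built from truncated SHA-256.

```python
import hashlib

def h1(key: bytes, msg: bytes) -> bytes:
    # Stand-in for a weakly collision-resistant keyed hash function;
    # truncated, domain-separated SHA-256 is used purely for illustration.
    return hashlib.sha256(key + b"\x01" + msg).digest()[:16]

def h2(key: bytes, msg: bytes) -> bytes:
    return hashlib.sha256(key + b"\x02" + msg).digest()[:16]

def concat_combiner(k1: bytes, k2: bytes, msg: bytes) -> bytes:
    """Concatenation combiner: a collision here is simultaneously a
    collision in h1 (under k1) and in h2 (under k2), so the collision
    probability over independently chosen keys roughly multiplies --
    at the cost of doubling the output length."""
    return h1(k1, msg) + h2(k2, msg)
```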
Hardness of Distinguishing the MSB or LSB of Secret Keys in Diffie-Hellman Schemes
In ICALP, 2006
Cited by 8 (3 self)
Abstract. In this paper we introduce very simple deterministic randomness extractors for Diffie-Hellman distributions. More specifically, we show that the k most significant bits or the k least significant bits of a random element in a subgroup of Z*_p are indistinguishable from a random bit-string of the same length. This allows us to show that, under the Decisional Diffie-Hellman assumption, we can deterministically derive a uniformly random bit-string from a Diffie-Hellman exchange in the standard model. Then, we show that this can be used in key exchange or encryption schemes to avoid the leftover hash lemma and universal hash functions. Keywords: Diffie-Hellman transform, randomness extraction, least significant bits, exponential sums.
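The extractor itself is mechanically simple: take the k most significant bits of the shared group element. A toy sketch follows; the modulus, generator, and exponents are arbitrary and far too small for any real security — they only show the mechanics.

```python
# Toy illustration of extracting the k most significant bits of a
# Diffie-Hellman shared value in Z*_p (parameters chosen only to
# demonstrate the mechanics, not for security).
p = 2**127 - 1            # a prime modulus (a Mersenne prime, for the sketch)
k = 32                    # number of most-significant bits to extract

def msb_extract(x: int, p: int, k: int) -> int:
    """Return the k most significant bits of x viewed as an element
    of Z_p (i.e., relative to the bit length of p)."""
    return x >> (p.bit_length() - k)

# Derive a shared Diffie-Hellman value g^(ab) mod p, then extract bits.
g, a, b = 3, 123456789, 987654321
shared = pow(pow(g, a, p), b, p)
key_bits = msb_extract(shared, p, k)
```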
A Collision-Resistant Rate-1 Double-Block-Length Hash Function
Cited by 8 (0 self)
(On leave to Bauhaus-University Weimar, Germany.) Abstract. This paper proposes a construction for collision-resistant 2n-bit hash functions based on n-bit block ciphers with 2n-bit keys. The construction is analysed in the ideal cipher model; for n = 128, an adversary would need roughly 2^122 units of time to find a collision. The construction employs "combinatorial" hashing as an underlying building block (like the universal hashing used for cryptographic message authentication by Wegman and Carter). The construction runs at rate 1, thus improving on a similar rate-1/2 approach by Hirose (FSE 2006).
Domain Extension of Public Random Functions: Beyond the Birthday Barrier
In Advances in Cryptology, CRYPTO '07, LNCS, 2007
Cited by 7 (1 self)
Combined with the iterated constructions of Coron et al., our result leads to the first iterated construction of a hash function {0,1}* → {0,1}^n from a component function {0,1}^n → {0,1}^n that withstands all recently proposed generic attacks against iterated hash functions, such as Joux's multicollision attack, Kelsey and Schneier's second-preimage attack, and Kelsey and Kohno's herding attack. Primitives that provide some form of randomness are of central importance in cryptography, both as a primitive assumed to be given (e.g. a secret key), and as a primitive constructed from a weaker one to "behave like" a certain ideal random primitive (e.g. a random function), according to some security notion.