Results 1 – 8 of 8
The collision security of TandemDM in the ideal cipher model
Cited by 5 (1 self)
Abstract. We prove that Tandem-DM, one of the two “classical” schemes for turning a blockcipher with a 2n-bit key into a double-block-length hash function, has birthday-type collision resistance in the ideal cipher model. A collision resistance analysis for Tandem-DM achieving a similar birthday-type bound was already proposed by Fleischmann, Gorski and Lucks at FSE 2009 [3]. As we detail, however, the latter analysis is flawed, which had left the collision resistance of Tandem-DM an open problem until now.
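The Tandem-DM compression function discussed above can be sketched as follows. This is an illustrative toy: the "ideal cipher" E is modeled by a keyed SHA-256-based function (not a real, invertible blockcipher), and the parameter N is an arbitrary choice; a real instantiation would use a blockcipher with 2n-bit key and n-bit block, such as AES-256 with n = 128.

```python
import hashlib

N = 16  # bytes per half-state (n = 128 bits); a toy parameter for illustration


def E(key: bytes, block: bytes) -> bytes:
    """Stand-in for an ideal cipher with a 2n-bit key and an n-bit block.
    NOT a real blockcipher (it is not even invertible); just a keyed
    pseudorandom function built from SHA-256 for illustration."""
    return hashlib.sha256(key + block).digest()[:N]


def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def tandem_dm(G: bytes, H: bytes, M: bytes):
    """One Tandem-DM compression step: (G, H, M) -> (G', H').
    Two cipher calls compress 3n bits of input to a 2n-bit chaining value."""
    W = E(H + M, G)               # first call, keyed with H || M
    G_new = xor(G, W)
    H_new = xor(H, E(M + W, H))   # second call, keyed with M || W
    return G_new, H_new
```

Note how the output W of the first call feeds into the key of the second ("tandem" chaining); the birthday-type bound proved in the paper says a collision on the 2n-bit state requires roughly 2^n ideal-cipher queries.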
Multi-property-preserving Domain Extension Using Polynomial-based Modes of Operation
 Advances in Cryptology – EUROCRYPT ’10, LNCS
Cited by 3 (0 self)
Abstract. In this paper, we propose a new double-piped mode of operation for multi-property-preserving domain extension of MACs (message authentication codes), PRFs (pseudorandom functions) and PROs (pseudorandom oracles). Our mode of operation performs twice as fast as the original double-piped mode of operation of Lucks [15] while providing comparable security. Our construction, which uses a class of polynomial-based compression functions proposed by Stam [22, 23], makes a single call to a 3n-bit to n-bit primitive at each iteration and uses a finalization function f2 at the last iteration, producing an n-bit hash function H[f1, f2] satisfying the following properties.
1. H[f1, f2] is unforgeable up to O(2^n/n) query complexity as long as f1 and f2 are unforgeable.
2. H[f1, f2] is pseudorandom up to O(2^n/n) query complexity as long as f1 is unforgeable and f2 is pseudorandom.
3. H[f1, f2] is indifferentiable from a random oracle up to O(2^{2n/3}) query complexity as long as f1 and f2 are public random functions.
To our knowledge, our result constitutes the first time O(2^n/n) unforgeability has been achieved using only an unforgeable primitive of n-bit output length. (Yasuda showed unforgeability of O(2^{5n/6}) for Lucks’ construction assuming an unforgeable primitive, but the analysis is suboptimal; in the appendix, we show how Yasuda’s bound can be improved to O(2^n).) In related work, we strengthen Stam’s collision resistance analysis of polynomial-based compression functions (showing that unforgeability of the primitive suffices) and discuss how to implement our mode by replacing f1 with a 2n-bit key blockcipher in Davies-Meyer mode or by replacing f1 with the cascade of two 2n-bit to n-bit compression functions.
Blockcipher-Based Hashing Revisited
 Fast Software Encryption – FSE ’09
, 2009
Cited by 3 (0 self)
Abstract. We revisit the rate-1 blockcipher-based hash functions as first studied by Preneel, Govaerts and Vandewalle (Crypto ’93) and later extensively analysed by Black, Rogaway and Shrimpton (Crypto ’02). We analyse a further generalization where any pre- and postprocessing is considered. This leads to a clearer understanding of the current classification of rate-1 blockcipher-based schemes as introduced by Preneel et al. and refined by Black et al. In addition, we also gain insight into chopped, overloaded and supercharged compression functions. In the latter category we propose two compression functions based on a single call to a blockcipher whose collision resistance exceeds the birthday bound on the cipher’s block length.
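The canonical rate-1 scheme in the PGV classification discussed above is the Davies-Meyer compression function, iterated in Merkle-Damgård fashion. The sketch below is illustrative only: E is a keyed SHA-256-based stand-in (not a real, invertible cipher, as a genuine instantiation such as AES-128 would be), N is a toy parameter, and padding/length encoding is omitted.

```python
import hashlib

N = 16  # bytes; n = 128-bit block, a toy parameter for illustration


def E(key: bytes, block: bytes) -> bytes:
    """Stand-in for an n-bit blockcipher keyed by the n-bit message block.
    NOT a real cipher -- a keyed SHA-256 truncation used purely as a sketch."""
    return hashlib.sha256(key + block).digest()[:N]


def davies_meyer(h: bytes, m: bytes) -> bytes:
    """One rate-1 compression step: H_i = E_{M_i}(H_{i-1}) XOR H_{i-1}.
    'Rate 1' means one message block is processed per blockcipher call."""
    w = E(m, h)
    return bytes(a ^ b for a, b in zip(w, h))


def dm_hash(blocks, iv: bytes = b"\x00" * N) -> bytes:
    """Plain Merkle-Damgard iteration of the compression function.
    No padding or length encoding, so only fixed-format inputs are safe."""
    h = iv
    for m in blocks:
        h = davies_meyer(h, m)
    return h
```

The feed-forward XOR of the chaining value is what makes the compression function non-invertible even though E itself is a permutation for each key; the other secure PGV schemes vary where the chaining value and message block enter the key, plaintext, and feed-forward positions.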
The preimage security of double-block-length compression functions. Cryptology ePrint Archive, Report 2011/210, 2011. http://eprint.iacr.org
Cited by 2 (1 self)
Abstract. We give improved bounds on the preimage security of the three “classical” double-block-length, double-call, blockcipher-based compression functions, these being Abreast-DM, Tandem-DM and Hirose’s scheme. For Hirose’s scheme, we show that an adversary must make at least 2^{2n-5} blockcipher queries to achieve chance 0.5 of inverting a randomly chosen point in the range. For Abreast-DM and Tandem-DM we show that at least 2^{2n-10} queries are necessary. These bounds improve upon the previous best bounds of Ω(2^n) queries, and are optimal up to a constant factor since the compression functions in question have a range of size 2^{2n}.
More Insights on Blockcipher-Based Hash Functions
Cited by 1 (0 self)
Abstract. In this paper we give more insights on the security of blockcipher-based hash functions. We give a very simple criterion for building a large class of secure Single-Block-Length (SBL) or double-call Double-Block-Length (DBL) compression functions based on (kn, n)-blockciphers, where kn is the key length, n is the block length and k is an integer. This criterion is simpler than previous ones in the literature. From it we obtain many results and draw a general conclusion about this class of blockcipher-based hash functions, solving an open problem left by Hirose. Our results show that building a secure double-call DBL compression function requires k ≥ m + 1, where m is the number of message blocks; thus, for k = 2 we can only build secure double-call DBL blockcipher-based compression functions of rate 1/2. Finally, we point out flaws in Stam’s theorem about supercharged functions, give a revised version of the theorem, and add another condition for the security of supercharged compression functions.
Efficient Hashing using the AES Instruction Set
Cited by 1 (0 self)
Abstract. In this work, we provide a software benchmark for a large range of 256-bit blockcipher-based hash functions. We instantiate the underlying blockcipher with AES, which allows us to exploit the recent AES instruction set (AES-NI). Since AES itself only outputs 128 bits, we consider double-block-length constructions, as well as (single-block-length) constructions based on Rijndael-256. Although we primarily target architectures supporting AES-NI, our framework has much broader applications, as it estimates the performance of these hash functions on any (micro)architecture given AES benchmark results. As far as we are aware, this is the first comprehensive performance comparison of multi-block-length hash functions in software.
Blockcipher-Based Double-Length Hash Functions for Pseudorandom Oracles
Abstract. PRO (pseudorandom oracle) is an important security notion for hash functions because it ensures that the hash function inherits all properties of a random oracle up to the PRO bound (e.g., security against length extension attacks, collision resistance, preimage resistance and so on). In this paper, we propose new blockcipher-based double-length hash functions, which are PROs up to O(2^n) query complexity in the ideal cipher model. Our hash functions use a single blockcipher, which encrypts an n-bit string using a 2n-bit key, and map an input of arbitrary length to an n-bit output. Since many blockciphers support a 2n-bit key (e.g. AES supports a 256-bit key), the assumption of a blockcipher with 2n-bit key length is acceptable. To our knowledge, this is the first construction of double-length hash functions based on a single (practical-size) blockcipher with birthday-type PRO security.
Attacks on a Double-Length Blockcipher-Based Hash Proposal
Abstract. In this paper we attack a 2n-bit double-length hash function proposed by Lee et al. This proposal is a blockcipher-based hash function with hash rate 2/3. The designers claimed that it achieves ideal collision resistance and gave a security proof. However, we find a collision attack with complexity Ω(2^{3n/4}) and a preimage attack with complexity Ω(2^n). Our results show this construction is much worse than an ideal 2n-bit hash function.
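The "ideal collision resistance" benchmark that the attack above falls short of is the generic birthday bound: on any m-bit hash, a collision can be found with roughly 2^{m/2} evaluations by storing outputs until one repeats. The toy below demonstrates this on a deliberately tiny 24-bit hash (an arbitrary truncation of SHA-256, chosen only so the search finishes instantly); on a full-size hash the same procedure would need about 2^{m/2} work.

```python
import hashlib

OUT_BYTES = 3  # 24-bit toy output, so a birthday collision needs ~2^12 tries


def toy_hash(data: bytes) -> bytes:
    """Deliberately weak hash: SHA-256 truncated to 24 bits, for demo only."""
    return hashlib.sha256(data).digest()[:OUT_BYTES]


def birthday_collision():
    """Generic birthday attack: hash distinct messages, store outputs,
    and stop at the first repeated digest. Expected cost ~2^{m/2}."""
    seen = {}
    i = 0
    while True:
        msg = i.to_bytes(8, "big")
        h = toy_hash(msg)
        if h in seen:
            return seen[h], msg  # two distinct messages, same digest
        seen[h] = msg
        i += 1


m1, m2 = birthday_collision()
```

A 2n-bit hash is called ideally collision resistant when no attack beats this 2^n generic cost, which is why the Ω(2^{3n/4}) collision attack above disproves the designers’ claim.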