Results 1–10 of 10
Security Amplification for Interactive Cryptographic Primitives
Cited by 4 (2 self)
Abstract. Security amplification is an important problem in cryptography: starting with a “weakly secure” variant of some cryptographic primitive, the goal is to build a “strongly secure” variant of the same primitive. This question has been successfully studied for a variety of important cryptographic primitives, such as one-way functions, collision-resistant hash functions, encryption schemes and weakly verifiable puzzles. However, all these tasks were non-interactive. In this work we study security amplification of interactive cryptographic primitives, such as message authentication codes (MACs), digital signatures (SIGs) and pseudorandom functions (PRFs). In particular, we prove direct product theorems for MACs/SIGs and an XOR lemma for PRFs, thereby obtaining nearly optimal security amplification for these primitives. Our main technical result is a new Chernoff-type theorem for Dynamic Weakly Verifiable Puzzles, which we introduce in this paper as a generalization of ordinary Weakly Verifiable Puzzles.
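The XOR lemma mentioned in the abstract concerns the natural construction that XORs the outputs of several independently keyed copies of a weak PRF. A minimal sketch, with HMAC-SHA256 standing in for the base PRF purely for illustration (the paper's result is about arbitrary weak PRFs, not HMAC):

```python
# XOR construction for PRF security amplification: the combined PRF
# XORs q independently keyed copies of a (possibly weak) base PRF.
import hashlib
import hmac
import os


def base_prf(key: bytes, x: bytes) -> bytes:
    # HMAC-SHA256 is only a stand-in for the abstract base PRF
    return hmac.new(key, x, hashlib.sha256).digest()


def xor_prf(keys, x: bytes) -> bytes:
    out = bytes(32)
    for k in keys:
        y = base_prf(k, x)
        out = bytes(a ^ b for a, b in zip(out, y))
    return out


keys = [os.urandom(32) for _ in range(4)]
tag = xor_prf(keys, b"message")
```

Intuitively, the combined output is pseudorandom as long as at least one copy behaves pseudorandomly, and the XOR lemma quantifies how the distinguishing advantages multiply.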
Compression from collisions, or why CRHF combiners have a long output
 Advances in Cryptology – CRYPTO 2008, Lecture Notes in Computer Science
Cited by 3 (1 self)
Abstract. A black-box combiner for collision resistant hash functions (CRHF) is a construction which, given black-box access to two hash functions, is collision resistant if at least one of the components is collision resistant. In this paper we prove a lower bound on the output length of black-box combiners for CRHFs. The bound we prove is basically tight, as it is achieved by a recent construction of Canetti et al. [Crypto’07]. The best previously known lower bounds only ruled out a very restricted class of combiners having a very strong security reduction: the reduction was required to output collisions for both underlying candidate hash functions given a single collision for the combiner (Canetti et al. [Crypto’07], building on Boneh and Boyen [Crypto’06] and Pietrzak [Eurocrypt’07]). Our proof uses a lemma similar to the elegant “reconstruction lemma” of Gennaro and Trevisan [FOCS’00], which states that any function which is not one-way is compressible (and thus a uniformly random function must be one-way). In a similar vein, we show that a function which is not collision resistant is compressible. We also borrow ideas from recent work by Haitner et al. [FOCS’07], who show that one can prove the reconstruction lemma even relative to some very powerful oracles (in our case this will be an exponential-time collision-finding oracle).
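The construction whose output length this lower bound shows to be essentially optimal is the classical concatenation combiner C(x) = H1(x) || H2(x): any collision for C is simultaneously a collision for both components. A minimal sketch, with SHA-256 and SHA3-256 standing in for the two candidate hash functions:

```python
# Concatenation combiner for two CRHF candidates: collision resistant
# if either component is, at the cost of a 2n-bit output.
import hashlib


def concat_combiner(x: bytes) -> bytes:
    h1 = hashlib.sha256(x).digest()   # candidate 1 (stand-in)
    h2 = hashlib.sha3_256(x).digest() # candidate 2 (stand-in)
    return h1 + h2
```

The paper's result says that no black-box combiner can do significantly better than this doubled output length.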
Universal One-Way Hash Functions via Inaccessible Entropy
, 2010
Cited by 3 (0 self)
This paper revisits the construction of Universal One-Way Hash Functions (UOWHFs) from any one-way function due to Rompel (STOC 1990). We give a simpler construction of UOWHFs, which also obtains better efficiency and security. The construction exploits a strong connection to the recently introduced notion of inaccessible entropy (Haitner et al., STOC 2009). With this perspective, we observe that a small tweak of any one-way function f is already a weak form of a UOWHF: consider F(x, i) that outputs the i-bit long prefix of f(x). If F were a UOWHF then, given a random x and i, it would be hard to come up with x′ ≠ x such that F(x, i) = F(x′, i). While this may not be the case, we show (rather easily) that it is hard to sample x′ with almost full entropy among all the possible such values of x′. The rest of our construction simply amplifies and exploits this basic property. With this and other recent works, we have that the constructions of three fundamental cryptographic primitives (Pseudorandom Generators, Statistically Hiding Commitments and UOWHFs) out of one-way functions are to a large extent unified. In particular, all three constructions rely on and manipulate computational notions of entropy in similar ways. Pseudorandom Generators rely on the well-established notion of pseudoentropy, whereas Statistically Hiding Commitments and UOWHFs rely on the newer notion of inaccessible entropy.
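The basic object F(x, i) from the abstract is easy to make concrete. A minimal sketch, with SHA-256 standing in for the arbitrary one-way function f (the security argument in the paper concerns inaccessible entropy, not any property of SHA-256):

```python
# F(x, i) = the i-bit prefix of f(x), for a stand-in one-way function f.
import hashlib


def f(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()  # stand-in for an arbitrary OWF


def F(x: bytes, i: int) -> int:
    # interpret f(x) as a bit string and keep only the first i bits
    digest = int.from_bytes(f(x), "big")
    total_bits = 8 * len(f(x))
    return digest >> (total_bits - i)
```

For a random i, many inputs x′ collide with x under F (the prefix discards information), but the paper shows it is hard to sample such x′ with almost full entropy.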
Locally Computable UOWHF with Linear Shrinkage
We study the problem of constructing locally computable Universal One-Way Hash Functions (UOWHFs) H: {0,1}^n → {0,1}^m. A construction with constant output locality, where every bit of the output depends only on a constant number of bits of the input, was established by [Applebaum, Ishai, and Kushilevitz, SICOMP 2006]. However, this construction suffers from two limitations: (1) it can only achieve a sublinear shrinkage of n − m = n^{1−ε}; and (2) it has a super-constant input locality, i.e., some inputs influence a large super-constant number of outputs. This leaves open the question of realizing UOWHFs with constant output locality and linear shrinkage of n − m = εn, or UOWHFs with constant input locality and minimal shrinkage of n − m = 1. We settle both questions simultaneously by providing the first construction of UOWHFs with linear shrinkage, constant input locality, and constant output locality. Our construction is based on the one-wayness of “random” local functions, a variant of an assumption made by Goldreich (ECCC 2000). Using a transformation of [Ishai, Kushilevitz, Ostrovsky and Sahai, STOC 2008], our UOWHFs give rise to a digital signature scheme with a minimal additive complexity overhead: signing n-bit messages with security parameter κ takes only O(n + κ) time instead of O(nκ) as in typical constructions. Previously, such signatures were only known to exist under an exponential hardness assumption. As an additional contribution, we obtain new locally-computable hardness amplification procedures for UOWHFs that preserve linear shrinkage.
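A "random local function" in the spirit of Goldreich's assumption can be sketched as follows: each output bit applies a fixed d-ary predicate to d input bits chosen by a public random hypergraph, so the output locality is the constant d. The predicate and parameters below are illustrative, not the ones from the paper:

```python
# Sketch of a Goldreich-style random local function with constant
# output locality d: each output bit reads exactly d input bits.
import random


def make_local_fn(n: int, m: int, d: int = 5, seed: int = 0):
    rng = random.Random(seed)                       # public randomness
    graph = [rng.sample(range(n), d) for _ in range(m)]

    def predicate(bits):                            # example d-ary predicate
        return bits[0] ^ bits[1] ^ (bits[2] & bits[3]) ^ bits[4]

    def H(x):                                       # x: list of n bits
        return [predicate([x[j] for j in idx]) for idx in graph]

    return H


H = make_local_fn(n=100, m=80)       # linear shrinkage: n - m = 0.2 n
y = H([0] * 100)
```

Note that uniformly random sampling only bounds input locality on average; the paper's construction also guarantees constant input locality, which this sketch does not.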
Analysis of Primitives and Protocols Editor
, 2010
PU: Public [X]
PP: Restricted to other programme participants (including the Commission services)
RE: Restricted to a group specified by the consortium (including the Commission services)
CO: Confidential, only for members of the consortium (including the Commission services)
Jointly Executed Research Activities on Design and
Activities on Design and Analysis of Primitives and Protocols Editor
PU: Public [X]
PP: Restricted to other programme participants (including the Commission services)
RE: Restricted to a group specified by the consortium (including the Commission services)
CO: Confidential, only for members of the consortium (including the Commission services)
Final Report on Jointly Executed Research
Counterexamples to Hardness Amplification Beyond Negligible
, 2012
If we have a problem that is mildly hard, can we create a problem that is significantly harder? A natural approach to hardness amplification is the “direct product”; instead of asking an attacker to solve a single instance of a problem, we ask the attacker to solve several independently generated ones. Interestingly, proving that the direct product amplifies hardness is often highly non-trivial, and in some cases may be false. For example, it is known that the direct product (i.e. “parallel repetition”) of general interactive games may not amplify hardness at all. On the other hand, positive results show that the direct product does amplify hardness for many basic primitives such as one-way functions/relations, weakly verifiable puzzles, and signatures. Even when positive direct product theorems are shown to hold for some primitive, the parameters are surprisingly weaker than what we may have expected. For example, if we start with a weak one-way function that no poly-time attacker can break with probability > 1/2, then the direct product provably amplifies hardness to some negligible probability. Naturally, we would expect that we can amplify hardness exponentially, all the way to 2^{−n} probability, or at least to some fixed/known negligible function such as n^{−log n} in the security parameter n, just by taking sufficiently many instances of the weak primitive. Although it is known that such parameters cannot be proven via black-box reductions, they may seem like reasonable conjectures, and, to the best of our knowledge, are widely believed to hold. In fact, a conjecture along these lines was introduced in a survey of Goldreich, Nisan and Wigderson (ECCC ’95). In this work, we show that such conjectures are false by providing simple but surprising counterexamples. In particular, we construct weakly secure signatures and one-way functions, for which standard hardness amplification results are known to hold, but for which hardness does not amplify beyond just negligible. That is, for any negligible function ε(n), we instantiate these primitives so that the direct product can always be broken with probability ε(n), no matter how many copies we take.
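The direct product itself is the simplest construction imaginable: the attacker is handed k independently generated instances and must solve all of them. A minimal sketch, with SHA-256 standing in for the weak one-way function:

```python
# Direct product of a one-way function f: invert all k instances at once.
import hashlib
import os


def f(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()  # stand-in for a weak OWF


def direct_product(xs):
    return [f(x) for x in xs]


# naive intuition: if each instance falls independently with probability
# delta, all k fall together with probability delta ** k
delta, k = 0.25, 8
instances = [os.urandom(16) for _ in range(k)]
challenge = direct_product(instances)
```

The counterexamples in the paper show precisely that this delta ** k intuition fails in general: for their instantiations the success probability stops shrinking at some fixed negligible function, no matter how large k grows.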
Hash Function Combiners in TLS and SSL
A preliminary version appears in CT-RSA 2010, Lecture Notes in Computer Science, Springer-Verlag, 2010.
Abstract. The TLS and SSL protocols are widely used to ensure secure communication over an untrusted network. Therein, a client and server first engage in the so-called handshake protocol to establish shared keys that are subsequently used to encrypt and authenticate the data transfer. To ensure that the obtained keys are as secure as possible, TLS and SSL deploy hash function combiners for key derivation and the authentication step in the handshake protocol. A robust combiner for hash functions takes two candidate implementations and constructs a hash function which is secure as long as at least one of the candidates is secure. In this work, we analyze the security of the proposed TLS/SSL combiner constructions for pseudorandom functions and message authentication codes, respectively.
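The combiner analyzed here is the PRF of TLS up to version 1.1 (RFC 2246): the secret is split into two halves, each half drives an HMAC-based expansion (P_MD5 and P_SHA1), and the two streams are XORed. A minimal sketch of that construction, with the TLS label folded into the seed for brevity:

```python
# Sketch of the TLS (<= 1.1) PRF combiner: XOR of an MD5-based and a
# SHA-1-based HMAC expansion, each keyed with half of the secret.
import hashlib
import hmac


def p_hash(algo, secret: bytes, seed: bytes, n: int) -> bytes:
    out, a = b"", seed
    while len(out) < n:
        a = hmac.new(secret, a, algo).digest()          # A(i) = HMAC(A(i-1))
        out += hmac.new(secret, a + seed, algo).digest()
    return out[:n]


def tls_prf(secret: bytes, seed: bytes, n: int) -> bytes:
    half = (len(secret) + 1) // 2
    s1, s2 = secret[:half], secret[-half:]              # halves may overlap
    md5_part = p_hash(hashlib.md5, s1, seed, n)
    sha1_part = p_hash(hashlib.sha1, s2, seed, n)
    return bytes(a ^ b for a, b in zip(md5_part, sha1_part))
```

The intent is that the XOR output stays pseudorandom as long as at least one of the two hash functions yields a secure PRF; the paper examines under which assumptions this actually holds.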
Multi-Property Preserving Combiners
A preliminary version appears in TCC, Lecture Notes in Computer Science, Springer-Verlag, 2008.
Abstract. A robust combiner for hash functions takes two candidate implementations and constructs a hash function which is secure as long as at least one of the candidates is secure. So far, hash function combiners only aim at preserving a single property such as collision resistance or pseudorandomness. However, when hash functions are used in protocols like TLS they are often required to provide several properties simultaneously. We therefore put forward the notion of multi-property preserving combiners, clarify some aspects of different definitions for such combiners, and propose a construction that provably preserves collision resistance, pseudorandomness, “random-oracleness”, target collision resistance and message authentication according to our strongest notion.
On the existence of robust combiners for cryptographic hash functions
Abstract. A (k, l)-robust combiner for collision resistant hash functions is a construction which takes l hash functions and combines them so that if at least k of the components are collision resistant, then so is the resulting combination. A black-box (k, l)-robust combiner is a robust combiner which takes its components as black boxes. A trivial black-box combiner is the concatenation of any (l − k + 1) of the hash functions. Boneh and Boyen [1], followed by Pietrzak [3], proved that for collision resistance we cannot do much better than concatenation, i.e. there does not exist a black-box (k, l)-robust combiner for collision resistance whose output is significantly shorter than the output of the trivial combiner. In this paper we analyze whether robust combiners for other hash function properties (e.g. preimage resistance and second preimage resistance) exist. Key words: Cryptographic hash function, robust combiner, preimage resistance, second preimage resistance
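The trivial (k, l)-combiner from the abstract is worth seeing written out: concatenating any l − k + 1 of the candidates suffices, because if at least k candidates are collision resistant then at least one of the chosen l − k + 1 is, and a collision for the concatenation collides every chosen component. A minimal sketch with three stand-in candidates:

```python
# Trivial black-box (k, l)-robust combiner for collision resistance:
# concatenate any l - k + 1 of the l candidate hash functions.
import hashlib


def trivial_combiner(hash_fns, k: int, x: bytes) -> bytes:
    chosen = hash_fns[: len(hash_fns) - k + 1]   # any l - k + 1 of them
    return b"".join(h(x) for h in chosen)


candidates = [
    lambda x: hashlib.sha256(x).digest(),
    lambda x: hashlib.sha3_256(x).digest(),
    lambda x: hashlib.blake2s(x).digest(),
]
tag = trivial_combiner(candidates, k=2, x=b"msg")   # concatenates 2 of 3
```

The Boneh-Boyen and Pietrzak lower bounds cited above say this (l − k + 1)-fold output length cannot be significantly improved for collision resistance; the paper asks whether the same holds for preimage and second preimage resistance.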