Results 1–10 of 35
Design and Analysis of Practical Public-Key Encryption Schemes Secure against Adaptive Chosen Ciphertext Attack
SIAM Journal on Computing, 2001
Cited by 189 (11 self)
Abstract:
A new public key encryption scheme, along with several variants, is proposed and analyzed. The scheme and its variants are quite practical, and are proved secure against adaptive chosen ciphertext attack under standard intractability assumptions. These appear to be the first public-key encryption schemes in the literature that are simultaneously practical and provably secure.
Signature Schemes Based on the Strong RSA Assumption
ACM Transactions on Information and System Security, 1998
Cited by 150 (8 self)
Abstract:
We describe and analyze a new digital signature scheme. The new scheme is quite efficient, does not require the signer to maintain any state, and can be proven secure against adaptive chosen message attack under a reasonable intractability assumption, the so-called Strong RSA Assumption. Moreover, a hash function can be incorporated into the scheme in such a way that it is also secure in the random oracle model under the standard RSA Assumption.
Merkle-Damgård Revisited: How to Construct a Hash Function
Advances in Cryptology, CRYPTO 2005
Cited by 74 (8 self)
Abstract:
The most common way of constructing a hash function (e.g., SHA-1) is to iterate a compression function on the input message. The compression function is usually designed from scratch or made out of a block cipher. In this paper, we introduce a new security notion for hash functions, stronger than collision resistance. Under this notion, the arbitrary-length hash function H must behave as a random oracle when the fixed-length building block is viewed as a random oracle or an ideal block cipher. The key property is that if a particular construction meets this definition, then any cryptosystem proven secure assuming H is a random oracle remains secure if one plugs in this construction (still assuming that the underlying fixed-length primitive is ideal). In this paper, we show that the current design principle behind hash functions such as SHA-1 and MD5, the (strengthened) Merkle-Damgård transformation, does not satisfy this security notion. We provide several constructions that provably satisfy this notion; these new constructions introduce minimal changes to the plain Merkle-Damgård construction and are easily implementable in practice.
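The iteration this abstract refers to can be sketched as plain strengthened Merkle-Damgård over a fixed-length compression function. The compression function below is a toy stand-in built from SHA-256, purely for illustration; the block size and IV are assumptions, not any standard's parameters:

```python
import hashlib
import struct

BLOCK = 64  # compression-function message-block size in bytes (illustrative)

def compress(chaining: bytes, block: bytes) -> bytes:
    # Toy fixed-length compression function; a real design (e.g., inside
    # SHA-1) is a dedicated primitive, not a call to another hash.
    return hashlib.sha256(chaining + block).digest()

def md_hash(message: bytes, iv: bytes = b"\x00" * 32) -> bytes:
    # Strengthened Merkle-Damgård padding: a 1 bit (0x80), zero bytes,
    # then the 8-byte bit length of the message ("MD strengthening").
    length = struct.pack(">Q", 8 * len(message))
    padded = message + b"\x80"
    padded += b"\x00" * (-(len(padded) + 8) % BLOCK) + length
    # Iterate the compression function over the padded message blocks.
    h = iv
    for i in range(0, len(padded), BLOCK):
        h = compress(h, padded[i:i + BLOCK])
    return h
```

The final chaining value is output directly, which is exactly the structural choice the paper's constructions modify.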
Using Hash Functions as a Hedge against Chosen Ciphertext Attack
2000
Cited by 67 (7 self)
Abstract:
The cryptosystem recently proposed by Cramer and Shoup [5] is a practical public key cryptosystem that is secure against adaptive chosen ciphertext attack provided the Decisional Diffie-Hellman assumption is true. Although this is a reasonable intractability assumption, it would be preferable to base a security proof on a weaker assumption, such as the Computational Diffie-Hellman assumption. Indeed, this cryptosystem in its most basic form is in fact insecure if the Decisional Diffie-Hellman assumption is false. In this paper we present a practical hybrid scheme that is just as efficient as the scheme of Cramer and Shoup; we prove that the scheme is secure if the Decisional Diffie-Hellman assumption is true; and we give strong evidence that the scheme is secure if the weaker Computational Diffie-Hellman assumption is true, by providing a proof of security in the random oracle model.
Strengthening Digital Signatures Via Randomized Hashing
CRYPTO, 2006
Cited by 58 (2 self)
Abstract:
We propose randomized hashing as a mode of operation for cryptographic hash functions intended for use with standard digital signatures, without necessitating any changes in the internals of the underlying hash function (e.g., the SHA family) or in the signature algorithms (e.g., RSA or DSA). The goal is to free practical digital signature schemes from their current reliance on strong collision resistance by basing the security of these schemes on significantly weaker properties of the underlying hash function, thus providing a safety net in case the (current or future) hash functions in use turn out to be less resilient to collision search than initially thought. We design a specific mode of operation that takes into account engineering considerations (such as simplicity, efficiency, and compatibility with existing implementations) as well as analytical soundness. Specifically, the scheme consists of a regular use of the hash function with randomization applied only to the message before it is input to the hash function. We formally show the sufficiency of assumptions weaker than collision resistance for proving the security of the scheme.
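As a rough illustration of the message-randomization idea, one can pick a fresh per-signature salt, XOR it into each message block, prepend it, and then apply an ordinary, unmodified hash function. This is a simplified sketch only: the block size, zero padding, and salt handling here are assumptions, not the paper's exact mode.

```python
import hashlib
import os

BLOCK = 64  # salt/block size in bytes (an assumption for this sketch)

def randomized_digest(message: bytes, salt: bytes) -> bytes:
    # Zero-pad the message to a multiple of the block size (illustration only).
    padded = message + b"\x00" * (-len(message) % BLOCK)
    # XOR the salt into every message block, then prepend the salt itself,
    # so the randomization touches only the input, not the hash internals.
    mixed = salt + b"".join(
        bytes(a ^ b for a, b in zip(padded[i:i + BLOCK], salt))
        for i in range(0, len(padded), BLOCK)
    )
    return hashlib.sha256(mixed).digest()

salt = os.urandom(BLOCK)                       # fresh for each signature
digest = randomized_digest(b"message to sign", salt)
# The signer signs `digest` and transmits `salt` alongside the signature.
```

The point of the design is visible in the code: `hashlib.sha256` is used exactly as-is, so existing hash implementations need no modification.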
On Notions of Security for Deterministic Encryption, and Efficient Constructions without Random Oracles (full version)
2008
Cited by 44 (0 self)
Abstract:
The study of deterministic public-key encryption was initiated by Bellare et al. (CRYPTO ’07), who provided the “strongest possible” notion of security for this primitive (called PRIV) and constructions in the random oracle (RO) model. We focus on constructing efficient deterministic encryption schemes without random oracles. To do so, we propose a slightly weaker notion of security, saying that no partial information about encrypted messages should be leaked as long as each message is a priori hard to guess given the others (while PRIV did not have the latter restriction). Nevertheless, we argue that this version seems adequate for many practical applications. We show equivalence of this definition to single-message and indistinguishability-based ones, which are easier to work with. Then we give general constructions of both chosen-plaintext-attack (CPA) and chosen-ciphertext-attack (CCA) secure deterministic encryption schemes, as well as efficient instantiations of them under standard number-theoretic assumptions. Our constructions build on the recently introduced framework of Peikert and Waters (STOC ’08) for constructing CCA-secure probabilistic encryption schemes, extending it to the deterministic-encryption setting as well.
Adaptively secure threshold cryptography: Introducing concurrency, removing erasures
2000
Cited by 31 (3 self)
Abstract:
We put forward two new measures of security for threshold schemes secure in the adaptive adversary model: security under concurrent composition, and security without the assumption of reliable erasure. Using novel constructions and analytical tools, in both these settings we exhibit efficient secure threshold protocols for a variety of cryptographic applications. In particular, based on the recent scheme by Cramer and Shoup, we construct adaptively secure threshold cryptosystems secure against adaptive chosen ciphertext attack under the DDH intractability assumption. Our techniques are also applicable to other cryptosystems and signature schemes, like RSA, DSS, and ElGamal. Our techniques include the first efficient implementation, for a wide but special class of protocols, of secure channels in the erasure-free adaptive model. Of independent interest, we present the notion of a committed proof.
Hash Functions: From Merkle-Damgård to Shoup
EUROCRYPT, 2001
Cited by 16 (0 self)
Abstract:
In this paper we study two possible approaches to improving existing schemes for constructing hash functions that hash arbitrarily long messages. First, we introduce a continuum of function classes that lie between universal one-way hash functions and collision-resistant functions. For some of these classes, efficient composition schemes (yielding short keys) exist. Second, we prove that the schedule of the Shoup construction, which is the most efficient composition scheme for universal one-way hash functions known so far, is optimal.
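The schedule in question can be sketched concretely. In Shoup's universal one-way hash composition, a small set of key masks is reused across blocks: before the i-th compression call, mask number ν(i) is XORed into the chaining value, where ν(i) is the exponent of the largest power of 2 dividing i, so hashing t blocks needs only about log₂ t + 1 masks. The compression function below is a toy stand-in, and the details are simplified:

```python
import hashlib

def nu(i: int) -> int:
    # nu(i): exponent of the largest power of 2 dividing block index i (i >= 1).
    return (i & -i).bit_length() - 1

def shoup_schedule(t: int) -> list:
    # Mask indices used for blocks 1..t.
    return [nu(i) for i in range(1, t + 1)]

def shoup_uowhf(masks: list, blocks: list, iv: bytes) -> bytes:
    # XOR mask number nu(i) into the chaining value before the i-th
    # compression call (toy compression function for illustration).
    h = iv
    for i, block in enumerate(blocks, start=1):
        masked = bytes(a ^ b for a, b in zip(h, masks[nu(i)]))
        h = hashlib.sha256(masked + block).digest()
    return h
```

For t = 8 blocks the schedule is [0, 1, 0, 2, 0, 1, 0, 3], so only four distinct masks are keyed; the paper's optimality result concerns exactly this reuse pattern.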
Hash Functions in the Dedicated-Key Setting: Design Choices and MPP Transforms
ICALP ’07, volume 4596 of LNCS, 2007
Cited by 13 (1 self)
Abstract:
In the dedicated-key setting, one starts with a compression function f: {0,1}^k × {0,1}^(n+d) → {0,1}^n and builds a family of hash functions H_f: K × M → {0,1}^n indexed by a key space K. This is different from the more traditional design approach used to build hash functions such as MD5 or SHA-1, in which compression functions and hash functions do not have dedicated key inputs. We explore the benefits and drawbacks of building hash functions in the dedicated-key setting (as compared to the more traditional approach), highlighting several unique features of the former. Should one choose to build hash functions in the dedicated-key setting, we suggest utilizing multi-property-preserving (MPP) domain extension transforms. We analyze seven existing dedicated-key transforms with regard to the MPP goal and propose two simple …
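The dedicated-key interface from the abstract's first sentence can be phrased in code: the key is an explicit input to every compression call. The transform below is one simple hypothetical way to thread the key through a Merkle-Damgård-style loop, not one of the seven transforms the paper analyzes, and the toy compression function and zero padding are assumptions:

```python
import hashlib

def f(key: bytes, x: bytes) -> bytes:
    # Dedicated-key compression function f: {0,1}^k x {0,1}^(n+d) -> {0,1}^n,
    # instantiated from SHA-256 for illustration only.
    return hashlib.sha256(key + x).digest()

def H(key: bytes, message: bytes, iv: bytes = b"\x00" * 32, d: int = 64) -> bytes:
    # A simple keyed domain extension: zero-pad to d-byte blocks (not
    # injective; illustration only) and feed the same key to every
    # compression call.
    padded = message + b"\x00" * (-len(message) % d)
    h = iv
    for i in range(0, len(padded), d):
        h = f(key, h + padded[i:i + d])
    return h
```

Contrast with the traditional setting: in MD5 or SHA-1 there is no `key` argument at all, which is precisely the design difference the paper examines.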
Bounds on the Efficiency of Encryption and Digital Signatures
2002
Cited by 10 (0 self)
Abstract:
A central focus of modern cryptography is to investigate the weakest possible assumptions under which various cryptographic algorithms exist. Typically, a proof that a "weak" primitive (e.g., a one-way function) implies the existence of some "strong" algorithm (e.g., a private-key encryption scheme) proceeds by giving an explicit construction of the latter from the former. Beyond merely showing such a construction, an equally important research direction is to explore the efficiency of the construction. One might argue that this line of research has become even more important now that minimal assumptions are known for many (but not all) algorithms of interest.