Mining Your Ps and Qs: Detection of Widespread Weak Keys in Network Devices
Abstract

Cited by 60 (10 self)
RSA and DSA can fail catastrophically when used with malfunctioning random number generators, but the extent to which these problems arise in practice has never been comprehensively studied at Internet scale. We perform the largest ever network survey of TLS and SSH servers and present evidence that vulnerable keys are surprisingly widespread. We find that 0.75% of TLS certificates share keys due to insufficient entropy during key generation, and we suspect that another 1.70% come from the same faulty implementations and may be susceptible to compromise. Even more alarmingly, we are able to obtain RSA private keys for 0.50% of TLS hosts and 0.03% of SSH hosts, because their public keys shared nontrivial common factors due to entropy problems, and DSA private keys for 1.03% of SSH hosts, because of insufficient signature randomness. We cluster and investigate the vulnerable hosts, finding that the vast majority appear to be headless or embedded devices. In experiments with three software components commonly used by these devices, we are able to reproduce the vulnerabilities and identify specific software behaviors that induce them, including a boot-time entropy hole in the Linux random number generator. Finally, we suggest defenses and draw lessons for developers, users, and the security community.
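The key-recovery result above rests on a single observation: two RSA moduli generated with too little entropy can end up sharing a prime factor, and a GCD then factors both. A minimal sketch with made-up toy primes (the values and variable names are illustrative only; the paper runs a batch GCD over millions of scanned keys):

```python
# Toy illustration of shared-factor key recovery. Tiny invented primes
# stand in for the 512-/1024-bit primes of real RSA keys.
from math import gcd

p, q1, q2 = 10007, 10009, 10037   # hypothetical primes

n1 = p * q1   # first device's modulus, generated with poor boot-time entropy
n2 = p * q2   # second device's modulus, reusing the same prime p

shared = gcd(n1, n2)          # recovers the common factor p
assert shared == p
assert n1 // shared == q1     # full factorization of n1 follows...
assert n2 // shared == q2     # ...and of n2, hence both private keys
```

The same computation scales to an all-pairs scan via the batch-GCD technique, which is what makes the attack feasible at Internet scale.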
More constructions of lossy and correlation-secure trapdoor functions. Cryptology ePrint Archive, Report 2009/590
, 2009
Abstract

Cited by 38 (8 self)
We propose new and improved instantiations of lossy trapdoor functions (Peikert and Waters, STOC ’08), and correlation-secure trapdoor functions (Rosen and Segev, TCC ’09). Our constructions widen the set of number-theoretic assumptions upon which these primitives can be based, and are summarized as follows:
• Lossy trapdoor functions based on the quadratic residuosity assumption. Our construction relies on modular squaring, and whereas previous such constructions were based on seemingly stronger assumptions, we present the first construction that is based solely on the quadratic residuosity assumption. We also present a generalization to higher-order power residues.
• Lossy trapdoor functions based on the composite residuosity assumption. Our construction guarantees essentially any required amount of lossiness, where at the same time the functions are more efficient than the matrix-based approach of Peikert and Waters.
• Lossy trapdoor functions based on the d-Linear assumption. Our construction both simplifies the DDH-based construction of Peikert and Waters, and admits a generalization to the whole family of d-Linear assumptions without any loss of efficiency.
• Correlation-secure trapdoor functions related to the hardness of syndrome decoding.
Keywords: Public-key encryption, lossy trapdoor functions, correlation-secure trapdoor functions. An extended abstract of this work appears in Public Key Cryptography — PKC 2010, Springer LNCS 6056.
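The modular squaring underlying the first bullet can be illustrated with a toy Rabin-style trapdoor function: squaring mod N = pq is easy, while inverting requires the trapdoor (p, q). The parameters and helper names below are invented for the sketch, and the lossiness argument itself is not captured by it:

```python
# Toy Rabin-style trapdoor function: f(x) = x^2 mod N.
# p and q are Blum primes (p, q ≡ 3 mod 4), so square roots mod each
# prime are c^((p+1)/4); the four roots mod N are combined via CRT.
p, q = 7, 11          # toy trapdoor; real primes are hundreds of digits
N = p * q

def f(x):
    return pow(x, 2, N)   # the public, easy direction

def invert(c):
    rp = pow(c, (p + 1) // 4, p)   # square root of c mod p
    rq = pow(c, (q + 1) // 4, q)   # square root of c mod q
    roots = set()
    for sp in (rp, p - rp):
        for sq in (rq, q - rq):
            # CRT-combine sp (mod p) and sq (mod q) into a root mod N
            x = (sp * q * pow(q, -1, p) + sq * p * pow(p, -1, q)) % N
            roots.add(x)
    return roots

c = f(24)
assert 24 in invert(c)   # the preimage is among the four square roots
```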
Message-Locked Encryption and Secure Deduplication
, 2012
Abstract

Cited by 26 (2 self)
We formalize a new cryptographic primitive, Message-Locked Encryption (MLE), where the key under which encryption and decryption are performed is itself derived from the message. MLE provides a way to achieve secure deduplication (space-efficient secure outsourced storage), a goal currently targeted by numerous cloud-storage providers. We provide definitions both for privacy and for a form of integrity that we call tag consistency. Based on this foundation, we make both practical and theoretical contributions. On the practical side, we provide ROM security analyses of a natural family of MLE schemes that includes deployed schemes. On the theoretical side the challenge is standard model solutions, and we make connections with deterministic encryption, hash functions secure on correlated inputs and the sample-then-extract paradigm to deliver schemes under different assumptions and for different classes of message sources. Our work shows that MLE is a primitive of both practical and theoretical interest.
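The core MLE idea can be sketched in a few lines: derive the key from the message itself, so equal plaintexts encrypt to equal ciphertexts and a storage server can deduplicate them by tag. The "cipher" below is a hash-based keystream XOR chosen only for stdlib self-containment; it is a toy illustration, not one of the schemes the paper analyzes and not secure for real use:

```python
# Toy message-locked encryption: K = H(M), so identical messages
# produce identical ciphertexts and tags, enabling deduplication.
import hashlib

def mle_encrypt(msg: bytes) -> tuple[bytes, bytes, bytes]:
    key = hashlib.sha256(msg).digest()                  # message-locked key
    stream = hashlib.sha256(b"stream" + key).digest()   # toy keystream
    ct = bytes(m ^ stream[i % 32] for i, m in enumerate(msg))
    tag = hashlib.sha256(ct).digest()                   # dedup/consistency tag
    return key, ct, tag

k1, c1, t1 = mle_encrypt(b"same file contents")
k2, c2, t2 = mle_encrypt(b"same file contents")
assert (c1, t1) == (c2, t2)   # equal plaintexts dedupe to one ciphertext
```

The determinism that enables deduplication is also why MLE can only target privacy for unpredictable messages, which is where the paper's definitions come in.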
A unified approach to deterministic encryption: New constructions and a connection to computational entropy
 TCC 2012, volume 7194 of LNCS
, 2012
Abstract

Cited by 22 (1 self)
We propose a general construction of deterministic encryption schemes that unifies prior work and gives novel schemes. Specifically, its instantiations provide:
• A construction from any trapdoor function that has sufficiently many hardcore bits.
• A construction that provides “bounded” multi-message security from lossy trapdoor functions.
The security proofs for these schemes are enabled by three tools that are of broader interest:
• A weaker and more precise sufficient condition for semantic security on a high-entropy message distribution. Namely, we show that to establish semantic security on a distribution M of messages, it suffices to establish indistinguishability for all conditional distributions M|E, where E is an event of probability at least 1/4. (Prior work required indistinguishability on all distributions of a given entropy.)
• A result about computational entropy of conditional distributions. Namely, we show that conditioning on an event E of probability p reduces the quality of computational entropy by a factor of p and its quantity by log2(1/p).
• A generalization of the leftover hash lemma to correlated distributions.
We also extend our result about computational entropy to the average case, which is useful in reasoning about leakage-resilient cryptography: leaking λ bits of information reduces the quality of computational entropy by a factor of 2^λ and its quantity by λ.
Careful with composition: Limitations of the indifferentiability framework
 EUROCRYPT 2011, volume 6632 of LNCS
, 2011
Abstract

Cited by 21 (1 self)
We exhibit a hash-based storage auditing scheme which is provably secure in the random-oracle model (ROM), but easily broken when one instead uses typical indifferentiable hash constructions. This contradicts the widely accepted belief that the indifferentiability composition theorem applies to any cryptosystem. We characterize the uncovered limitation of the indifferentiability framework by showing that the formalizations used thus far implicitly exclude security notions captured by experiments that have multiple, disjoint adversarial stages. Examples include deterministic public-key encryption (PKE), password-based cryptography, hash function nonmalleability, key-dependent message security, and more. We formalize a stronger notion, reset indifferentiability, that enables an indifferentiability-style composition theorem covering such multi-stage security notions, but then show that practical hash constructions cannot be reset indifferentiable. We discuss how these limitations also affect the universal composability framework. We finish by showing the chosen-distribution attack security (which requires a multi-stage game) of some important public-key encryption schemes built using a hash construction paradigm introduced by Dodis, Ristenpart, and Shrimpton.
Better security for deterministic public-key encryption: The auxiliary-input setting
 CRYPTO 2011, volume 6841 of LNCS
, 2011
Abstract

Cited by 18 (1 self)
Deterministic public-key encryption, introduced by Bellare, Boldyreva, and O’Neill (CRYPTO ’07), provides an alternative to randomized public-key encryption in various scenarios where the latter exhibits inherent drawbacks. A deterministic encryption algorithm, however, cannot satisfy any meaningful notion of security when the plaintext is distributed over a small set. Bellare et al. addressed this difficulty by requiring semantic security to hold only when the plaintext has high min-entropy from the adversary’s point of view. In many applications, however, an adversary may obtain auxiliary information that is related to the plaintext. Specifically, when deterministic encryption is used as a building block of a larger system, it is rather likely that plaintexts do not have high min-entropy from the adversary’s point of view. In such cases, the framework of Bellare et al. might fall short of providing robust security guarantees. We formalize a framework for studying the security of deterministic public-key encryption schemes with respect to auxiliary inputs. Given the trivial requirement that the plaintext should not be efficiently recoverable from the auxiliary input, we focus on hard-to-invert auxiliary inputs.
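The small-set limitation the abstract starts from is easy to demonstrate: against any deterministic, publicly computable encryption, an adversary who sees a ciphertext can simply re-encrypt every candidate plaintext and compare. The "scheme" below is a hash stand-in invented for the sketch; the attack applies to any deterministic scheme verbatim:

```python
# Brute-force attack on deterministic encryption of a low-entropy plaintext.
import hashlib

def det_encrypt(pk: bytes, msg: bytes) -> bytes:
    # Deterministic stand-in: same (pk, msg) always yields the same ciphertext.
    return hashlib.sha256(pk + msg).digest()

pk = b"public-key-bytes"                 # known to the adversary
candidates = [b"yes", b"no", b"maybe"]   # small plaintext space
ct = det_encrypt(pk, b"no")              # observed ciphertext

# Adversary: encrypt each candidate under pk and match against ct.
recovered = next(m for m in candidates if det_encrypt(pk, m) == ct)
assert recovered == b"no"                # plaintext fully recovered
```

This is why every definition in this line of work conditions security on the plaintext having high min-entropy (or, here, on auxiliary inputs being hard to invert).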
Function-private identity-based encryption: Hiding the function in functional encryption
 Advances in Cryptology – CRYPTO ’13. Available as Cryptology ePrint Archive, Report 2013/283
, 2013
Abstract

Cited by 16 (3 self)
We put forward a new notion, function privacy, in identity-based encryption and, more generally, in functional encryption. Intuitively, our notion asks that decryption keys reveal essentially no information on their corresponding identities, beyond the absolute minimum necessary. This is motivated by the need for providing predicate privacy in public-key searchable encryption. Formalizing such a notion, however, is not straightforward as given a decryption key it is always possible to learn some information on its corresponding identity by testing whether it correctly decrypts ciphertexts that are encrypted for specific identities. In light of such an inherent difficulty, any meaningful notion of function privacy must be based on the minimal assumption that, from the adversary’s point of view, identities that correspond to its given decryption keys are sampled from somewhat unpredictable distributions. We show that this assumption is in fact sufficient for obtaining a strong and realistic notion of function privacy. Loosely speaking, our framework requires that a decryption key corresponding to an identity sampled from any sufficiently unpredictable distribution is indistinguishable from a decryption key corresponding to an independently and uniformly sampled identity. Within our framework we develop an approach for designing function-private identity-based encryption schemes, leading to constructions that are based on standard assumptions in bilinear groups (DBDH, DLIN) and lattices (LWE). In addition to function privacy, our schemes are also anonymous, and thus yield the first public-key searchable encryption schemes that are provably …
Poly-Many Hardcore Bits for Any One-Way Function
, 2014
Abstract

Cited by 11 (2 self)
We show how to extract an arbitrary polynomial number of simultaneously hardcore bits from any one-way function. In the case the one-way function is injective or has polynomially bounded preimage size, we assume the existence of indistinguishability obfuscation (iO). In the general case, we assume the existence of differing-input obfuscation (diO), but of a form weaker than full auxiliary-input diO. Our construction for injective one-way functions extends to extract hardcore bits on multiple, correlated inputs, yielding new DPKE schemes.
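For context on what is being scaled up here: the classical Goldreich-Levin result (not this paper's construction) gives one hardcore bit for any one-way function, namely the inner product of the input with a random string, mod 2. The snippet below merely evaluates that predicate on toy values, to make the object concrete:

```python
# Goldreich-Levin hardcore predicate: b = <x, r> mod 2 over bit vectors.
# Easy to compute given x; conjecturally hard to predict given only f(x), r.
import secrets

def gl_bit(x: int, r: int) -> int:
    # Inner product mod 2 = parity of the bitwise AND of x and r.
    return bin(x & r).count("1") % 2

x = secrets.randbits(128)   # secret input to some one-way function f
r = secrets.randbits(128)   # public randomness
b = gl_bit(x, r)
assert b in (0, 1)
```

The abstract's result replaces this single bit with polynomially many simultaneously hardcore bits, at the cost of assuming iO (or a weak form of diO).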
Security audits of multi-tier virtual infrastructures in public infrastructure clouds
 In: Proceedings of the 2010 ACM Workshop on Cloud Computing Security
, 2010
Abstract

Cited by 8 (4 self)
Cloud computing has gained remarkable popularity in recent years with a wide spectrum of consumers, ranging from small startups to governments. However, its benefits in terms of flexibility, scalability, and low upfront investments are shadowed by security challenges which inhibit its adoption. Managed through a web-services interface, users can configure highly flexible but complex cloud computing environments. Furthermore, misconfiguration of such cloud services poses a severe security risk that can lead to security incidents, e.g., erroneous exposure of services due to faulty network security configurations. In this article we present a novel approach to the security assessment of the end-user configuration of multi-tier architectures deployed on infrastructure clouds such as Amazon EC2. In order to perform this assessment for the currently deployed configuration, we automated the process of extracting the configuration using the Amazon API. In the assessment we focus on the reachability and vulnerability of services in the virtual infrastructure, and present a way for visualization and automated analysis based on reachability and attack graphs. We propose a query and policy language for the analysis which can be used to obtain insights into the configuration and to specify desired and undesired configurations. We have implemented the security assessment in a prototype and evaluated it for practical scenarios. Our approach effectively allows remediation of today’s security concerns through validation of configurations of complex cloud infrastructures.
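The reachability analysis described above can be sketched as a graph traversal over the extracted configuration: nodes are tiers or instances, edges are connections the firewall rules allow, and a policy query asks what an external attacker can reach. The node names and edges below are invented for illustration; the paper derives them from the Amazon EC2 API:

```python
# Toy reachability query over a multi-tier cloud configuration graph.
from collections import deque

# edges[u] = nodes directly reachable from u under the firewall rules
edges = {
    "internet": {"web"},   # port open to the world
    "web": {"app"},        # web tier may call the app tier
    "app": {"db"},         # app tier may call the database
    "db": set(),
}

def reachable(src: str) -> set[str]:
    # Breadth-first search: everything transitively reachable from src.
    seen, queue = {src}, deque([src])
    while queue:
        for v in edges[queue.popleft()]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen - {src}

# Policy: the database must not be reachable from the internet.
# The query flags the violation: db is exposed transitively via web -> app.
assert "db" in reachable("internet")
```

An attack-graph analysis layers vulnerability information on the same structure, so a policy language can express both desired and undesired configurations as queries like this one.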
Deterministic Public-Key Encryption for Adaptively Chosen Plaintext Distributions
, 2013
Abstract

Cited by 7 (1 self)
Bellare, Boldyreva, and O’Neill (CRYPTO ’07) initiated the study of deterministic public-key encryption as an alternative in scenarios where randomized encryption has inherent drawbacks. The resulting line of research has so far guaranteed security only for adversarially chosen plaintext distributions that are independent of the public key used by the scheme. In most scenarios, however, it is typically not realistic to assume that adversaries do not take the public key into account when attacking a scheme. We show that it is possible to guarantee meaningful security even for plaintext distributions that depend on the public key. We extend the previously proposed notions of security, allowing adversaries to adaptively choose plaintext distributions after seeing the public key, in an interactive manner. The only restrictions we make are that: (1) plaintext distributions are unpredictable (as is essential in deterministic public-key encryption), and (2) the number of plaintext distributions from which each adversary is allowed to adaptively choose is upper bounded by 2^p, where p can be any predetermined polynomial in the security parameter. For example, with p = 0 we capture plaintext distributions that are independent of the public key, and with p = O(s log s) …