Results 1–10 of 11
Do Strong Web Passwords Accomplish Anything?
Abstract

Cited by 21 (6 self)
We find that traditional password advice given to users is somewhat dated. Strong passwords do nothing to protect online users from password-stealing attacks such as phishing and keylogging, and yet they place considerable burden on users. Passwords that are too weak of course invite brute-force attacks. However, we find that relatively weak passwords, about 20 bits or so, are sufficient to make brute-force attacks on a single account unrealistic so long as a “three strikes”-type rule is in place. Above that minimum it appears that increasing password strength does little to address any real threat. If a larger credential space is needed it appears better to increase the strength of the userIDs rather than the passwords. For large institutions this is just as effective in deterring bulk guessing attacks and is a great deal better for users. For small institutions there appears little reason to require strong passwords for online accounts.
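The abstract's 20-bit threshold is simple arithmetic: under a lockout rule an online attacker gets only a handful of guesses per account. A minimal sketch of that calculation (function name and numbers are illustrative, not from the paper):

```python
def online_guess_success(bits: float, guesses_allowed: int) -> float:
    """Probability that an online attacker guesses a uniformly random
    password of the given entropy within the allowed number of attempts."""
    space = 2 ** bits  # size of the password space
    return min(1.0, guesses_allowed / space)

# ~20-bit password, lockout after 3 failed attempts ("three strikes"):
p = online_guess_success(20, 3)
print(f"per-lockout success probability: {p:.2e}")  # roughly 3 in a million
```

Even a modest increase in allowed guesses barely changes the picture at 20 bits, which matches the abstract's claim that extra password strength beyond this point buys little against online guessing.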
Achieving leakage resilience through dual system encryption
 In TCC
, 2011
Abstract

Cited by 12 (2 self)
In this work, we show that strong leakage resilience for cryptosystems with advanced functionalities can be obtained quite naturally within the methodology of dual system encryption, recently introduced by Waters. We demonstrate this concretely by providing fully secure IBE, HIBE, and ABE systems which are resilient to bounded leakage from each of many secret keys per user, as well as many master keys. This can be realized as resilience against continual leakage if we assume keys are periodically updated and no (or logarithmic) leakage is allowed during the update process. Our systems are obtained by applying a simple modification to previous dual system encryption constructions: essentially this provides a generic tool for making dual system encryption schemes leakage-resilient.
Provably Secure Higher-Order Masking of AES
 In CHES 2010, volume 6225 of LNCS
, 2010
Abstract

Cited by 12 (0 self)
Abstract. Implementations of cryptographic algorithms are vulnerable to Side Channel Analysis (SCA). To counteract it, masking schemes are usually involved, which randomize key-dependent data by the addition of one or several random values (the masks). When d-th order masking is involved (i.e. when d masks are used per key-dependent variable), the complexity of performing an SCA grows exponentially with the order d. The design of generic d-th order masking schemes taking the order d as security parameter is therefore of great interest for the physical security of cryptographic implementations. This paper presents the first generic d-th order masking scheme for AES with a provable security and a reasonable software implementation overhead. Our scheme is based on the hardware-oriented masking scheme published by Ishai et al. at Crypto 2003. Compared to this scheme, our solution can be efficiently implemented in software on any general-purpose processor. This result is of importance considering the lack of solutions for d ≥ 3.
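The core idea of d-th order Boolean masking is to split each key-dependent byte into d+1 XOR shares, so that any d of them are jointly uniform and independent of the secret. A minimal sketch of the share split and recombination (this illustrates only the masking step, not the paper's masked AES operations):

```python
import secrets

def mask(value: int, d: int) -> list[int]:
    """Split a byte into d+1 XOR shares (d-th order Boolean masking).
    Any d shares taken alone are uniformly random and carry no
    information about `value`."""
    masks = [secrets.randbelow(256) for _ in range(d)]
    masked = value
    for m in masks:
        masked ^= m
    return [masked] + masks

def unmask(shares: list[int]) -> int:
    """XOR all shares back together to recover the value."""
    out = 0
    for s in shares:
        out ^= s
    return out

shares = mask(0xA7, d=3)   # 4 shares for 3rd-order masking
assert unmask(shares) == 0xA7
```

The hard part, which the paper addresses, is computing on such shares through AES's non-linear S-box without ever recombining them; the split itself is the easy half shown here.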
Proofs of ownership in remote storage systems
 in Proceedings of the 18th ACM conference on Computer and communications security, ser. CCS ’11
Abstract

Cited by 9 (0 self)
Cloud storage systems are becoming increasingly popular. A promising technology that keeps their cost down is deduplication, which stores only a single copy of repeating data. Client-side deduplication attempts to identify deduplication opportunities already at the client and save the bandwidth of uploading copies of existing files to the server. In this work we identify attacks that exploit client-side deduplication, allowing an attacker to gain access to arbitrary-size files of other users based on very small hash signatures of these files. More specifically, an attacker who knows the hash signature of a file can convince the storage service that it owns that file; hence the server lets the attacker download the entire file. (In parallel to our work, a subset of these attacks were recently introduced in the wild with respect to the Dropbox file synchronization service.) To overcome such attacks, we introduce the notion of proofs-of-ownership (PoWs), which lets a client efficiently prove to a server that the client holds a file, rather than just some short information about it. We formalize the concept of proof-of-ownership under rigorous security definitions and rigorous efficiency requirements of petabyte-scale storage systems. We then present solutions based on Merkle trees and specific encodings, and analyze their security. We implemented one variant of the scheme. Our performance measurements indicate that the scheme incurs only a small overhead compared to naive client-side deduplication.
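The Merkle-tree mechanism the abstract mentions can be sketched generically: the server keeps only the tree root; to prove ownership the client must answer a challenge on a random leaf with that leaf's content plus its authentication path. A toy version (this is the standard Merkle construction, not the paper's specific encodings, and SHA-256 is my choice of hash):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _next_level(level: list[bytes]) -> list[bytes]:
    if len(level) % 2:               # duplicate last node on odd levels
        level = level + [level[-1]]
    return [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]

def merkle_root(blocks: list[bytes]) -> bytes:
    """Root over the file's blocks; this is all the server needs to keep."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def prove(blocks: list[bytes], idx: int) -> list[tuple[bytes, int]]:
    """Authentication path for block idx: (sibling hash, am-I-right-child)."""
    level = [h(b) for b in blocks]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[idx ^ 1], idx % 2))
        level = _next_level(level)
        idx //= 2
    return path

def verify(root: bytes, block: bytes, path: list[tuple[bytes, int]]) -> bool:
    """Server-side check: recompute the root from the challenged block."""
    node = h(block)
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root
```

An attacker who holds only a short hash of the file cannot produce the challenged blocks, which is exactly the gap a PoW closes relative to hash-based deduplication.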
How to Leak on Key Updates
Abstract

Cited by 8 (1 self)
In the continual memory leakage model, security against attackers who can repeatedly obtain leakage is achieved by periodically updating the secret key. This is an appealing model which captures a wide class of side-channel attacks, but all previous constructions in this model provide only a very minimal amount of leakage tolerance during secret key updates. Since key updates may happen frequently, improving security guarantees against attackers who obtain leakage during these updates is an important problem. In this work, we present the first cryptographic primitives which are secure against a super-logarithmic amount of leakage during secret key updates. We present signature and public-key encryption schemes in the standard model which can tolerate a constant fraction of the secret key to be leaked between updates, as well as a constant fraction of the secret key and update randomness to be leaked during updates. Our signature scheme also allows us to leak a constant fraction of the entire secret state during signing. Before this work, it was unknown how to tolerate super-logarithmic leakage during updates even in the random oracle model. We rely on subgroup decision assumptions in composite order bilinear groups.
How to protect yourself without perfect shredding
 LNCS
, 2008
Abstract

Cited by 4 (1 self)
Abstract. Erasing old data and keys is an important capability of honest parties in cryptographic protocols. It is useful in many settings, including proactive security in the presence of a mobile adversary, adaptive security in the presence of an adaptive adversary, forward security, and intrusion resilience. Some of these settings, such as achieving proactive security, are provably impossible without some form of erasures. Other settings, such as designing protocols that are secure against adaptive adversaries, are much simpler to achieve when erasures are allowed. Protocols for all these contexts typically assume the ability to perfectly erase information. Unfortunately, as amply demonstrated in the systems literature, perfect erasures are hard to implement in practice. We propose a model of imperfect or partial erasures where erasure instructions are only partially effective and may leave almost all of the data to be erased intact, thus giving the honest parties only a limited capability to dispose of old data. Nonetheless, we show how to design protocols for all of the above settings (including proactive security, adaptive security, forward security, and intrusion resilience) for which this weak form of erasures suffices. We show how to automatically modify protocols relying on perfect erasures into ones for which partial erasures suffice. Stated most generally, we provide a general compiler that transforms any protocol
On Forward-Secure Storage (Extended Abstract)
 In CRYPTO 2006, volume 4117 of LNCS
, 2006
Abstract

Cited by 4 (0 self)
We study a problem of secure data storage in the recently introduced Limited Communication Model. We propose a new cryptographic primitive that we call Forward-Secure Storage (FSS). This primitive is a special kind of encryption scheme, which produces huge (5 GB, say) ciphertexts, even from small plaintexts, and has the following non-standard security property. Suppose an adversary gets access to a ciphertext C = E(K, M) and is allowed to compute any function h of C, with the restriction that |h(C)| ≪ |C| (say, |h(C)| = 1 GB). We require that h(C) should give the adversary no information about M, even if he later learns K. A practical application of this concept is as follows. Suppose a ciphertext C is stored on a machine on which an adversary can install a virus. In many cases it is completely infeasible for the virus to retrieve 1 GB of data from the infected machine. So if the adversary (at some point later) learns K, then M remains secret. We provide a formal definition of the FSS, propose some FSS schemes, and show that FSS schemes can be composed sequentially in a secure way. We also show connections of the FSS to the theory of compressibility of NP-instances (recently developed by Harnik and Naor).
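One natural shape for such a scheme is C = (R, H(K, R) ⊕ M), where R is a huge random string: an adversary who cannot exfiltrate most of R cannot recompute the pad even after learning K. A toy sketch of that shape (the scheme shape and hash choice are my illustration, with R scaled down from gigabytes to bytes; real FSS security requires R to be huge and a carefully chosen H):

```python
import hashlib
import secrets

def encrypt(key: bytes, msg: bytes, r_len: int = 1024) -> tuple[bytes, bytes]:
    """Toy FSS-shaped encryption: ciphertext is (R, H(K || R) XOR M).
    In the real primitive R is enormous, so an adversary limited to
    |h(C)| << |C| bits of exfiltration cannot retain enough of R to
    decrypt once K later leaks. Message limited to one digest here."""
    assert len(msg) <= hashlib.sha256().digest_size
    r = secrets.token_bytes(r_len)
    pad = hashlib.sha256(key + r).digest()[: len(msg)]
    return r, bytes(a ^ b for a, b in zip(msg, pad))

def decrypt(key: bytes, r: bytes, ct: bytes) -> bytes:
    """Recompute the pad from K and the full R, then unmask."""
    pad = hashlib.sha256(key + r).digest()[: len(ct)]
    return bytes(a ^ b for a, b in zip(ct, pad))
```

The point of the sketch is only the dependence structure: decryption needs all of R, so any bounded function h(C) that drops most of R leaves M hidden even given K.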
Another Look at Security Definitions
, 2011
Abstract

Cited by 3 (2 self)
Abstract. We take a critical look at security models that are often used to give “provable security” guarantees. We pay particular attention to digital signatures, symmetric-key encryption, and leakage resilience. We find that there has been a surprising amount of uncertainty about what the “right” definitions might be. Even when definitions have an appealing logical elegance and nicely reflect certain notions of security, they fail to take into account many types of attacks and do not provide a comprehensive model of adversarial behavior.
On the Insecurity of Parallel Repetition for Leakage Resilience
Abstract

Cited by 2 (0 self)
A fundamental question in leakage-resilient cryptography is: can leakage resilience always be amplified by parallel repetition? It is natural to expect that if we have a leakage-resilient primitive tolerating ℓ bits of leakage, we can take n copies of it to form a system tolerating nℓ bits of leakage. In this paper, we show that this is not always true. We construct a public-key encryption system which is secure when at most ℓ bits are leaked, but if we take n copies of the system and encrypt a share of the message under each using an n-out-of-n secret-sharing scheme, leaking nℓ bits renders the system insecure. Our results hold either in composite order bilinear groups under a variant of the subgroup decision assumption or in prime order bilinear groups under the decisional linear assumption. We note that the n copies of our public-key systems share a common reference parameter.
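The n-out-of-n secret sharing used in this construction is the simple XOR scheme: all n shares are needed to reconstruct, and any n−1 shares are jointly uniform. A minimal sketch of that building block (the sharing scheme only, not the paper's encryption system):

```python
import secrets

def share(msg: bytes, n: int) -> list[bytes]:
    """n-out-of-n XOR secret sharing: the first n-1 shares are random,
    the last is msg XOR (all the others). Any n-1 shares reveal nothing."""
    shares = [secrets.token_bytes(len(msg)) for _ in range(n - 1)]
    last = bytes(msg)
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def reconstruct(shares: list[bytes]) -> bytes:
    """XOR all n shares together to recover the message."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out
```

In the counterexample, each share is encrypted under a separate copy of the scheme; the surprise is that nℓ bits of total leakage, spread adaptively across the copies, suffice to break the combined system.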
Parallel Repetition for Leakage Resilience Amplification Revisited
Abstract

Cited by 2 (0 self)
Abstract. If a cryptographic primitive remains secure even if ℓ bits about the secret key are leaked to the adversary, one would expect that at least one of n independent instantiations of the scheme remains secure given n·ℓ bits of leakage. This intuition has been proven true for schemes satisfying some special information-theoretic properties by Alwen et al. [Eurocrypt ’10]. On the negative side, Lewko and Waters [FOCS ’10] construct a CPA-secure public-key encryption scheme for which this intuition fails. The counterexample of Lewko and Waters leaves open the interesting possibility that for any scheme there exists a constant c > 0 such that n-fold repetition remains secure against c·n·ℓ bits of leakage. Furthermore, their counterexample requires the n copies of the encryption scheme to share a common reference parameter, leaving open the possibility that the intuition is true for all schemes without common setup. In this work we give a stronger counterexample ruling out these possibilities. We construct a signature scheme such that: 1. a single instantiation remains secure given ℓ = log(k) bits of leakage, where k is a security parameter; 2. any polynomial number of independent instantiations can be broken (in the strongest sense of key-recovery) given ℓ′ = poly(k) bits of leakage. Note that ℓ′ does not depend on the number of instances. The computational assumption underlying our counterexample is that non-interactive computationally sound proofs exist. Moreover, under a stronger (non-standard) assumption about such proofs, our counterexample does not require a common reference parameter. The underlying idea of our counterexample is rather generic and can be applied to other primitives like encryption schemes.