Results 1–10 of 184
Remote Timing Attacks are Practical
In Proceedings of the 12th USENIX Security Symposium, 2003
Abstract

Cited by 181 (4 self)
Timing attacks are usually used to attack weak computing devices such as smartcards. We show that timing attacks apply to general software systems. Specifically, we devise a timing attack against OpenSSL. Our experiments show that we can extract private keys from an OpenSSL-based web server running on a machine in the local network. Our results demonstrate that timing attacks against network servers are practical and therefore all security systems should defend against them.
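The abstract's core observation, that secret-dependent execution time leaks key material, can be illustrated with a deliberately insecure early-exit comparison. This is a hypothetical toy, not the paper's OpenSSL attack; `SECRET` and the per-byte delay are illustrative stand-ins:

```python
import time

SECRET = b"s3cr3t-key"  # hypothetical secret; stands in for a private key

def insecure_compare(guess: bytes, secret: bytes = SECRET) -> bool:
    """Early-exit comparison: runtime grows with the number of
    correct leading bytes -- the timing side channel."""
    if len(guess) != len(secret):
        return False
    for g, s in zip(guess, secret):
        if g != s:
            return False          # exits earlier for worse guesses
        # simulate per-byte work so the timing difference is measurable
        time.sleep(0.0005)
    return True

def time_guess(guess: bytes, reps: int = 5) -> float:
    """Median of several timings to suppress noise, as timing
    attacks do in practice."""
    samples = []
    for _ in range(reps):
        t0 = time.perf_counter()
        insecure_compare(guess)
        samples.append(time.perf_counter() - t0)
    return sorted(samples)[len(samples) // 2]

# A guess sharing a longer correct prefix takes measurably longer:
t_bad = time_guess(b"x" * len(SECRET))    # 0 correct leading bytes
t_good = time_guess(b"s3c" + b"x" * 7)    # 3 correct leading bytes
print(t_bad < t_good)                     # the attacker ranks prefixes by time
```

An attacker repeats this ranking byte by byte, recovering the secret one prefix at a time; the paper shows the same principle survives network noise.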
Lest we remember: Cold boot attacks on encryption keys
In USENIX Security Symposium, 2008
Abstract

Cited by 117 (3 self)
For the most recent version of this paper, answers to frequently asked questions, and videos of demonstration attacks, visit
Simultaneous hardcore bits and cryptography against memory attacks
In TCC, 2009
Abstract

Cited by 77 (8 self)
This paper considers two questions in cryptography. Cryptography Secure Against Memory Attacks. A particularly devastating side-channel attack against cryptosystems, termed the “memory attack”, was proposed recently. In this attack, a significant fraction of the bits of a secret key of a cryptographic algorithm can be measured by an adversary if the secret key is ever stored in a part of memory which can be accessed even after power has been turned off for a short amount of time. Such an attack has been shown to completely compromise the security of various cryptosystems in use, including the RSA cryptosystem and AES. We show that the public-key encryption scheme of Regev (STOC 2005), and the identity-based encryption scheme of Gentry, Peikert and Vaikuntanathan (STOC 2008) are remarkably robust against memory attacks where the adversary can measure a large fraction of the bits of the secret key, or more generally, can compute an arbitrary function of the secret key of bounded output length. This is done without increasing the size of the secret key, and without introducing any …
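The Regev scheme referenced above can be sketched in a few lines. The parameters below are toy-sized and insecure, chosen only so the noise analysis is easy to check; `N`, `M`, and `Q` are illustrative, not values from the paper:

```python
import random

# Toy LWE parameters (insecure; real deployments need much larger
# dimensions and a careful discrete Gaussian error distribution)
N, M, Q = 8, 20, 9973   # secret dimension, samples, modulus

def keygen():
    s = [random.randrange(Q) for _ in range(N)]                  # secret key
    A = [[random.randrange(Q) for _ in range(N)] for _ in range(M)]
    e = [random.choice([-1, 0, 1]) for _ in range(M)]            # small noise
    b = [(sum(ai * si for ai, si in zip(row, s)) + ei) % Q
         for row, ei in zip(A, e)]
    return s, (A, b)                  # public key is (A, b = A*s + e mod Q)

def encrypt(pk, bit):
    A, b = pk
    subset = [i for i in range(M) if random.random() < 0.5]      # random rows
    c1 = [sum(A[i][j] for i in subset) % Q for j in range(N)]
    c2 = (sum(b[i] for i in subset) + bit * (Q // 2)) % Q
    return c1, c2

def decrypt(s, ct):
    c1, c2 = ct
    d = (c2 - sum(ci * si for ci, si in zip(c1, s))) % Q
    # d is (small noise) for bit 0, or roughly Q/2 for bit 1;
    # |noise| <= M = 20, far below the Q//4 decision threshold
    return 0 if min(d, Q - d) < Q // 4 else 1

s, pk = keygen()
for bit in (0, 1):
    assert decrypt(s, encrypt(pk, bit)) == bit
```

The robustness result of the paper concerns what happens when an adversary learns a bounded-length function of `s`; the encryption mechanics themselves are as above.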
URSA: Ubiquitous and Robust Access Control for Mobile Ad Hoc Networks
IEEE/ACM Transactions on Networking, 2004
Abstract

Cited by 58 (1 self)
Restricting network access of routing and packet forwarding to well-behaving nodes, and denying access from misbehaving nodes, are critical for the proper functioning of a mobile ad hoc network where cooperation among all networking nodes is usually assumed. However, the lack of a network infrastructure, the dynamics of the network topology and node membership, and the potential attacks from inside the network by malicious and/or non-cooperative selfish nodes make conventional network access control mechanisms not applicable. We present URSA, a ubiquitous and robust access control solution for mobile ad hoc networks. URSA implements ticket certification services through multiple-node consensus and fully localized instantiation, and uses tickets to identify and grant network access to well-behaving nodes. In URSA, no single node monopolizes the access decision or is completely trusted; instead, multiple nodes jointly monitor a local node and certify/revoke its ticket. Furthermore, URSA ticket certification services are fully localized into each node's neighborhood to ensure service ubiquity and resilience. Through analysis, simulations and experiments, we show that our design effectively enforces access control in the highly dynamic, mobile ad hoc network.
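URSA's multiple-node consensus rests on threshold cryptography: no single node holds the certification key. The underlying k-of-n idea can be sketched with Shamir secret sharing, a generic building block rather than URSA's actual ticket scheme; the modulus `P` and parameters are illustrative:

```python
import random

P = 2**61 - 1  # prime field modulus (illustrative size)

def share(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it.
    Shares are points on a random degree k-1 polynomial f with f(0)=secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret; fewer
    than k shares reveal nothing about it."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, k=3, n=5)          # 3-of-5 threshold
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[1:4]) == 123456789
```

In a threshold certification service, each neighbor holds one share and contributes a partial signature; k honest neighbors suffice to issue or revoke a ticket, which is how URSA avoids any single point of trust.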
The shortest vector in a lattice is hard to approximate to within some constant
In Proc. 39th Symposium on Foundations of Computer Science, 1998
Abstract

Cited by 51 (4 self)
We show that approximating the shortest vector problem (in any ℓp norm) to within any constant factor less than 2^{1/p} (the p-th root of 2) is hard for NP under reverse unfaithful random reductions with inverse polynomial error probability. In particular, approximating the shortest vector problem is not in RP (random polynomial time), unless NP equals RP. We also prove a proper NP-hardness result (i.e., hardness under deterministic many-one reductions) under a reasonable number-theoretic conjecture on the distribution of square-free smooth numbers. As part of our proof, we give an alternative construction of Ajtai’s constructive variant of Sauer’s lemma that greatly simplifies Ajtai’s original proof. Key words: NP-hardness, shortest vector problem, point lattices, geometry of numbers, sphere packing.
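To make the object of this hardness result concrete: a shortest nonzero lattice vector can be found by brute force in tiny dimension. This is an illustrative baseline only; the `bound` cutoff is a simplifying assumption, and the search is exponential in the dimension, which is exactly why hardness of even approximate SVP matters:

```python
from itertools import product

def shortest_vector(basis, bound=5):
    """Exhaustively search integer combinations of the basis vectors
    with coefficients in [-bound, bound], returning the shortest
    nonzero lattice vector found and its squared norm."""
    dim = len(basis[0])
    best, best_norm2 = None, None
    for coeffs in product(range(-bound, bound + 1), repeat=len(basis)):
        if all(c == 0 for c in coeffs):
            continue                      # skip the zero vector
        v = [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(dim)]
        norm2 = sum(x * x for x in v)
        if best_norm2 is None or norm2 < best_norm2:
            best, best_norm2 = v, norm2
    return best, best_norm2

# A 2-D lattice whose shortest vector is not a basis vector:
v, n2 = shortest_vector([[5, 3], [4, 3]])
print(v, n2)  # a vector of squared norm 1, shorter than either basis vector
```

Here the shortest vector arises as the difference of the two basis vectors; in high dimension no polynomial-time algorithm for this search is known, and the paper shows even approximating its length is hard.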
Noisy Polynomial Interpolation and Noisy Chinese Remaindering
2000
Abstract

Cited by 41 (2 self)
The noisy polynomial interpolation problem is a new intractability assumption introduced last year in oblivious polynomial evaluation. It also appeared independently in password identification schemes, due to its connection with secret sharing schemes based on Lagrange’s polynomial interpolation. This paper presents new algorithms to solve the noisy polynomial interpolation problem. In particular, we prove a reduction from noisy polynomial interpolation to the lattice shortest vector problem, when the parameters satisfy a certain condition that we make explicit. Standard lattice reduction techniques appear to solve many instances of the problem. It follows that noisy polynomial interpolation is much easier than expected. We therefore suggest simple modifications to several cryptographic schemes recently proposed, in order to change the intractability assumption. We also discuss analogous methods for the related noisy Chinese remaindering problem, arising from the well-known analogy between polynomials and integers.
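The problem itself is easy to state: at each evaluation point several candidate values are given, exactly one of which lies on a hidden low-degree polynomial. A brute-force solver (exponential in the number of points, unlike the paper's lattice reduction) makes the setup concrete; the field size, hidden polynomial, and decoy offsets below are all illustrative:

```python
from itertools import product

P = 97  # small prime field, chosen for illustration

def lagrange_eval(points, x):
    """Evaluate at x the unique polynomial of degree < len(points)
    through the given points, working mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def recover(xs, candidate_sets, k):
    """Noisy polynomial interpolation by brute force: try every choice
    of one candidate per point; keep choices where the degree < k
    polynomial through the first k points matches all the rest."""
    sols = []
    for choice in product(*candidate_sets):
        pts = list(zip(xs, choice))
        if all(lagrange_eval(pts[:k], x) == y for x, y in pts[k:]):
            sols.append(pts)
    return sols

f = lambda x: (3 * x * x + 2 * x + 7) % P      # hidden degree-2 polynomial
xs = [1, 2, 3, 4, 5]
offsets = [5, 17, 23, 41, 8]                   # arbitrary decoy offsets
noisy = [sorted({f(x), (f(x) + o) % P}) for x, o in zip(xs, offsets)]
sols = recover(xs, noisy, k=3)
true_pts = [(x, f(x)) for x in xs]
assert true_pts in sols
```

With two candidates per point this search costs 2^5 tries; the paper's contribution is showing that lattice reduction solves realistic parameter ranges far faster than this naive enumeration.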
Lattice attacks on digital signature schemes
Designs, Codes and Cryptography, 1999
Abstract

Cited by 37 (7 self)
Keywords: digital signatures, lattices. © Copyright Hewlett-Packard Company 1999. We describe a lattice attack on the Digital Signature Algorithm (DSA) when used to sign many messages, m_i, under the assumption that a proportion of the bits of each of the associated ephemeral keys, y_i, can be recovered by alternative techniques.
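The reason partial nonce knowledge is so damaging is that the DSA signing equation is linear in the private key once the nonce is known. The degenerate case, full knowledge of a single nonce, already yields the key; the paper's lattice attack extends this to partial bits across many signatures. A toy sketch, with an insecurely small illustrative group (p, q, g):

```python
import random

# Toy DSA group: q divides p - 1, and g has order q modulo p.
# These sizes are for illustration only.
p, q, g = 23, 11, 2

def sign(x, h):
    """Standard DSA signing; returns (r, s, k). The ephemeral nonce k
    is returned here only so the attack below can use it."""
    while True:
        k = random.randrange(1, q)
        r = pow(g, k, p) % q
        if r == 0:
            continue
        s = pow(k, -1, q) * (h + x * r) % q
        if s != 0:
            return r, s, k

# If an attacker ever learns the nonce k of one signature, the private
# key falls out of the signing equation:
#   s = k^{-1} (h + x r)  =>  x = r^{-1} (s k - h)   (mod q)
x = random.randrange(1, q)      # private key
h = random.randrange(1, q)      # message hash
r, s, k = sign(x, h)
recovered = pow(r, -1, q) * (s * k - h) % q
assert recovered == x
```

With only some high-order bits of each k_i known, each signature instead yields an approximate linear relation on x, and collecting many of them turns key recovery into a closest-vector problem, which is where the lattice techniques enter.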
Floating-Point LLL Revisited
2005
Abstract

Cited by 37 (6 self)
The Lenstra–Lenstra–Lovász lattice basis reduction algorithm (LLL or L³) is a very popular tool in public-key cryptanalysis and in many other fields. Given an integer d-dimensional lattice basis with vectors of norm less than B in an n-dimensional space, L³ outputs a so-called L³-reduced basis in polynomial time O(d⁵ n log³ B), using arithmetic operations on integers of bit-length O(d log B). This worst-case complexity is problematic for lattices arising in cryptanalysis, where d and/or log B are often large. As a result, the original L³ is almost never used in practice. Instead, one applies floating-point variants of L³, where the long-integer arithmetic required by Gram–Schmidt orthogonalisation (central in L³) is replaced by floating-point arithmetic. Unfortunately, this is known to be unstable in the worst case: the usual floating-point L³ is not even guaranteed to terminate, and the output basis may not be L³-reduced at all. In this article, we introduce the L² algorithm, a new and natural floating-point variant of L³ which provably outputs L³-reduced bases in polynomial time O(d⁴ n (d + log B) log B). This is the first L³ algorithm whose running time (without fast integer arithmetic) provably grows only quadratically with respect to log B, like the well-known Euclidean and Gaussian algorithms, which it generalizes.
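For contrast with the floating-point variants discussed above, the exact-arithmetic approach can be sketched as a textbook LLL over Python rationals. This is a simplified illustration, not the L² algorithm; it naively recomputes the Gram–Schmidt data at each step, exactly the kind of exact-arithmetic cost that floating-point variants try to avoid:

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL with exact rational Gram-Schmidt: numerically safe
    but slow, since coefficient sizes (and recomputation) blow up."""
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    def gram_schmidt():
        bstar = []
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = list(b[i])
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [vi - mu[i][j] * wi for vi, wi in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):        # size-reduce b_k against b_j
            _, mu = gram_schmidt()
            t = round(mu[k][j])
            if t:
                b[k] = [x - t * y for x, y in zip(b[k], b[j])]
        bstar, mu = gram_schmidt()
        lhs = dot(bstar[k], bstar[k])
        rhs = (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1])
        if lhs >= rhs:                        # Lovász condition holds
            k += 1
        else:                                 # swap and step back
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]

reduced = lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
print(reduced[0])  # a short lattice vector
```

With delta = 3/4, LLL guarantees the first output vector is within a factor 2^{(n-1)/2} of the shortest lattice vector; the `Fraction` arithmetic keeps everything exact, at the price of the growth in operand size that motivates the floating-point variants.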
LLL on the Average
2006
Abstract

Cited by 34 (10 self)
Despite their popularity, lattice reduction algorithms remain mysterious in many ways. It has been widely reported that they behave much more nicely than what was expected from the worst-case proved bounds, both in terms of the running time and the output quality. In this article, we investigate this puzzling statement by trying to model the average case of lattice reduction algorithms, starting with the celebrated Lenstra–Lenstra–Lovász algorithm (L³). We discuss what is meant by lattice reduction on the average, and we present extensive experiments on the average-case behavior of L³, in order to give a clearer picture of the differences and similarities between the average and worst cases. Our work is intended to clarify the practical behavior of L³ and to raise theoretical questions on its average behavior.