Results 1–10 of 19
Public-Key Cryptosystems Resilient to Key Leakage
"... Most of the work in the analysis of cryptographic schemes is concentrated in abstract adversarial models that do not capture sidechannel attacks. Such attacks exploit various forms of unintended information leakage, which is inherent to almost all physical implementations. Inspired by recent sidec ..."
Abstract

Cited by 51 (6 self)
 Add to MetaCart
Most of the work in the analysis of cryptographic schemes is concentrated in abstract adversarial models that do not capture side-channel attacks. Such attacks exploit various forms of unintended information leakage, which is inherent to almost all physical implementations. Inspired by recent side-channel attacks, especially the “cold boot attacks” of Halderman et al. (USENIX Security ’08), Akavia, Goldwasser and Vaikuntanathan (TCC ’09) formalized a realistic framework for modeling the security of encryption schemes against a wide class of side-channel attacks in which adversarially chosen functions of the secret key are leaked. In the setting of public-key encryption, Akavia et al. showed that Regev’s lattice-based scheme (STOC ’05) is resilient to any leakage of …
Using LLL-Reduction for Solving RSA and Factorization Problems: A Survey
, 2007
"... 25 years ago, Lenstra, Lenstra and Lovasz presented their celebrated LLL lattice reduction algorithm. Among the various applications of the LLL algorithm is a method due to Coppersmith for finding small roots of polynomial equations. We give a survey of the applications of this root finding method ..."
Abstract

Cited by 16 (0 self)
 Add to MetaCart
25 years ago, Lenstra, Lenstra and Lovász presented their celebrated LLL lattice reduction algorithm. Among the various applications of the LLL algorithm is a method due to Coppersmith for finding small roots of polynomial equations. We give a survey of the applications of this root-finding method to the problem of inverting the RSA function and the factorization problem. As we will see, most of the results are of a dual nature: they can either be interpreted as cryptanalytic results or as hardness/security results.
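To make the notion of lattice reduction concrete, here is a minimal sketch of the two-dimensional Lagrange-Gauss algorithm, the ancestor and special case of LLL. This is illustrative only; the survey's applications rely on full LLL in higher dimensions:

```python
def lagrange_reduce(u, v):
    # Lagrange-Gauss reduction: the 2-dimensional special case of LLL.
    # Returns a reduced basis (shortest possible vectors) of the lattice
    # spanned by the integer vectors u and v.
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    if dot(u, u) > dot(v, v):
        u, v = v, u  # keep u as the shorter vector
    while True:
        # subtract the nearest-integer multiple of u from v
        m = round(dot(u, v) / dot(u, u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(v, v) >= dot(u, u):
            return u, v
        u, v = v, u  # v became shorter; swap and repeat
```

For example, `lagrange_reduce((1, 0), (100, 1))` recovers the orthogonal basis `((1, 0), (0, 1))` of Z².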
Reconstructing RSA Private Keys from Random Key Bits
 In CRYPTO
, 2009
"... We show that an RSA private key with small public exponent can be efficiently recovered given a 0.27 fraction of its bits at random. An important application of this work is to the “cold boot ” attacks of Halderman et al. We make new observations about the structure of RSA keys that allow our algori ..."
Abstract

Cited by 13 (1 self)
 Add to MetaCart
We show that an RSA private key with small public exponent can be efficiently recovered given a 0.27 fraction of its bits at random. An important application of this work is to the “cold boot” attacks of Halderman et al. We make new observations about the structure of RSA keys that allow our algorithm to make use of the redundant information in the typical storage format of an RSA private key. Our algorithm itself is elementary and does not make use of the lattice techniques used in other RSA key reconstruction problems. We give an analysis of the running time behavior of our algorithm that matches the threshold phenomenon observed in our experiments.
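The branch-and-prune idea behind this result can be sketched at toy scale. The code below reconstructs p and q bit by bit from randomly known bits using only the relation N = pq; the actual Heninger-Shacham algorithm additionally exploits the redundant components d, dp, dq of the stored key, which is what pushes the threshold down to a 0.27 fraction:

```python
def reconstruct(N, p_known, q_known, nbits):
    # Toy branch-and-prune key reconstruction in the spirit of
    # Heninger-Shacham. p_known / q_known map bit positions to known
    # bit values; unknown positions are branched on and pruned.
    candidates = [(1, 1)]  # both prime factors are odd
    for i in range(1, nbits):
        mod = 1 << (i + 1)
        survivors = []
        for p, q in candidates:
            for pb in ([p_known[i]] if i in p_known else (0, 1)):
                for qb in ([q_known[i]] if i in q_known else (0, 1)):
                    pc, qc = p | (pb << i), q | (qb << i)
                    # prune branches inconsistent with N = p*q mod 2^(i+1)
                    if (pc * qc) % mod == N % mod:
                        survivors.append((pc, qc))
        candidates = survivors
    for p, q in candidates:
        if p > 1 and p * q == N:
            return p, q
    return None
```

For instance, with N = 3233 = 53 × 61 and a few known bits of each factor, `reconstruct(3233, {2: 1, 5: 1}, {3: 1}, 6)` recovers (53, 61).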
Reducing lattice bases to find small-height values of univariate polynomials
 in [13] (2007). URL: http://cr.yp.to/papers.html#smallheight. Citations in this document: §A
, 2004
"... Abstract. This paper generalizes several previous results on finding divisors in residue classes (Lenstra, Konyagin, Pomerance, Coppersmith, HowgraveGraham, Nagaraj), finding divisors in intervals (Rivest, Shamir, Coppersmith, HowgraveGraham), finding modular roots (Hastad, Vallée, Girault, Toffin ..."
Abstract

Cited by 8 (4 self)
 Add to MetaCart
Abstract. This paper generalizes several previous results on finding divisors in residue classes (Lenstra, Konyagin, Pomerance, Coppersmith, Howgrave-Graham, Nagaraj), finding divisors in intervals (Rivest, Shamir, Coppersmith, Howgrave-Graham), finding modular roots (Håstad, Vallée, Girault, Toffin, Coppersmith, Howgrave-Graham), and finding high-power divisors (Boneh, Durfee, Howgrave-Graham), and finding codeword errors beyond half distance (Sudan, Guruswami, Goldreich, Ron, Boneh), into a unified algorithm that, given f and g, finds all rational numbers r such that f(r) and g(r) both have small height.
On The Oracle Complexity Of Factoring Integers
 COMPUTATIONAL COMPLEXITY
, 1996
"... The problem of factoring integers in polynomial time with the help of an (infinitely powerful) oracle who answers arbitrary questions with yes or no is considered. The goal is to minimize the number of oracle questions. Let N be a given composite nbit integer to be factored, where n = dlog 2 ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
The problem of factoring integers in polynomial time with the help of an (infinitely powerful) oracle who answers arbitrary questions with yes or no is considered. The goal is to minimize the number of oracle questions. Let N be a given composite n-bit integer to be factored, where n = ⌈log2 N⌉. The trivial method of asking for the bits of the smallest prime factor of N requires n/2 questions in the worst case. A non-trivial algorithm of Rivest and Shamir requires only n/3 questions for the special case where N is the product of two n/2-bit primes. In this paper, a polynomial-time oracle factoring algorithm for general integers is presented which, for any ε > 0, asks at most εn oracle questions for sufficiently large N, thus solving an open problem posed by Rivest and Shamir. Based on a plausible conjecture related to Lenstra's conjecture on the running time of the elliptic curve factoring algorithm, it is shown that the algorithm fails with probability at most N …
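The trivial n/2-question strategy mentioned in the abstract is easy to simulate. In the toy sketch below, a locally computed factorization stands in for the infinitely powerful oracle, and each yes/no question reveals one bit of the smallest prime factor:

```python
def smallest_prime_factor(n):
    # Stands in for the oracle's private knowledge (trial division; a
    # real oracle is assumed to be all-powerful, not efficient).
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def factor_with_bit_oracle(N):
    # Trivial strategy from the abstract: one yes/no question per bit of
    # the smallest prime factor p. Since p <= sqrt(N), about n/2
    # questions suffice, where n is the bit length of N.
    p = smallest_prime_factor(N)
    recovered, questions = 0, 0
    for i in range((N.bit_length() + 1) // 2):
        questions += 1                 # "is bit i of p equal to 1?"
        if (p >> i) & 1:
            recovered |= 1 << i
    return recovered, N // recovered, questions
```

On N = 3233 (a 12-bit number) this asks 6 questions and returns the factors 53 and 61; the cited paper's contribution is reducing this question count to εn for any ε > 0.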
Implicit Factoring with Shared Most Significant and Middle Bits
"... The corresponding paper version of this extended abstract is accepted for PKC2010 [3] The problem of factoring integers given additional information about their factors has been studied since 1985. In [6], Rivest and Shamir showed that N = pq of bitsize n and with balanced factors (log2(p) ≈ log2 ..."
Abstract

Cited by 4 (1 self)
 Add to MetaCart
The corresponding paper version of this extended abstract was accepted to PKC 2010 [3]. The problem of factoring integers given additional information about their factors has been studied since 1985. In [6], Rivest and Shamir showed that N = pq of bit-size n and with balanced factors (log2(p) ≈ log2(q) ≈ n/2) can be factored in polynomial time as soon as we have access to an oracle that returns the n/3 most significant bits (MSBs) of p. Beyond its theoretical interest, the motivation behind this is mostly of a cryptographic nature. In fact, during an attack on an RSA-encrypted exchange, the cryptanalyst may have access to additional information beyond the RSA public parameters (e, N), which may be gained, for instance, through side-channel attacks revealing some of the bits of the secret factors. Besides, some variations of the RSA cryptosystem purposely leak some of the secret bits (for instance, [8]). In 1996, Rivest and Shamir's results were improved by Coppersmith [2], applying lattice-based methods to the problem of finding small integer roots of bivariate integer polynomials (the now so-called Coppersmith's method). It requires only half of the most significant bits of p (that is, n/4) to be known to the cryptanalyst. In PKC 2009, May and Ritzenhofen [5] significantly reduced the power of the oracle. Given an RSA …
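Why known MSBs of p help can be illustrated at toy scale: once the high bits are fixed, only the low bits remain to be found. In the sketch below a brute-force loop stands in for Coppersmith's lattice method, which performs this search in polynomial time once half of p's bits (i.e., n/4 bits of N) are known:

```python
def factor_given_msbs(N, p_high, unknown_bits):
    # Toy stand-in for Coppersmith's method: the known most significant
    # bits of p are p_high; exhaustively try every choice of the
    # remaining unknown_bits low-order bits.
    base = p_high << unknown_bits
    for low in range(1 << unknown_bits):
        cand = base | low
        if cand > 1 and N % cand == 0:
            return cand, N // cand
    return None
```

For example, with N = 3233 = 53 × 61 and the top three bits of p = 53 known, `factor_given_msbs(3233, 53 >> 3, 3)` searches only 8 candidates and recovers (53, 61).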
Correcting Errors in RSA Private Keys
"... Abstract. Let pk = (N, e) be an RSA public key with corresponding secret key sk = (p, q, d, dp, dq, q −1 p). Assume that we obtain partial errorfree information of sk, e.g., assume that we obtain half of the most significant bits of p. Then there are wellknown algorithms to recover the full secret ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
Abstract. Let pk = (N, e) be an RSA public key with corresponding secret key sk = (p, q, d, dp, dq, q⁻¹ mod p). Assume that we obtain partial error-free information about sk, e.g., assume that we obtain half of the most significant bits of p. Then there are well-known algorithms to recover the full secret key. As opposed to these algorithms, which correct erasures of the key sk, we present for the first time a heuristic probabilistic algorithm that is capable of correcting errors in sk, provided that e is small. That is, on input of a full but error-prone secret key ˜sk, we reconstruct the original sk by correcting the faults. More precisely, consider an error rate of δ ∈ [0, 1/2), where we flip each bit in sk with probability δ, resulting in an erroneous key ˜sk. Our Las Vegas type algorithm recovers sk from ˜sk in expected time polynomial in log N with success probability close to 1, provided that δ < 0.237. We also obtain a polynomial-time Las Vegas factorization algorithm for recovering the factorization (p, q) from an erroneous version with error rate δ < 0.084. Keywords. RSA, error correction, statistical cryptanalysis
A Coding-Theoretic Approach to Recovering Noisy RSA Keys
"... Abstract. Inspired by cold boot attacks, Heninger and Shacham (Crypto 2009) initiated the study of the problem of how to recover an RSA private key from a noisy version of that key. They gave an algorithm for the case where some bits of the private key are known with certainty. Their ideas were exte ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
Abstract. Inspired by cold boot attacks, Heninger and Shacham (Crypto 2009) initiated the study of the problem of how to recover an RSA private key from a noisy version of that key. They gave an algorithm for the case where some bits of the private key are known with certainty. Their ideas were extended by Henecka, May and Meurer (Crypto 2010) to produce an algorithm that works when all the key bits are subject to error. In this paper, we bring a coding-theoretic viewpoint to bear on the problem of noisy RSA key recovery. This viewpoint allows us to cast the previous work as part of a more general framework. In turn, this enables us to explain why the previous algorithms do not solve the motivating cold boot problem, and to design a new algorithm that does (and more). In addition, we are able to use concepts and tools from coding theory – channel capacity, list decoding algorithms, and random coding techniques – to derive bounds on the performance of the previous algorithms and our new algorithm.
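The channel-capacity language can be made concrete with the binary symmetric channel, the model in which each key bit flips independently with probability δ. A small sketch of the relevant quantities (illustrative only; the paper's actual bounds, including the asymmetric cold boot channel, are more refined):

```python
import math

def binary_entropy(p):
    # H(p) = -p log2 p - (1-p) log2 (1-p), the binary entropy function
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(delta):
    # Capacity of the binary symmetric channel with crossover
    # probability delta: the maximum information per transmitted bit
    # that can be recovered reliably.
    return 1.0 - binary_entropy(delta)
```

In this view, the redundantly stored key components (p, q, d, dp, dq) act like repeated channel uses, and capacity arguments bound the error rates at which any recovery algorithm, including those of Heninger-Shacham and Henecka-May-Meurer, can possibly succeed.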
Speeding Up Bipartite Modular Multiplication
"... Abstract. A large set of moduli, for which the speed of bipartite modular multiplication considerably increases, is proposed in this work. By considering state of the art attacks on publickey cryptosystems, we show that the proposed set is safe to use in practice for both elliptic curve cryptograph ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Abstract. A large set of moduli, for which the speed of bipartite modular multiplication considerably increases, is proposed in this work. By considering state-of-the-art attacks on public-key cryptosystems, we show that the proposed set is safe to use in practice for both elliptic curve cryptography and RSA cryptosystems. We propose a hardware architecture for the modular multiplier that is based on our method. The results show that, concerning the speed, our proposed architecture outperforms the modular multiplier based on standard bipartite modular multiplication. Additionally, our design consumes less area compared to the standard solutions.
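For orientation: bipartite modular multiplication splits the multiplier and reduces the lower half Montgomery-style while the upper half is reduced classically, so the two halves can run in parallel. Below is a minimal sketch of the Montgomery half only (plain Python arithmetic, not the paper's hardware architecture):

```python
def montgomery_reduce(T, M, R, M_neg_inv):
    # REDC: computes T * R^{-1} mod M without dividing by M, given
    # R a power of two with R > M, gcd(M, R) = 1, and the precomputed
    # constant M_neg_inv = -M^{-1} mod R.
    m = ((T % R) * M_neg_inv) % R
    t = (T + m * M) // R          # exact: T + m*M ≡ 0 (mod R)
    return t - M if t >= M else t

def montgomery_mul(a, b, M, R, M_neg_inv):
    # Multiply two residues held in Montgomery form (x' = x*R mod M);
    # the result is again in Montgomery form.
    return montgomery_reduce(a * b, M, R, M_neg_inv)
```

The appeal in hardware is that REDC replaces division by M with shifts and multiplications modulo a power of two; the bipartite scheme applies this to only half of the operand while a Barrett-style unit handles the other half.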
Factoring Unbalanced Moduli with Known Bits
"... Abstract. Let n = pq> q 3 be an rsa modulus. This note describes a lllbased method allowing to factor n given 2 log2 q contiguous bits of p, irrespective to their position. A second method is presented, which needs fewer bits but whose length depends on the position of the known bit pattern. Finall ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Abstract. Let n = pq > q³ be an RSA modulus. This note describes an LLL-based method for factoring n given 2 log2 q contiguous bits of p, irrespective of their position. A second method is presented, which needs fewer bits but whose length depends on the position of the known bit pattern. Finally, we introduce a somewhat surprising ad hoc method where two different known bit chunks, totalling (3/2) log2 q bits, suffice to factor n.