Results 1–10 of 18
Reconstructing RSA private keys from random key bits
In CRYPTO, 2009
Cited by 12 (1 self)

Abstract:
We show that an RSA private key with small public exponent can be efficiently recovered given a 0.27 fraction of its bits at random. An important application of this work is to the “cold boot” attacks of Halderman et al. We make new observations about the structure of RSA keys that allow our algorithm to make use of the redundant information in the typical storage format of an RSA private key. Our algorithm itself is elementary and does not make use of the lattice techniques used in other RSA key reconstruction problems. We give an analysis of the running time behavior of our algorithm that matches the threshold phenomenon observed in our experiments.
Faster Algorithms for Approximate Common Divisors: Breaking Fully-Homomorphic-Encryption Challenges over the Integers
In EUROCRYPT 2012
Cited by 10 (0 self)

Abstract:
At EUROCRYPT ’10, van Dijk, Gentry, Halevi and Vaikuntanathan presented simple fully-homomorphic encryption (FHE) schemes based on the hardness of approximate integer common divisors problems, which were introduced in 2001 by Howgrave-Graham. There are two versions of these problems: the partial version (PACD) and the general version (GACD). The seemingly easier problem PACD was recently used by Coron, Mandal, Naccache and Tibouchi at CRYPTO ’11 to build a more efficient variant of the FHE scheme by van Dijk et al. We present a new PACD algorithm whose running time is essentially the “square root” of that of exhaustive search, which was the best attack in practice. This allows us to experimentally break the FHE challenges proposed by Coron et al. Our PACD algorithm directly gives rise to a new GACD algorithm, which is exponentially faster than exhaustive search: namely, the running time is essentially the 3/4th root of that of exhaustive search. Interestingly, our main technique can also be applied to other settings, such as noisy factoring, fault attacks on CRT-RSA signatures, and attacking low-exponent RSA encryption.
Parallel Shortest Lattice Vector Enumeration on Graphics Cards
2010
Cited by 6 (2 self)

Abstract:
In this paper we present an algorithm for parallel exhaustive search for short vectors in lattices. This algorithm can be applied to a wide range of parallel computing systems. To illustrate the algorithm, it was implemented on graphics cards using CUDA, a programming framework for NVIDIA graphics cards. We gain large speedups compared to previous serial CPU implementations. Our implementation is almost 5 times faster in high lattice dimensions. Exhaustive search is one of the main building blocks for lattice basis reduction in cryptanalysis. Our work results in an advance in practical lattice reduction.
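As a point of reference for what is being parallelised, exhaustive search in its most naive form simply walks all small integer coefficient vectors of the basis. The Python sketch below is such a toy: the fixed coefficient bound R is an assumption made for illustration, whereas real enumeration (Kannan–Fincke–Pohst, as used in this line of work) prunes the search tree using Gram–Schmidt lengths instead of a fixed box.

```python
from itertools import product

# Naive exhaustive search for a shortest nonzero lattice vector:
# try every coefficient vector with entries in [-R, R].
def shortest_vector(basis, R=5):
    best, best_norm = None, None
    dim = len(basis)
    for coeffs in product(range(-R, R + 1), repeat=dim):
        if all(c == 0 for c in coeffs):
            continue  # skip the zero vector
        # form the integer combination sum_i coeffs[i] * basis[i]
        v = [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(dim)]
        n = sum(x * x for x in v)  # squared Euclidean norm
        if best_norm is None or n < best_norm:
            best, best_norm = v, n
    return best, best_norm
```

The cost of this box search grows as (2R+1)^dim, which is why pruning and massive parallelism (as on graphics cards) are both essential at cryptographic dimensions.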
Floating-Point LLL: Theoretical and Practical Aspects
Cited by 6 (2 self)

Abstract:
The textbook LLL algorithm can be sped up considerably by replacing the underlying rational arithmetic used for the Gram–Schmidt orthogonalisation by floating-point approximations. We review how this modification has been and is currently implemented, both in theory and in practice. Using floating-point approximations seems to be natural for LLL even from the theoretical point of view: it is the key to reaching a bit-complexity that is quadratic with respect to the bit-length of the input vectors' entries, without fast integer multiplication. The latter bit-complexity strengthens the connection between LLL and Euclid’s gcd algorithm. On the practical side, the LLL implementer may weaken the provable variants in order to further improve their efficiency: we emphasise these techniques. We also consider the practical behaviour of the floating-point LLL algorithms, in particular their output distribution, their running time and their numerical behaviour. After 25 years of implementation, many questions motivated by the practical side of LLL remain open.
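For concreteness, here is a minimal textbook LLL in Python with floating-point Gram–Schmidt. It naively recomputes the whole orthogonalisation after every basis update and manages no error bounds, so it is only a sketch trustworthy on small, well-conditioned inputs; the provable floating-point variants surveyed here are precisely about doing this carefully.

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(b):
    # Floating-point Gram-Schmidt: returns b* and the mu coefficients.
    n = len(b)
    mu = [[0.0] * n for _ in range(n)]
    bstar = []
    for i in range(n):
        v = [float(x) for x in b[i]]
        for j in range(i):
            mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
            v = [v[k] - mu[i][j] * bstar[j][k] for k in range(len(v))]
        bstar.append(v)
    return bstar, mu

def lll(b, delta=0.75):
    b = [row[:] for row in b]  # work on an integer copy of the basis
    n = len(b)
    bstar, mu = gram_schmidt(b)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):       # size-reduce b_k against b_j
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt(b)  # naive: full recomputation
        # Lovasz condition on the pair (b_{k-1}, b_k)
        lhs = dot(bstar[k], bstar[k])
        rhs = (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1])
        if lhs >= rhs:
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            bstar, mu = gram_schmidt(b)
            k = max(k - 1, 1)
    return b
```

On the toy basis [[1, 1, 1], [-1, 0, 2], [3, 5, 6]] this returns a reduced basis whose first vector has squared norm at most 2; production code (e.g. an L² implementation) instead updates the floating-point GSO incrementally with explicit precision management.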
Accelerating lattice reduction with FPGAs
In Proceedings of the First International Conference on Progress in Cryptology: Cryptology and Information Security in Latin America, 2010
Cited by 4 (2 self)

Abstract:
We describe an FPGA accelerator for the Kannan–Fincke–Pohst enumeration algorithm (KFP) solving the Shortest Lattice Vector Problem (SVP). This is the first FPGA implementation of KFP specifically targeting cryptographically relevant dimensions. In order to optimize this implementation, we theoretically and experimentally study several facets of KFP, including its efficient parallelization and its underlying arithmetic. Our FPGA accelerator can be used both for solving standalone instances of SVP (within a hybrid CPU–FPGA compound) and for the myriads of smaller-dimensional SVP instances arising in a BKZ-type algorithm. For devices of comparable costs, our FPGA implementation is faster than a multi-core CPU implementation by a factor of around 2.12.
Further Results on Implicit Factoring in Polynomial Time
Cited by 2 (0 self)

Abstract:
In PKC 2009, May and Ritzenhofen presented interesting problems related to factoring large integers with some implicit hints. One of the problems is as follows. Consider N1 = p1q1 and N2 = p2q2, where p1, p2, q1, q2 are large primes. The primes p1, p2 are of the same bit-size, with the constraint that a certain number of Least Significant Bits (LSBs) of p1, p2 are the same. Further, the primes q1, q2 are of the same bit-size without any constraint. May and Ritzenhofen proposed a strategy to factorize both N1, N2 in poly(log N) time (N is an integer of the same bit-size as N1, N2) with the implicit information that p1, p2 share a certain number of LSBs. We explore the same problem with a different lattice-based strategy. In a general framework, our method works when implicit information is available related to Least Significant as well as Most Significant Bits (MSBs). Given q1, q2 ≈ N^α, we show that one can factor N1, N2 simultaneously in poly(log N) time (under some assumption related to Gröbner Basis) when p1, p2 share a certain number of MSBs and/or LSBs. We also study the case when p1, p2 share some bits in the middle. Our strategy presents new and encouraging results in this direction. Moreover, some of the observations by May and Ritzenhofen are improved when we apply our ideas to the LSB case. Keywords: Implicit Information, Prime Factorization.
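The LSB case described above reduces to finding a short vector in a two-dimensional lattice, which Lagrange–Gauss reduction handles directly. The Python sketch below uses made-up toy parameters (t = 12 shared LSBs, p1 = 12289, p2 = 40961, q1 = 17, q2 = 23) small enough to check by hand, and illustrates only the original May–Ritzenhofen LSB lattice, not the Gröbner-basis-assisted MSB and middle-bit variants.

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def gauss_reduce(u, v):
    # Lagrange-Gauss reduction of a 2-dim lattice basis;
    # returns (shortest vector, second basis vector).
    while True:
        if dot(u, u) > dot(v, v):
            u, v = v, u
        m = round(dot(u, v) / dot(u, u))
        if m == 0:
            return u, v
        v = (v[0] - m * u[0], v[1] - m * u[1])

# Toy instance: p1 = 12289 and p2 = 40961 share their t = 12 low bits
# (both are 1 mod 2^12), with small cofactors q1 = 17, q2 = 23.
t = 12
N1 = 12289 * 17   # = 208913
N2 = 40961 * 23   # = 942103

# (q1, q2) satisfies q1*N2 - q2*N1 = q1*q2*(p2 - p1) = 0 mod 2^t,
# so it lies in the lattice spanned by (1, N2*N1^-1 mod 2^t) and (0, 2^t).
r = N2 * pow(N1, -1, 1 << t) % (1 << t)
u, _ = gauss_reduce((1, r), (0, 1 << t))

q1, q2 = abs(u[0]), abs(u[1])   # the shortest vector reveals the cofactors
p1, p2 = N1 // q1, N2 // q2
```

When q1 and q2 are small enough relative to 2^t, (q1, q2) is provably the shortest lattice vector (any independent shorter pair would contradict the lattice determinant 2^t), so the reduction recovers both factorizations at once.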
A Coding-Theoretic Approach to Recovering Noisy RSA Keys
Cited by 2 (0 self)

Abstract:
Inspired by cold boot attacks, Heninger and Shacham (Crypto 2009) initiated the study of the problem of how to recover an RSA private key from a noisy version of that key. They gave an algorithm for the case where some bits of the private key are known with certainty. Their ideas were extended by Henecka, May and Meurer (Crypto 2010) to produce an algorithm that works when all the key bits are subject to error. In this paper, we bring a coding-theoretic viewpoint to bear on the problem of noisy RSA key recovery. This viewpoint allows us to cast the previous work as part of a more general framework. In turn, this enables us to explain why the previous algorithms do not solve the motivating cold boot problem, and to design a new algorithm that does (and more). In addition, we are able to use concepts and tools from coding theory – channel capacity, list decoding algorithms, and random coding techniques – to derive bounds on the performance of the previous algorithms and our new algorithm.
Probabilistic analysis of LLL-reduced bases
In Proc. WEWoRC 2009, 2010
Cited by 2 (0 self)

Abstract:
Lattice reduction algorithms behave much better in practice than their theoretical analysis predicts, with respect to output quality and runtime. In this paper we present a probabilistic analysis that proves an average-case bound for the length of the first basis vector of an LLL-reduced basis which reflects LLL experiments much better.
Faster interleaved modular multiplication based on Barrett and Montgomery reduction methods
IEEE Transactions on Computers, 1715–1721, 2010. [Online] Available: http://dx.doi.org/10.1109/TC.2010.93
Cited by 1 (0 self)

Abstract:
This paper proposes two improved interleaved modular multiplication algorithms based on Barrett and Montgomery modular reduction. The algorithms are simple and especially suitable for hardware implementations. Four large sets of moduli for which the proposed methods apply are given and analyzed from a security point of view. By considering state-of-the-art attacks on public-key cryptosystems, we show that the proposed sets are safe to use, in practice, for both elliptic curve cryptography and RSA cryptosystems. We propose a hardware architecture for the modular multiplier that is based on our methods. The results show that, concerning speed, our proposed architecture outperforms the modular multiplier based on standard modular multiplication by more than 50 percent. Additionally, our design consumes less area compared to the standard solutions. Index Terms—Modular multiplication, Barrett reduction, Montgomery reduction, public-key cryptography.
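As a baseline for what such interleaved algorithms refine, classical bit-serial Montgomery multiplication interleaves each partial product with a reduction step. The Python sketch below is that textbook baseline only; it does not use the special moduli sets or the improved Barrett/Montgomery variants the paper proposes.

```python
def mont_mul(a, b, m, nbits):
    # Interleaved bit-serial Montgomery multiplication for odd m:
    # returns a * b * 2^(-nbits) mod m, assuming a < 2^nbits and b < m.
    r = 0
    for i in range(nbits):
        r += ((a >> i) & 1) * b   # fold in the i-th partial product
        if r & 1:                 # make r even so it can be halved exactly
            r += m
        r >>= 1                   # exact division by 2 (a factor 2^-1 mod m)
    if r >= m:                    # r < 2m holds throughout, so one
        r -= m                    # conditional subtraction suffices
    return r
```

Because the result carries a factor 2^(-nbits), real implementations keep operands in Montgomery form so the factor cancels across a chain of multiplications; the interleaving is what makes the loop body cheap in hardware, since each iteration is just adds and a shift.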
Speeding Up Bipartite Modular Multiplication
Cited by 1 (0 self)

Abstract:
A large set of moduli, for which the speed of bipartite modular multiplication considerably increases, is proposed in this work. By considering state-of-the-art attacks on public-key cryptosystems, we show that the proposed set is safe to use in practice for both elliptic curve cryptography and RSA cryptosystems. We propose a hardware architecture for the modular multiplier that is based on our method. The results show that, concerning speed, our proposed architecture outperforms the modular multiplier based on standard bipartite modular multiplication. Additionally, our design consumes less area compared to the standard solutions.