Results 1–10 of 17
Distributed Matrix-Free Solution of Large Sparse Linear Systems over Finite Fields
Algorithmica, 1996
Cited by 26 (6 self)
We describe a coarse-grain parallel software system for the homogeneous solution of linear systems. Our solutions are symbolic, i.e., exact rather than numerical approximations. Our implementation can be run on a network cluster of SPARC-20 computers and on an SP2 multiprocessor. Detailed timings are presented for experiments with systems that arise in RSA challenge integer factoring efforts. For example, we can solve a 252,222 × 252,222 system with about 11.04 million nonzero entries over the Galois field with 2 elements using 4 processors of an SP2 multiprocessor, in about 26.5 hours CPU time. 1 Introduction The problem of solving large, unstructured, sparse linear systems using exact arithmetic arises in symbolic linear algebra and computational number theory. For example, the sieve-based factoring of large integers can lead to systems containing over 569,000 equations and variables and over 26.5 million nonzero entries that need to be solved over the Galois field of two...
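A matrix-free solver of this kind touches the matrix only through repeated sparse matrix-vector products over GF(2). The following is a minimal illustrative sketch of that core operation; the row layout and names are assumptions, not the paper's implementation:

```python
def spmv_gf2(rows, v):
    """Multiply a sparse GF(2) matrix by a dense bit-vector.

    rows: list of lists; rows[i] holds the column indices of the
          nonzero entries in row i (only positions matter over GF(2)).
    v:    list of 0/1 values.
    Returns the product as a list of 0/1 values.
    """
    result = []
    for cols in rows:
        bit = 0
        for j in cols:
            bit ^= v[j]          # addition over GF(2) is XOR
        result.append(bit)
    return result

# Tiny 3x3 example.
rows = [[0, 2], [1], [0, 1, 2]]
v = [1, 1, 0]
print(spmv_gf2(rows, v))  # [1, 1, 0]
```

Iterative methods such as Wiedemann or Lanczos build their Krylov sequences from nothing but this product, which is why only the nonzero positions of the matrix ever need to be stored.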
NFS with Four Large Primes: An Explosive Experiment
1995
Cited by 23 (2 self)
The purpose of this paper is to report the unexpected results that we obtained while experimenting with the multi-large-prime variation of the general number field sieve integer factoring algorithm (NFS, cf. [8]). For traditional factoring algorithms that make use of at most two large primes, the completion time can be predicted quite accurately by extrapolating an almost quartic and entirely 'smooth' function that counts the number of useful combinations among the large primes [1]. For NFS such extrapolations seem to be impossible: the number of useful combinations suddenly 'explodes' in an as yet unpredictable way that we have not yet been able to understand completely. The consequence of this explosion is that NFS is substantially faster than expected, which implies that factoring is somewhat easier than we thought.
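In the two-large-prime case mentioned above, the useful combinations are the independent cycles in the graph whose edges are the partial relations, and a standard way to count them is with a union-find structure: for r relations over p distinct large primes forming c connected components, the cycle count is r - p + c. A hedged sketch under that interpretation (names are illustrative):

```python
def count_cycles(relations):
    """relations: list of (a, b) pairs of large primes; each partial
    relation is treated as an edge of a graph on the primes.
    Returns r - p + c, the number of independent cycles."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for a, b in relations:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb                 # union the two components
    primes = set(parent)
    comps = len({find(x) for x in primes})
    return len(relations) - len(primes) + comps

# A triangle of relations yields one cycle; the lone edge (7, 11) none.
rels = [(2, 3), (3, 5), (5, 2), (7, 11)]
print(count_cycles(rels))  # 1
```

With more than two large primes per relation the edges become hyperedges, which is one way to see why the simple cycle-counting extrapolation breaks down.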
Strategies in Filtering in the Number Field Sieve
In preparation, 2000
Cited by 13 (2 self)
A critical step when factoring large integers by the Number Field Sieve [8] consists of finding dependencies in a huge sparse matrix over the field F2, using a Block Lanczos algorithm. Both the size and the weight (the number of nonzero elements) of the matrix critically affect the running time of Block Lanczos. In order to keep size and weight small, the relations coming out of the siever do not flow directly into the matrix, but are filtered first in order to reduce the matrix size. This paper discusses several possible filter strategies and their use in the recent record factorizations of RSA-140, R211 and RSA-155. 2000 Mathematics Subject Classification: Primary 11Y05. Secondary 11A51. 1999 ACM Computing Classification System: F.2.1. Keywords and Phrases: Number Field Sieve, factoring, filtering, Structured Gaussian elimination, Block Lanczos, RSA. Note: Work carried out under project MAS2.2 "Computational number theory and data security". This report will appear in the proceed...
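One of the simplest filter passes removes "singletons": a prime that occurs in exactly one relation can never cancel over F2, so that relation is useless, and its removal may expose new singletons. A minimal sketch of this pass (the data layout and names are assumptions, not the paper's code):

```python
from collections import Counter

def prune_singletons(relations):
    """relations: list of sets of prime ids appearing (with odd
    exponent) in each relation. Repeatedly drop any relation that
    contains a prime occurring in only one surviving relation."""
    relations = list(relations)
    while True:
        counts = Counter(p for rel in relations for p in rel)
        keep = [rel for rel in relations
                if all(counts[p] > 1 for p in rel)]
        if len(keep) == len(relations):   # fixed point reached
            return keep
        relations = keep

rels = [{2, 3}, {3, 5}, {5, 2}, {7}]      # {7} is a singleton
pruned = prune_singletons(rels)
print(len(pruned))  # 3
```

Real filter stages go further (duplicate removal, clique removal, merges that trade size against weight), but they all share this shape: cheap structural passes that shrink the matrix before Block Lanczos ever sees it.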
Cohomology of congruence subgroups of SL4(Z)
 J. Number Theory
Cited by 10 (6 self)
Abstract. In two previous papers we computed cohomology groups H^5(Γ0(N); C) for a range of levels N, where Γ0(N) is the congruence subgroup of SL4(Z) consisting of all matrices with bottom row congruent to (0, 0, 0, ∗) mod N. In this note we update this earlier work by carrying it out for prime levels up to N = 211. This requires new methods in sparse matrix reduction, which are the main focus of the paper. Our computations involve matrices with up to 20 million nonzero entries. We also make two conjectures concerning the contributions to H^5(Γ0(N); C) for N prime coming from Eisenstein series and Siegel modular forms.
Achieving a log(n) Speed Up for Boolean Matrix Operations and Calculating the Complexity of the Dense Linear Algebra Step of Algebraic Stream Cipher Attacks and of Integer Factorization Methods
IACR ePrint, 2006
Cited by 3 (2 self)
The purpose of this paper is to calculate the running time of dense boolean matrix operations, as used in stream cipher cryptanalysis and integer factorization. Several variations of Gaussian Elimination, Strassen's Algorithm and the Method of Four Russians are analyzed. In particular, we demonstrate that Strassen's Algorithm is actually slower than the Four Russians algorithm for matrices of the sizes encountered in these problems. To accomplish this, we introduce a new model for tabulating the running time, tracking matrix reads and writes rather than field operations, and retaining the coefficients rather than dropping them. Furthermore, we introduce a "Modified Method of Four Russians", an algorithm known heretofore only orally and not previously published. This algorithm is log n times faster than Gaussian Elimination for dense boolean matrices. Finally we list rough estimates for the running time of several recent stream cipher cryptanalysis attacks.
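The Four Russians idea is to process k columns at a time, replacing up to k row-additions by a single lookup in a table of precomputed XOR combinations, which is the source of the log n speedup when k is about log n. A minimal GF(2) matrix-multiplication sketch with rows stored as Python integers (illustrative, not the paper's implementation):

```python
def m4rm(A, B, n, k=4):
    """C = A * B over GF(2) for n x n matrices whose rows are ints
    (bit j of A[i] is the entry in row i, column j). Columns of A are
    handled k at a time; each k-bit chunk of a row of A indexes a
    precomputed table of XOR-combinations of the matching rows of B."""
    C = [0] * n
    for base in range(0, n, k):
        width = min(k, n - base)
        # table[t] = XOR of the rows B[base..base+width-1] selected
        # by the set bits of t; built incrementally from table[t^low].
        table = [0] * (1 << width)
        for t in range(1, 1 << width):
            low = t & -t                       # lowest set bit of t
            table[t] = table[t ^ low] ^ B[base + low.bit_length() - 1]
        mask = (1 << width) - 1
        for i in range(n):
            C[i] ^= table[(A[i] >> base) & mask]
    return C
```

Each table costs 2^k XORs to build but then serves all n rows, so with k near log2(n) the n^3 bit operations of the schoolbook method drop by roughly a log n factor, matching the speedup claimed in the title.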
Integer Factoring
2000
Cited by 2 (0 self)
Using simple examples and informal discussions this article surveys the key ideas and major advances of the last quarter century in integer factorization.
The Magic Words Are Squeamish Ossifrage (Extended Abstract)
Cited by 2 (0 self)
We describe the computation which resulted in the title of this paper. Furthermore, we give an analysis of the data collected during this computation. From these data, we derive the important observation that in the final stages, the progress of the double large prime variation of the quadratic sieve integer factoring algorithm can be approximated more effectively by a quartic function of the time spent than by the more familiar quadratic function. We also present, as an update to [15], some of our experiences with the management of a large computation distributed over the Internet. Based on this experience, we give some realistic estimates of the current readily available computational power of the Internet. We conclude that commonly used 512-bit RSA moduli are vulnerable to any organization prepared to spend a few million dollars and to wait a few months.
Accelerating Iterative SpMV for the Discrete Logarithm Problem using GPUs
2013
Cited by 2 (0 self)
In the context of cryptanalysis, computing discrete logarithms in large cyclic groups using index-calculus-based methods, such as the number field sieve or the function field sieve, requires solving large sparse systems of linear equations modulo the group order. Most of the fast algorithms used to solve such systems (e.g., the conjugate gradient or the Lanczos and Wiedemann algorithms) iterate a product of the corresponding sparse matrix with a vector (SpMV). This central operation can be accelerated on GPUs using specific computing models and addressing patterns, which increase the arithmetic intensity while reducing irregular memory accesses. In this work, we investigate the implementation of SpMV kernels on NVIDIA GPUs, for several representations of the sparse matrix in memory. We explore the use of Residue Number System (RNS) arithmetic to accelerate modular operations. We target linear systems arising when attacking the discrete logarithm problem on groups of size 160 to 320 bits, which are relevant for current cryptanalytic computations. The proposed SpMV implementation contributed to solving the discrete logarithm problem in GF(2^619) and GF(2^809) using the FFS algorithm.
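The iterated kernel is an ordinary SpMV with the accumulation reduced modulo the group order. A scalar CPU reference version over a CSR layout (illustrative only; the paper's GPU kernels and RNS arithmetic are considerably more involved):

```python
def spmv_mod(indptr, indices, data, v, q):
    """y = A*v mod q for a sparse matrix A in CSR form:
    indptr[i]..indptr[i+1] delimits row i's entries, whose column
    indices and values are in indices[] and data[]."""
    n = len(indptr) - 1
    y = [0] * n
    for i in range(n):
        acc = 0
        for k in range(indptr[i], indptr[i + 1]):
            acc += data[k] * v[indices[k]]
        y[i] = acc % q          # one reduction per row suffices here
    return y

# 2x2 example: A = [[1, 2], [3, 0]], v = [4, 5], modulus q = 7.
print(spmv_mod([0, 2, 3], [0, 1, 0], [1, 2, 3], [4, 5], 7))  # [0, 5]
```

On a GPU the same computation is distributed across threads per row or per block of nonzeros, and the modular reductions are what RNS representations aim to cheapen.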
Parallel Solution of Sparse Linear Systems Defined over GF(p)
Cited by 1 (0 self)
Introduction The security of modern public-key cryptography is usually based on the presumed hardness of problems such as factoring integers or computing discrete logarithms. The Number Field Sieve [19] (NFS) and Function Field Sieve [1] (FFS) offer two examples of algorithms that can attack these problems. Such algorithms are generally specified in two phases. The first phase, sometimes called the sieving step, aims to collect many relations that represent small items of information about the problem one is trying to solve. This phase is easy to parallelise since one can generate the relations independently. It is therefore attractive for distributed, Internet-based collaborative computation [26]. The second phase of processing, sometimes called the matrix step, aims to collect the relations and combine them into a single linear system which, when solved, allows one to efficiently compute answers to the original problem. Efficient implementation of the matrix step is challenging since the li...
Computation of discrete logarithms in F_{2^607}
 In Advances in Cryptology (AsiaCrypt 2001), Springer LNCS 2248
Cited by 1 (0 self)
Abstract. We describe in this article how we have been able to extend the record for computations of discrete logarithms in characteristic 2 from the previous record over F_{2^503} to a newer mark of F_{2^607}, using Coppersmith's algorithm. This has been made possible by several practical improvements to the algorithm. Although the computations have been carried out on fairly standard hardware, our opinion is that we are nearing the current limits of the manageable sizes for this algorithm, and that going substantially further will require deeper improvements to the method.