Results 1-10 of 60
A New Efficient Algorithm for Computing Gröbner Bases (F4)
In: ISSAC '02: Proceedings of the 2002 International Symposium on Symbolic and Algebraic Computation, 2002
Cited by 366 (57 self)
Abstract:
This paper introduces a new efficient algorithm for computing Gröbner bases. To avoid as much intermediate computation as possible, the algorithm computes successive truncated Gröbner bases and replaces the classical polynomial reduction found in the Buchberger algorithm by the simultaneous reduction of several polynomials. This powerful reduction mechanism is achieved by means of a symbolic precomputation and by extensive use of sparse linear algebra methods. Current linear algebra techniques used in computer algebra are reviewed together with other methods coming from the numerical field. Some previously intractable problems (Cyclic 9) are presented, as well as an empirical comparison of a first implementation of this algorithm with other well-known programs. This comparison pays careful attention to methodology issues. All the benchmarks and CPU times used in this paper are frequently updated and available on a Web page. Even though the new algorithm does not improve the worst-case complexity, it is several times faster than previous implementations, both for integer and for modular computations.
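The simultaneous reduction described above is, at its core, row reduction of a matrix whose columns are monomials. The sketch below is a toy of that linear-algebra step only; it is not F4 itself (no S-pair selection, no generation of monomial multiples), and the dict-of-exponent-tuples representation and the field GF(101) are illustrative choices.

```python
P = 101  # illustrative small prime field GF(101)

def grlex_key(m):
    # graded lexicographic order: total degree first, then exponents
    return (sum(m), m)

def reduce_simultaneously(polys):
    """polys: list of dicts {exponent_tuple: coeff}; returns row-reduced polys."""
    monomials = sorted({m for f in polys for m in f}, key=grlex_key, reverse=True)
    rows = [[f.get(m, 0) % P for m in monomials] for f in polys]
    pivot_row = 0
    for j in range(len(monomials)):                    # eliminate column by column
        piv = next((i for i in range(pivot_row, len(rows)) if rows[i][j]), None)
        if piv is None:
            continue
        rows[pivot_row], rows[piv] = rows[piv], rows[pivot_row]
        inv = pow(rows[pivot_row][j], P - 2, P)        # inverse via Fermat
        rows[pivot_row] = [c * inv % P for c in rows[pivot_row]]
        for i in range(len(rows)):
            if i != pivot_row and rows[i][j]:
                c = rows[i][j]
                rows[i] = [(a - c * b) % P for a, b in zip(rows[i], rows[pivot_row])]
        pivot_row += 1
    return [{monomials[j]: c for j, c in enumerate(r) if c} for r in rows if any(r)]
```

Feeding in x^2 + y and x^2 + x, one elimination pass over both rows produces the new polynomial x - y (as {(1,0): 1, (0,1): 100} mod 101), where Buchberger-style reduction would handle one polynomial at a time.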
Factoring Multivariate Polynomials via Partial Differential Equations
In: Math. Comput., 2000
Cited by 60 (9 self)
Abstract:
A new method is presented for factorization of bivariate polynomials over any field of characteristic zero or of relatively large characteristic. It is based on a simple partial differential equation that gives rise to a system of linear equations. As in Berlekamp's and Niederreiter's algorithms for factoring univariate polynomials, the dimension of the solution space of the linear system equals the number of absolutely irreducible factors of the polynomial to be factored, and any basis for the solution space gives a complete factorization by computing gcds and by factoring univariate polynomials over the ground field. The new method finds absolute and rational factorizations simultaneously and is easy to implement for finite fields, local fields, number fields, and the complex number field. The theory of the new method allows an effective Hilbert irreducibility theorem, and thus an efficient reduction of polynomials from multivariate to bivariate.
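The final "gcds and univariate factoring" step mentioned above leans on derivative/gcd computations. As a self-contained illustration of that building block only (this is standard squarefree reduction over Q, not the paper's PDE construction), with polynomials as coefficient lists, lowest degree first:

```python
from fractions import Fraction

def poly_deriv(f):
    return [Fraction(i) * c for i, c in enumerate(f)][1:]

def poly_divmod(a, b):
    a = [Fraction(c) for c in a]
    q = [Fraction(0)] * max(1, len(a) - len(b) + 1)
    while len(a) >= len(b) and any(a):
        if a[-1] == 0:
            a.pop()
            continue
        shift = len(a) - len(b)
        c = a[-1] / b[-1]
        q[shift] = c
        for i, bc in enumerate(b):
            a[shift + i] -= c * bc
        a.pop()                       # leading term is now zero by construction
    return q, a

def poly_gcd(a, b):
    while any(b):
        _, r = poly_divmod(a, b)
        while r and r[-1] == 0:
            r.pop()
        a, b = b, r
    return [c / a[-1] for c in a]     # make the gcd monic

def squarefree_part(f):
    f = [Fraction(c) for c in f]
    g = poly_gcd(f, poly_deriv(f))    # g carries the repeated factors
    q, _ = poly_divmod(f, g)
    return q
```

For f = (x - 1)^2 (x + 2) = x^3 - 3x + 2, the gcd with f' is x - 1 and the squarefree part comes out as x^2 + x - 2 = (x - 1)(x + 2).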
Parallel Algorithms for Integer Factorisation
Cited by 44 (17 self)
Abstract:
The problem of finding the prime factors of large composite numbers has always been of mathematical interest. With the advent of public-key cryptosystems it is also of practical importance, because the security of some of these cryptosystems, such as the Rivest-Shamir-Adleman (RSA) system, depends on the difficulty of factoring the public keys. In recent years the best known integer factorisation algorithms have improved greatly, to the point where it is now easy to factor a 60-decimal-digit number, and possible to factor numbers larger than 120 decimal digits, given the availability of enough computing power. We describe several algorithms, including the elliptic curve method (ECM) and the multiple-polynomial quadratic sieve (MPQS) algorithm, and discuss their parallel implementation. It turns out that some of the algorithms are very well suited to parallel implementation. Doubling the degree of parallelism (i.e. the amount of hardware devoted to the problem) roughly increases the size of a number which can be factored in a fixed time by 3 decimal digits. Some recent computational results are mentioned, for example the complete factorisation of the 617-decimal-digit Fermat number F11 = 2^(2^11) + 1, which was accomplished using ECM.
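One of the surveyed methods, Lenstra's ECM, is easy to sketch: do elliptic-curve arithmetic modulo the composite n, and a failed modular inversion betrays a factor through its gcd with n. The bound B1, curve count, and naive primality test below are toy choices, not tuned parameters.

```python
import math
import random

class FactorFound(Exception):
    def __init__(self, d):
        self.d = d

def inv_mod(a, n):
    d = math.gcd(a, n)
    if d != 1:
        raise FactorFound(d)          # a failed inversion reveals a factor of n
    return pow(a, -1, n)

def ec_add(Pt, Qt, a, n):
    if Pt is None: return Qt          # None encodes the point at infinity
    if Qt is None: return Pt
    (x1, y1), (x2, y2) = Pt, Qt
    if x1 == x2 and (y1 + y2) % n == 0:
        return None
    if Pt == Qt:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1 % n, n) % n
    else:
        lam = (y2 - y1) * inv_mod((x2 - x1) % n, n) % n
    x3 = (lam * lam - x1 - x2) % n
    return (x3, (lam * (x1 - x3) - y1) % n)

def ec_mul(k, Pt, a, n):
    R = None
    while k:
        if k & 1:
            R = ec_add(R, Pt, a, n)
        Pt = ec_add(Pt, Pt, a, n)
        k >>= 1
    return R

def ecm_stage1(n, B1=100, curves=60, seed=2):
    rng = random.Random(seed)
    for _ in range(curves):
        a, x, y = (rng.randrange(n) for _ in range(3))   # curve y^2 = x^3 + a x + b,
        Pt = (x, y)                                      # with b implied by the point
        try:
            for q in range(2, B1 + 1):
                if all(q % r for r in range(2, q)):      # toy primality test
                    Pt = ec_mul(q ** max(1, int(math.log(B1, q))), Pt, a, n)
                    if Pt is None:
                        break
        except FactorFound as f:
            if 1 < f.d < n:
                return f.d
    return None
```

For example, ecm_stage1(91) recovers 7 or 13. Real implementations use Montgomery curves, projective coordinates, and a stage 2, and parallelise trivially by giving each processor its own curves, which is the suitability the paper discusses.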
The Security of Hidden Field Equations (HFE)
In: The Cryptographer’s Track at RSA Conference 2001, volume 2020 of Lecture Notes in Computer Science, 2001
Cited by 31 (2 self)
Abstract:
We consider the basic version of the asymmetric cryptosystem HFE from Eurocrypt '96. We propose a notion of non-trivial equations as an attempt to account for a large class of attacks on one-way functions. We found equations that give experimental evidence that basic HFE can be broken in expected polynomial time for any constant degree d. This has been independently proven by Shamir and Kipnis [Crypto '99]. We designed and implemented a series of new advanced attacks that are much more efficient than the Shamir-Kipnis attack. They are practical for HFE degree d ≤ 24 and realistic up to d = 128. The 80-bit, $500 Patarin first challenge on HFE can be broken in about 2^62 operations. Our attack is subexponential and requires n^(3/2 log d) computations. The original Shamir-Kipnis attack took at least n^(log^2 d). We show how to improve the Shamir-Kipnis attack by using a better method of solving the involved algebraic problem MinRank; it then runs in n^(3 log d + O(1)). All attacks fail for modified versions of HFE: HFE- (Asiacrypt '98), HFEv (Eurocrypt '99), Quartz (RSA '2000), and even Flash (RSA '2000).
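The three complexity claims can be compared numerically. Assuming the logarithms are base 2 (the abstract does not state the base) and dropping the O(1) term, the exponents for the largest "realistic" degree d = 128 order as the abstract claims:

```python
import math

# Exponents of the three attack costs quoted above (base-2 logs assumed):
#   new attack:             n^(3/2 * log d)
#   original Shamir-Kipnis: at least n^(log^2 d)
#   improved Shamir-Kipnis: n^(3 * log d + O(1)), O(1) term dropped here
def exponents(d):
    l = math.log2(d)
    return 1.5 * l, l * l, 3.0 * l

new, orig_sk, improved_sk = exponents(128)   # d = 128: log2 d = 7
assert new < improved_sk < orig_sk           # 10.5 < 21 < 49
```

So for d = 128 the MinRank improvement already cuts the original exponent by more than half, and the new attack halves it again.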
NFS with Four Large Primes: An Explosive Experiment
1995
Cited by 28 (5 self)
Abstract:
The purpose of this paper is to report the unexpected results that we obtained while experimenting with the multi-large-prime variation of the general number field sieve integer factoring algorithm (NFS, cf. [8]). For traditional factoring algorithms that make use of at most two large primes, the completion time can quite accurately be predicted by extrapolating an almost quartic and entirely 'smooth' function that counts the number of useful combinations among the large primes [1]. For NFS such extrapolations seem to be impossible: the number of useful combinations suddenly 'explodes' in an as yet unpredictable way that we have not been able to understand completely. The consequence of this explosion is that NFS is substantially faster than expected, which implies that factoring is somewhat easier than we thought.
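The "useful combinations" being counted are, in the standard two-large-prime model, independent cycles in a graph whose vertices are large primes and whose edges are partial relations (with a special vertex 1 for relations having a single large prime); each independent cycle multiplies into one full relation. A sketch of that count via union-find, under this graph model rather than the paper's exact bookkeeping:

```python
def count_cycles(relations):
    """relations: pairs of large primes; use 1 as the second member for a
    relation with a single large prime. Returns the number of independent
    cycles, i.e. edges - vertices + connected components."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x
    for p, q in relations:
        parent[find(p)] = find(q)            # union the edge's endpoints
    edges = len(relations)
    vertices = len(parent)
    components = sum(1 for v in parent if find(v) == v)
    return edges - vertices + components
```

Two single-large-prime relations sharing nothing give no cycle, but adding a relation that joins their two primes closes one cycle, i.e. yields one full relation; the paper's observation is that for NFS with four large primes this count stops growing smoothly and "explodes".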
A study of Coppersmith's block Wiedemann algorithm using matrix polynomials
In: LMC-IMAG, Report # 975 IM, 1997
Cited by 28 (8 self)
Abstract:
We analyse a randomized block algorithm proposed by Coppersmith for solving large sparse systems of linear equations, Aw = 0, over a finite field K = GF(q). It is a modification of an algorithm of Wiedemann. Coppersmith gave heuristic arguments for why the algorithm works, but it was an open question to prove that it may produce a solution, with positive probability, for small finite fields, e.g. for K = GF(2). We answer this question nearly completely. The algorithm uses two random matrices X and Y of dimensions m × N and N × n. Over any finite field, we show how the parameters m and n of the algorithm may be tuned so that, for any input system, a solution is computed with high probability. Conversely, for certain particular input systems, we show that the conditions on the input parameters may be relaxed while still ensuring success. We also improve the probability bound of Kaltofen in the case of large-cardinality fields. Lastly, for the sake of completeness of the...
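The scalar algorithm that Coppersmith's block method modifies can be sketched directly: from the Krylov sequence v, Av, A^2 v, ... find a minimal annihilating polynomial m(x); when m has zero constant term, writing m(x) = x·h(x) gives a kernel vector w = h(A)v. The sketch finds the dependence by plain elimination over GF(101) instead of Berlekamp-Massey, and omits the random projections and the blocking by X and Y that are the paper's actual subject.

```python
P = 101  # GF(101); the paper's open question concerns tiny fields like GF(2)

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) % P for row in A]

def kernel_vector(A, v):
    """Scalar Wiedemann idea: find the first linear dependence among
    v, Av, A^2 v, ...; if the annihilating polynomial m(x) has zero
    constant term, return w = (m(x)/x)(A) v, which satisfies A w = 0."""
    n = len(A)
    krylov = [v[:]]
    for _ in range(n):
        krylov.append(matvec(A, krylov[-1]))
    basis, coords = [], []                      # reduced vectors and their
    for k, u in enumerate(krylov):              # expressions in the krylov list
        u = u[:]
        expr = [0] * len(krylov)
        expr[k] = 1
        for b, e in zip(basis, coords):
            piv = next(i for i, c in enumerate(b) if c)
            c = u[piv] * pow(b[piv], P - 2, P) % P
            u = [(x - c * y) % P for x, y in zip(u, b)]
            expr = [(x - c * y) % P for x, y in zip(expr, e)]
        if not any(u):                          # sum_i expr[i] * A^i v = 0
            if expr[0] == 0:                    # x divides m(x)
                w = [0] * n
                t = v[:]
                for c in expr[1:]:              # accumulate w = h(A) v
                    w = [(wi + c * ti) % P for wi, ti in zip(w, t)]
                    t = matvec(A, t)
                if any(w):
                    return w
            return None                         # unlucky v; retry with another
        basis.append(u)
        coords.append(expr)
    return None
```

On a singular matrix such as A = [[1,2,3],[2,4,6],[1,0,1]] (its second row is twice the first), a generic starting vector yields a nonzero w with Aw = 0 mod 101.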
Improvements to the general number field sieve for discrete logarithms in prime fields
In: Mathematics of Computation, 2003
Cited by 26 (2 self)
Abstract:
In this paper, we describe many improvements to the number field sieve. Our main contribution consists of a new way to compute individual logarithms with the number field sieve without solving a very large linear system for each logarithm. We show that, with these improvements, the number field sieve outperforms the Gaussian integer method in the hundred-digit range. We also illustrate our results by successfully computing discrete logarithms with GNFS in a large prime field.
Recent progress and prospects for integer factorisation algorithms
In: Proc. of COCOON 2000, 2000
Cited by 24 (1 self)
Abstract:
The integer factorisation and discrete logarithm problems are of practical importance because of the widespread use of public-key cryptosystems whose security depends on the presumed difficulty of solving these problems. This paper considers primarily the integer factorisation problem. In recent years the limits of the best integer factorisation algorithms have been extended greatly, due in part to Moore's law and in part to algorithmic improvements. It is now routine to factor 100-decimal-digit numbers, and feasible to factor numbers of 155 decimal digits (512 bits). We outline several integer factorisation algorithms, consider their suitability for implementation on parallel machines, and give examples of their current capabilities. In particular, we consider the problem of parallel solution of the large, sparse linear systems which arise with the MPQS and NFS methods.
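The sparse linear systems mentioned here are over GF(2): each relation contributes a row of exponent parities, and a nonzero combination of rows that XORs to zero picks relations whose product is a perfect square. A serial sketch with rows packed into Python ints, so one XOR processes a whole row; the paper's interest is doing this in parallel at vastly larger scale.

```python
def dependency(rows):
    """rows: ints whose bit j is the parity of the exponent of prime j in a
    relation. Returns a bitmask t of rows whose XOR is zero, or None."""
    pivots = {}                        # leading-bit position -> (row, tag)
    for i, r in enumerate(rows):
        t = 1 << i                     # track which original rows are combined
        while r:
            j = r.bit_length() - 1     # leading set bit
            if j not in pivots:
                pivots[j] = (r, t)
                break
            pr, pt = pivots[j]
            r ^= pr                    # one int XOR reduces the whole row
            t ^= pt
        if r == 0:
            return t                   # rows flagged in t multiply to a square
    return None
```

With parity rows 011, 110, and 101, the three rows XOR to zero, so the returned mask 111 selects all three relations for one congruence of squares.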
A kilobit special number field sieve factorization
In: Asiacrypt 2007, volume 4833 of LNCS, 2007
Cited by 23 (6 self)
Abstract:
We describe how we reached a new factoring milestone by completing the first special number field sieve factorization of a number having more than 1024 bits, namely the Mersenne number 2^1039 − 1. Although this factorization is orders of magnitude 'easier' than a factorization of a 1024-bit RSA modulus is believed to be, the methods we used to obtain our result shed new light on the feasibility of the latter computation.
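The headline numbers are easy to check with integer arithmetic. The small factor used below, 5080711, is the seven-digit factor reported for this computation (quoted from memory, so treat it as an assumption; it does have the required 2k·1039 + 1 form for a factor of a Mersenne number with prime exponent 1039):

```python
M = 2 ** 1039 - 1                     # the Mersenne number factored in the paper
assert M.bit_length() == 1039         # indeed more than 1024 bits
assert len(str(M)) == 313             # 313 decimal digits

# Factors of 2^p - 1 with p prime are of the form 2*k*p + 1; the reported
# seven-digit factor (an assumption, see lead-in) fits and divides M.
f = 5080711
assert (f - 1) % (2 * 1039) == 0
assert pow(2, 1039, f) == 1           # equivalent to f dividing 2^1039 - 1
```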
An Efficient Maximum-Likelihood Decoding of LDPC Codes Over the Binary Erasure Channel
In: IEEE Trans. Inform. Theory, 2004
Cited by 23 (0 self)
Abstract:
We propose an efficient maximum-likelihood decoding algorithm for decoding low-density parity-check codes over the binary erasure channel. We also analyze the computational complexity of the proposed algorithm. Index Terms: Low-density parity-check (LDPC) codes, binary erasure channel (BEC), iterative decoding, maximum-likelihood (ML) decoding.
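Over the binary erasure channel, ML decoding is exactly linear algebra: the known bits fix a syndrome, and the parity checks restricted to the erased positions form a GF(2) system whose unique solvability decides recoverability. A dense Gauss-Jordan sketch of that reduction; the paper's contribution is doing this efficiently for sparse LDPC matrices, and the small H below is illustrative.

```python
def ml_decode_bec(H, received):
    """H: parity-check matrix as a list of 0/1 rows; received: list of bits
    with None marking erasures. Returns the codeword, or None when ML
    decoding is ambiguous (the restricted system is rank deficient)."""
    erased = [j for j, b in enumerate(received) if b is None]
    # augmented system [A | s]: columns of H at erased positions, with the
    # syndrome of the known bits on the right
    rows = []
    for hr in H:
        s = sum(hr[j] * received[j] for j in range(len(hr))
                if received[j] is not None) % 2
        rows.append([hr[j] for j in erased] + [s])
    rank = 0
    pivot_row = {}
    for c in range(len(erased)):                    # Gauss-Jordan over GF(2)
        piv = next((i for i in range(rank, len(rows)) if rows[i][c]), None)
        if piv is None:
            return None
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        pivot_row[c] = rank
        rank += 1
    out = list(received)
    for c, j in enumerate(erased):
        out[j] = rows[pivot_row[c]][-1]             # unique solution for bit j
    return out
```

With H = [[1,1,0,1],[0,1,1,1]] and received word [1, 1, ?, ?], both erasures are pinned down and the decoder returns the codeword [1, 1, 1, 0]; drop to a single parity check and the same pair of erasures becomes ambiguous, so the decoder returns None.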