Results 1–10 of 55
A new algorithm for finding minimum-weight words in a linear code: application to primitive narrow-sense BCH codes of length 511
, 1998
Abstract

Cited by 91 (2 self)
An algorithm for finding small-weight words in large linear codes is developed. It is in particular able to decode random [512,256,57] linear codes in 9 hours on a DEC Alpha computer. With it, we determine the previously unknown minimum distance of some binary BCH codes of length 511. Keywords: error-correcting codes, decoding algorithm, minimum weight, random linear codes, BCH codes. Submitted to IEEE Transactions on Information Theory.
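The abstract above concerns finding the lowest-weight words of a linear code. As a point of reference for why a dedicated algorithm is needed, here is a minimal sketch of the naive approach such algorithms improve on: exhaustively enumerating every codeword of a toy code and tracking the smallest Hamming weight. The [7,4] Hamming generator matrix below is an illustrative assumption, not data from the paper.

```python
import itertools

# Toy [7,4] Hamming code generator matrix over GF(2); rows are basis codewords.
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def codeword(msg, G):
    """Encode a message vector: XOR of the rows of G selected by msg."""
    word = [0] * len(G[0])
    for bit, row in zip(msg, G):
        if bit:
            word = [a ^ b for a, b in zip(word, row)]
    return word

def min_weight(G):
    """Exhaustive minimum-weight search: 2^k - 1 encodings, feasible only
    for tiny codes; the paper's algorithm avoids this enumeration."""
    k = len(G)
    best = len(G[0])
    for msg in itertools.product([0, 1], repeat=k):
        if not any(msg):
            continue  # skip the zero codeword
        best = min(best, sum(codeword(msg, G)))
    return best

print(min_weight(G))  # prints 3: the [7,4] Hamming code has minimum distance 3
```

For a [512,256] code this enumeration would take 2^256 encodings, which is why the probabilistic search strategies developed in the paper matter.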
A New Identification Scheme Based on Syndrome Decoding
, 1994
Abstract

Cited by 67 (8 self)
Zero-knowledge proofs were introduced in 1985, in a paper by Goldwasser, Micali and Rackoff ([6]). Their practical significance was soon demonstrated in the work of Fiat and Shamir ([4]), who turned zero-knowledge proofs of quadratic residuosity into efficient means of establishing user identities. Still, as is almost always the case in public-key cryptography, the Fiat-Shamir scheme relied on arithmetic operations on large numbers. In 1989, there were two attempts to build identification protocols that only use simple operations (see [11, 10]). One appeared in the EUROCRYPT proceedings and relies on the intractability of some coding problems; the other was presented at the CRYPTO rump session and depends on the so-called Permuted Kernel Problem (PKP). Unfortunately, the first of these schemes was not really practical. In the present paper, we propose a new identification scheme, based on error-correcting codes, which is zero-knowledge and of practical value. Furthermore, we describe several variants, including one with an identity-based character. The security of our scheme depends on the hardness of decoding a word of given syndrome w.r.t. some binary linear error-correcting code.
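The scheme's security rests on syndrome decoding: given a binary parity-check matrix H and a syndrome s, finding a low-weight error vector e with H·eᵀ = s is believed hard in general. A minimal sketch of the easy direction, computing a syndrome, using an illustrative 3×7 parity-check matrix (the Hamming code, chosen only for demonstration):

```python
def syndrome(H, e):
    """Compute s = H * e^T over GF(2); each syndrome bit is the parity of
    the error positions selected by the corresponding row of H."""
    return [sum(h * x for h, x in zip(row, e)) % 2 for row in H]

# Toy 3x7 parity-check matrix whose columns are 1..7 in binary.
H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

e = [0, 0, 0, 0, 1, 0, 0]  # weight-1 error in position 5
s = syndrome(H, e)
print(s)  # prints [1, 0, 1] -- binary for 5, exposing the error position
```

For a large random code and moderate error weight, inverting this map is believed intractable; the identification protocol lets a prover demonstrate knowledge of e without revealing it.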
Attacking and defending the McEliece cryptosystem
Pages 31–46 in (Buchmann and Ding 2008). URL: http://cr.yp.to/papers.html#mceliece
Abstract

Cited by 46 (2 self)
This paper presents several improvements to Stern’s attack on the McEliece cryptosystem and achieves results considerably better than those of Canteaut et al. We show that the system with the originally proposed parameters can be broken on a moderate cluster in about a week. We have implemented our attack and are carrying it out now. This paper proposes new parameters for the McEliece and Niederreiter cryptosystems achieving standard levels of security against all known attacks. The new parameters take account of our improved attack; the recent introduction of list decoding for binary Goppa codes; and the possibility of choosing code lengths that are not a power of 2. We achieve considerably smaller public-key sizes than previous parameter choices for the same level of security.
Improved Fast Correlation Attacks on Stream Ciphers via Convolutional Codes
, 1999
Abstract

Cited by 38 (4 self)
This paper describes new methods for fast correlation attacks, based on the theory of convolutional codes. They can be applied to arbitrary LFSR feedback polynomials, in contrast to previous methods, which mainly focus on feedback polynomials of low weight. The results significantly improve on the few previous results for this general case, and are in many cases comparable with corresponding results for low-weight feedback polynomials.
Security Bounds for the Design of CodeBased Cryptosystems
, 2009
Abstract

Cited by 36 (5 self)
Code-based cryptography is often viewed as an interesting “Post-Quantum” alternative to classical number-theoretic cryptography. Unlike many other such alternatives, it has the convenient advantage of having only a few well-identified attack algorithms. However, improvements to these algorithms have made their effective complexity difficult to compute. We give here some lower bounds on the work factor of idealized versions of these algorithms, taking into account all possible tweaks which could improve their practical complexity. The aim of this article is to help designers select durably secure parameters.
On the Security of Some Cryptosystems Based on Error-Correcting Codes
, 1994
Abstract

Cited by 27 (2 self)
A certain number of public-key cryptosystems based on error-correcting codes have been proposed as an alternative to algorithms based on number theory. In this paper, we analyze and optimize, in a very precise way, the algorithms that can be used to attack such cryptosystems. We thus obtain attacks more efficient than those previously known. Even if they remain infeasible, they indicate which cryptosystem parameters are ruled out by the existence of these algorithms.
1 Introduction
1.1 An NP-complete problem
It is known [BMT78] that the problem of finding a codeword of given weight in a linear binary code is NP-complete. This property can be used to build cryptosystems or identification systems. But, as for other NP-complete problems, some cases of this problem can be solved by probabilistic algorithms. This means that cryptographic systems such as the following ones must take into account the performance of these algorithms.
1.2 The McEliece public-key cryptosystem
Presentation T...
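As a rough illustration of how such probabilistic algorithms constrain parameter choices: plain information-set decoding succeeds in a given iteration only when a randomly chosen information set avoids all error positions, so the expected iteration count is the reciprocal of that probability. This is a simplified textbook estimate, not the optimized attacks analyzed in the paper:

```python
from math import comb, log2

def isd_expected_iterations(n, k, w):
    """Plain information-set decoding: an iteration succeeds when a random
    size-k information set misses all w error positions, which happens
    with probability C(n-w, k) / C(n, k)."""
    p_success = comb(n - w, k) / comb(n, k)
    return 1 / p_success

# McEliece's original parameters: length 1024, dimension 524, 50 errors.
iters = isd_expected_iterations(1024, 524, 50)
print(round(log2(iters), 1))  # log2 of the expected number of iterations
```

For these parameters the estimate comes out a little under 2^54 iterations, each itself costing a Gaussian elimination; the refined attacks analyzed here lower the total cost further, which is exactly why parameter choices must track algorithmic progress.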
Fast Correlation Attacks based on Turbo Code Techniques
 In Advances in Cryptology  CRYPTO’99, number 1666 in Lecture Notes in Computer Science
, 1999
Abstract

Cited by 21 (3 self)
This paper describes new methods for fast correlation attacks on stream ciphers, based on techniques used for constructing and decoding the now-famous turbo codes. The proposed algorithm consists of two parts, a preprocessing part and a decoding part. The preprocessing part identifies several parallel convolutional codes, embedded in the code generated by the LFSR, all sharing the same information bits. The decoding part then finds the correct information bits through an iterative decoding procedure. This provides the initial state of the LFSR.
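The target of such attacks is the initial state of an LFSR whose keystream is observed through noise. A minimal sketch of the LFSR itself, with an illustrative 4-bit register and tap positions chosen for demonstration (not parameters from the paper):

```python
def lfsr_stream(state, taps, nbits):
    """Fibonacci LFSR: each step outputs the last (oldest) bit and shifts
    in, at the front, the XOR of the tapped stage values."""
    state = list(state)
    out = []
    for _ in range(nbits):
        out.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        state = [feedback] + state[:-1]
    return out

# Taps [2, 3] give the primitive recurrence x_{n+4} = x_{n+1} + x_n
# (polynomial x^4 + x + 1), so any nonzero seed yields period 15.
keystream = lfsr_stream([1, 0, 0, 1], taps=[2, 3], nbits=30)
print(keystream[:15])  # prints [1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0]
```

A fast correlation attack observes a noisy version of such a stream and recovers the 4 (in practice, many more) unknown seed bits by decoding; the turbo-code technique above is one way to structure that decoding.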
A Distinguisher for High Rate McEliece Cryptosystems
Abstract

Cited by 15 (5 self)
The purpose of this paper is to study the difficulty of the so-called Goppa Code Distinguishing (GD) problem introduced by Courtois, Finiasz and Sendrier at Asiacrypt 2001. GD is the problem of distinguishing the public matrix in the McEliece cryptosystem from a random matrix. It is widely believed that this problem is computationally hard, as evidenced by the increasing number of papers using this hardness assumption. In our view, disproving or mitigating this hardness assumption would be a breakthrough in code-based cryptography and may open a new direction for attacking McEliece cryptosystems. In this paper, we present an efficient distinguisher for alternant and Goppa codes of high rate over binary and non-binary fields. Our distinguisher is based on a recent algebraic attack against compact variants of McEliece which reduces key recovery to the problem of solving an algebraic system of equations. We exploit a defect of rank in the (linear) system obtained by linearizing this algebraic system. It turns out that our distinguisher is highly discriminant. Indeed, we are able to precisely quantify the defect of rank for “generic” binary and non-binary random, alternant and Goppa codes. We have verified these formulas with practical experiments, and a theoretical explanation for such a defect of rank is also provided. We believe that this work sheds some light on the choice of secure parameters.
Analysis of step-reduced SHA-256
 FSE 2006, LNCS 4047
, 2006
Abstract

Cited by 15 (2 self)
This is the first article analyzing the security of SHA-256 against fast collision search that takes into account the recent attacks by Wang et al. We show the limits of applying techniques known so far to SHA-256. Next we introduce a new type of perturbation vector which circumvents the identified limits. This new technique is then applied to the unmodified SHA-256. Exploiting the combination of Boolean functions and modular addition together with the newly developed technique allows us to derive collision-producing characteristics for step-reduced SHA-256, which was not possible before. Although our results do not threaten the security of SHA-256, we show that the low probability of a single local collision may give rise to a false sense of security.
A New Paradigm for Public Key Identification
 IEEE TRANSACTIONS ON INFORMATION THEORY
Abstract

Cited by 13 (1 self)
The present article investigates the possibility of designing zero-knowledge identification schemes based on hard problems from coding theory. Zero-knowledge proofs were introduced in 1985, in a paper by Goldwasser, Micali and Rackoff ([16]). Their practical significance was soon demonstrated in the work of Fiat and Shamir ([11]), who turned zero-knowledge proofs of quadratic residuosity into efficient means of establishing user identities. In the present paper, we propose a new identification scheme, based on error-correcting codes, which is zero-knowledge and seems of practical value. Furthermore ...