Results 1–10 of 33
Algebraic Cryptanalysis of McEliece Variants with Compact Keys
 In Proceedings of Eurocrypt 2010
Cited by 25 (8 self)
Abstract. In this paper we propose a new approach to investigate the security of the McEliece cryptosystem. We recall that this cryptosystem relies on the use of error-correcting codes. Since its invention thirty years ago, no efficient attack had been devised that managed to recover the private key. We prove that the private key of the cryptosystem satisfies a system of bihomogeneous polynomial equations. This property is due to the particular class of codes considered, namely alternant codes. We have used these highly structured algebraic equations to mount an efficient key-recovery attack against two recent variants of the McEliece cryptosystem that aim at reducing public-key sizes. These two compact variants of McEliece managed to propose keys with fewer than 20,000 bits. To do so, they used quasi-cyclic or dyadic structures. An implementation of our algebraic attack in the computer algebra system MAGMA finds the secret key in negligible time (less than one second) for almost all the proposed challenges. For instance, a private key designed for 256-bit security was found in 0.06 seconds with about 2^17.8 operations.
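The key step in the attack above is that a bihomogeneous (here, bilinear) system becomes linear once each product x_i·y_j is treated as a fresh unknown. A minimal sketch of that linearization idea in Python; the secret values, coefficients, and sizes below are invented for illustration and have nothing to do with the actual alternant-code equations:

```python
from itertools import product
from fractions import Fraction

# Hypothetical toy secret: the attack's unknowns, NOT real McEliece key data.
x = [2, 3]
y = [5, 7]
z_true = [xi * yj for xi, yj in product(x, y)]  # z_ij = x_i * y_j

# Each bilinear equation sum_{i,j} c_ij * x_i * y_j = rhs is *linear* in z.
coeffs = [
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
]
rhs = [sum(c * zt for c, zt in zip(row, z_true)) for row in coeffs]

def solve_linear(A, b):
    """Gaussian elimination over the rationals (square, invertible A)."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * p for a, p in zip(M[r], M[col])]
    return [row[-1] for row in M]

z = solve_linear(coeffs, rhs)
print([int(v) for v in z])  # [10, 14, 15, 21] -- the products x_i*y_j
```

Once the products z_ij are recovered by plain linear algebra, the individual factors follow easily; the real attack works over finite fields and far larger systems, but the linearization principle is the same.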
A Distinguisher for High Rate McEliece Cryptosystems
Cited by 16 (5 self)
Abstract. The purpose of this paper is to study the difficulty of the so-called Goppa Code Distinguishing (GD) problem introduced by Courtois, Finiasz and Sendrier at Asiacrypt 2001. GD is the problem of distinguishing the public matrix in the McEliece cryptosystem from a random matrix. It is widely believed that this problem is computationally hard, as evidenced by the increasing number of papers using this hardness assumption. In our view, disproving or mitigating this hardness assumption would be a breakthrough in code-based cryptography and may open a new direction for attacking McEliece cryptosystems. In this paper, we present an efficient distinguisher for alternant and Goppa codes of high rate over binary and non-binary fields. Our distinguisher is based on a recent algebraic attack against compact variants of McEliece which reduces key recovery to the problem of solving an algebraic system of equations. We exploit a defect of rank in the (linear) system obtained by linearizing this algebraic system. It turns out that our distinguisher is highly discriminant. Indeed, we are able to precisely quantify the defect of rank for “generic” binary and non-binary random, alternant and Goppa codes. We have verified these formulas with practical experiments, and a theoretical explanation for the rank defect is also provided. We believe that this work sheds some light on the choice of secure parameters.
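The distinguisher hinges on one linear-algebra fact: the linearized system coming from a structured instance has lower rank than a comparable random one. A toy Python illustration of detecting such a rank defect over GF(2); the matrices here are artificial stand-ins, not actual linearized McEliece systems:

```python
import random

def rank_gf2(rows):
    """Rank over GF(2); each row is an integer bitmask."""
    basis = {}                      # leading-bit position -> reduced row
    for r in rows:
        while r:
            lead = r.bit_length() - 1
            if lead in basis:
                r ^= basis[lead]    # cancel the leading bit
            else:
                basis[lead] = r
                break
    return len(basis)

random.seed(0)
n, k = 24, 8

# Uniformly random rows: rank is near n with overwhelming probability.
random_rows = [random.getrandbits(n) for _ in range(n)]

# Structured rows: every row is a GF(2) combination of only k generators,
# so the rank is at most k -- a detectable "defect of rank".
gens = [random.getrandbits(n) for _ in range(k)]
structured_rows = []
for _ in range(n):
    row = 0
    for g in gens:
        if random.getrandbits(1):
            row ^= g
    structured_rows.append(row)

print(rank_gf2(random_rows), rank_gf2(structured_rows))
```

Comparing the two ranks distinguishes the structured family from random with a single matrix computation, which is the shape of the argument made in the paper (there, the defect is precisely quantified rather than merely observed).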
Zero-Sum Distinguishers for Iterated Permutations and Application to Keccak-f and Hamsi-256
 In Selected Areas in Cryptography, Lecture Notes in Computer Science, 2010
Cited by 11 (3 self)
Abstract. The zero-sum distinguishers introduced by Aumasson and Meier are investigated. First, the minimal size of a zero-sum is established. Then, we analyze the impact of the linear and nonlinear layers of an iterated permutation on the construction of zero-sum partitions. Finally, these techniques are applied to the Keccak-f permutation and to Hamsi-256. We exhibit several zero-sum partitions for 20 rounds (out of 24) of Keccak-f and some zero-sum partitions of sizes 2^19 and 2^10 for the finalization permutation in Hamsi-256.
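A zero-sum for a permutation P is a set of inputs that XOR to zero and whose images under P also XOR to zero. A minimal Python checker, exercised on an invented linear toy map (not Keccak-f or Hamsi): for a linear map, any GF(2) subspace of dimension at least 2 is trivially a zero-sum, which is exactly why the interesting targets are nonlinear iterated rounds.

```python
from functools import reduce

N = 16
MASK = (1 << N) - 1

def rotl(x, r):
    """Rotate a 16-bit word left by r."""
    return ((x << r) | (x >> (N - r))) & MASK

def toy_map(x):
    # Invented linear toy transformation standing in for a real permutation.
    return rotl(x, 1) ^ rotl(x, 3) ^ rotl(x, 5)

def is_zero_sum(f, xs):
    """Both the inputs and their images must XOR to zero."""
    xor = lambda vals: reduce(lambda a, b: a ^ b, vals, 0)
    return xor(xs) == 0 and xor([f(x) for x in xs]) == 0

subspace = [0b00, 0b01, 0b10, 0b11]    # 2-dimensional GF(2) subspace
print(is_zero_sum(toy_map, subspace))  # True: linearity forces both XORs to 0
```

For a nonlinear permutation no such free structure exists, so exhibiting large zero-sum partitions (as the paper does for 20 rounds of Keccak-f) is evidence of non-ideal behavior.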
Information-set decoding for linear codes over Fq
 In PQCrypto 2010 (2010), 81–94. URL: http://eprint.iacr.org/2009/589
Cited by 10 (1 self)
Abstract. The best known non-structural attacks against code-based cryptosystems are based on information-set decoding. Stern’s algorithm and its improvements are well optimized and their complexity is reasonably well understood. However, these algorithms only handle codes over F2. This paper presents a generalization of Stern’s information-set-decoding algorithm for decoding linear codes over arbitrary finite fields Fq and analyzes its complexity. This result makes it possible to compute the security of recently proposed code-based systems over non-binary fields. As an illustration, ranges of parameters are suggested for generalized McEliece cryptosystems using classical Goppa codes over F31 for which the new information-set-decoding algorithm needs 2^128 bit operations.
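Information-set decoding can be illustrated by its simplest (Prange-style) binary form: repeatedly guess k error-free positions, invert the corresponding k×k submatrix of the generator matrix, and test the resulting codeword. A toy Python sketch on the [7,4] Hamming code; the Fq generalization and Stern’s collision step from the paper are omitted, and the code/parameters are purely illustrative:

```python
import random

random.seed(2)

def gf2_solve(A, b):
    """Solve A.m = b over GF(2); A square, lists of 0/1. None if singular."""
    k = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(k):
        piv = next((r for r in range(col, k) if M[r][col]), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(k):
            if r != col and M[r][col]:
                M[r] = [a ^ p for a, p in zip(M[r], M[col])]
    return [M[i][k] for i in range(k)]

def encode(m, G):
    c = [0] * len(G[0])
    for mi, row in zip(m, G):
        if mi:
            c = [a ^ b for a, b in zip(c, row)]
    return c

def prange_decode(G, y, t, tries=2000):
    """Guess information sets until one avoids all t error positions."""
    k, n = len(G), len(G[0])
    for _ in range(tries):
        info = random.sample(range(n), k)
        A = [[G[i][j] for i in range(k)] for j in info]  # (G on info set)^T
        m = gf2_solve(A, [y[j] for j in info])
        if m is None:
            continue                     # singular submatrix: bad guess
        c = encode(m, G)
        if sum(ci ^ yi for ci, yi in zip(c, y)) == t:
            return c                     # codeword at distance exactly t
    return None

# [7,4] Hamming code (minimum distance 3, corrects t = 1 error).
G = [[1,0,0,0,1,1,0],
     [0,1,0,0,1,0,1],
     [0,0,1,0,0,1,1],
     [0,0,0,1,1,1,1]]
c = encode([1,0,1,1], G)
y = c[:]; y[2] ^= 1                      # flip one bit
print(prange_decode(G, y, t=1) == c)
```

Stern’s improvement (and its Fq generalization in the paper) replaces the pure guess with a collision search on low-weight partial sums, but the information-set skeleton is the one above.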
MDPC-McEliece: New McEliece Variants from Moderate Density Parity-Check Codes
Cited by 4 (2 self)
Abstract. Cryptography based on coding theory is believed to resist quantum attacks (all cryptosystems based on factoring or discrete logarithms can be attacked in polynomial time on a quantum computer). The McEliece cryptosystem is the oldest code-based cryptosystem, and its security relies on two problems: the indistinguishability of the code family and the hardness of decoding random linear codes. The former is usually the weaker one. The main drawback of this cryptosystem is its huge public keys. Recently, several attempts to reduce its key size have been proposed. Almost all of them were successfully broken because of the additional algebraic structure used to reduce the keys. In this work, we propose McEliece variants from Moderate Density Parity-Check (MDPC) codes. These codes are LDPC codes of higher density than what is usually adopted for telecommunication solutions. We show that our proposal strongly strengthens the security against distinguishing attacks and also provides extremely compact keys. Under a reasonable assumption, MDPC codes reduce the distinguishing problem to decoding a linear code, so the security of our proposal relies only on a well-studied coding-theory problem. Furthermore, using a quasi-cyclic structure, we provide the smallest public keys for a code-based cryptosystem. For 80 bits of security, the public key has only 4800 bits. In summary, this represents the most competitive code-based cryptosystem proposed so far and is a strong alternative to traditional cryptography.
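The decoders used for LDPC and MDPC codes are iterative bit-flipping decoders. A minimal Gallager-style sketch of the idea, demonstrated here on a tiny [7,4] Hamming parity-check matrix rather than a real moderate-density code (all parameters are illustrative only):

```python
def bit_flip_decode(H, y, max_iter=50):
    """Repeatedly flip the bits involved in the most unsatisfied checks."""
    y, n = y[:], len(H[0])
    for _ in range(max_iter):
        syndrome = [sum(h & b for h, b in zip(row, y)) % 2 for row in H]
        if not any(syndrome):
            return y                     # every parity check satisfied
        # for each bit, count the unsatisfied checks it participates in
        counts = [sum(s for s, row in zip(syndrome, H) if row[j])
                  for j in range(n)]
        worst = max(counts)
        y = [b ^ (c == worst) for b, c in zip(y, counts)]
    return None                          # failed to converge

# Parity-check matrix of the [7,4] Hamming code (stand-in for a sparse MDPC H).
H = [[1,1,0,1,1,0,0],
     [1,0,1,1,0,1,0],
     [0,1,1,1,0,0,1]]
received = [1,0,0,1,0,1,0]               # a codeword with one flipped bit
print(bit_flip_decode(H, received))      # [1, 0, 1, 1, 0, 1, 0]
```

The appeal of MDPC parameters is that this cheap iterative decoding still works while the parity-check rows are dense enough to resist the structural attacks that broke earlier compact variants.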
Decoding random linear codes in Õ(2^0.054n)
 In Advances in Cryptology – ASIACRYPT 2011, volume 7073 of LNCS, 2011
Cited by 4 (0 self)
Abstract. Decoding random linear codes is a fundamental problem in complexity theory and lies at the heart of almost all code-based cryptography. The best attacks on the most prominent code-based cryptosystems, such as McEliece, directly use decoding algorithms for linear codes. For a long time, the asymptotically best decoding algorithm for random linear codes of length n was Stern’s variant of information-set decoding, running in time Õ(2^0.05563n). Recently, Bernstein, Lange and Peters proposed a new technique called ball-collision decoding which offers a speedup over Stern’s algorithm, improving the running time to Õ(2^0.05558n). In this paper, we present a new algorithm for decoding linear codes that is inspired by a representation technique due to Howgrave-Graham and Joux in the context of subset-sum algorithms. Our decoding algorithm comes with a rigorous complexity analysis for random linear codes and brings the time complexity down to Õ(2^0.05363n).
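The representation technique referenced above originated in subset-sum algorithms. The sketch below shows only the classical meet-in-the-middle baseline it improves on, plus the counting observation behind representations: a single weight-w solution splits into two weight-w/2 halves in C(w, w/2) different ways, and that redundancy is what the improved algorithms exploit. The numbers are invented for illustration:

```python
from itertools import combinations
from math import comb

def mitm_subset_sum(nums, target):
    """Meet-in-the-middle subset sum: enumerate halves, match on the sum."""
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]
    left_sums = {}
    for r in range(len(left) + 1):
        for idx in combinations(range(len(left)), r):
            left_sums.setdefault(sum(left[i] for i in idx), idx)
    for r in range(len(right) + 1):
        for idx in combinations(range(len(right)), r):
            need = target - sum(right[i] for i in idx)
            if need in left_sums:
                return left_sums[need] + tuple(half + i for i in idx)
    return None

nums = [13, 7, 22, 5, 31, 9]
sol = mitm_subset_sum(nums, 43)
print(sol, sum(nums[i] for i in sol))   # a valid index set summing to 43

# The representation observation: a weight-4 solution can be written as a
# sum of two weight-2 partial solutions in comb(4, 2) = 6 different ways.
print(comb(4, 2))
```

The decoding algorithm in the paper plays the same game with error vectors instead of subsets: many representations of one target let the search lists be sampled more sparsely, lowering the exponent.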
Decoding one out of many
 In PQCrypto 2011, volume 7071 of LNCS, 2011
Cited by 4 (2 self)
Abstract. Generic decoding of linear codes is the best known attack against most code-based cryptosystems. Understanding and measuring the complexity of the best decoding technique is thus necessary to select secure parameters. We consider here the possibility that an attacker has access to many cryptograms and is satisfied by decrypting (i.e. decoding) only one of them. We show that, in many cases of interest in cryptology, a variant of Stern’s collision decoding can be adapted to gain a factor of almost √N when N instances are given. If the attacker has access to an unlimited number of instances, we show that the attack complexity is significantly lower: it is the single-instance complexity raised to a power slightly larger than 2/3. Finally, we give indications on how to counter these attacks.
Parallel-CFS: Strengthening the CFS McEliece-Based Signature Scheme
Cited by 3 (0 self)
Abstract. This article presents a modification of the CFS code-based signature scheme. By producing two (or, more generally, i) signatures in parallel, we show that it is possible to protect this scheme from “one out of many” decoding attacks. With this modification, and at the cost of slightly larger signatures, it is possible to use smaller parameters for the CFS signature, thus making this new Parallel-CFS construction more practical than standard CFS signatures.
Faster 2-regular information-set decoding
Cited by 2 (1 self)
Abstract. Fix positive integers B and w. Let C be a linear code over F2 of length Bw. The 2-regular-decoding problem is to find a nonzero codeword consisting of w length-B blocks, each of which has Hamming weight 0 or 2. This problem appears in attacks on the FSB (fast syndrome-based) hash function and related proposals. This problem differs from the usual information-set-decoding problems in that (1) the target codeword is required to have a very regular structure and (2) the target weight can be rather high, so that there are many possible codewords of that weight. Augot, Finiasz, and Sendrier, in the paper that introduced FSB, presented a variant of information-set decoding tuned for 2-regular decoding. This paper improves the Augot–Finiasz–Sendrier algorithm in a way that is analogous to Stern’s improvement upon basic information-set decoding. The resulting algorithm achieves an exponential speedup over the previous algorithm. Keywords: information-set decoding, 2-regular decoding, FSB, binary codes.
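The 2-regular condition itself is easy to state in code. A small Python checker; B and the example words are invented for illustration and are not FSB parameters:

```python
def is_2_regular(word, B):
    """A nonzero binary word is 2-regular if, split into length-B blocks,
    every block has Hamming weight 0 or 2."""
    if len(word) % B or not any(word):
        return False
    return all(sum(word[i:i + B]) in (0, 2) for i in range(0, len(word), B))

word = [0,0,0,0,  1,0,1,0,  0,1,0,1]     # blocks of weight 0, 2, 2
print(is_2_regular(word, B=4))           # True
print(is_2_regular([1,0,0,0] * 3, B=4))  # False: every block has weight 1
```

The decoding algorithms in the paper search for a codeword satisfying exactly this predicate, which is what makes the problem more structured than plain low-weight codeword finding.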
Wild McEliece
Cited by 1 (0 self)
Abstract. The original McEliece cryptosystem uses length-n codes over F2 with dimension ≥ n − mt, efficiently correcting t errors, where 2^m ≥ n. This paper presents a generalized cryptosystem that uses length-n codes over small finite fields Fq with dimension ≥ n − m(q − 1)t, efficiently correcting ⌊qt/2⌋ errors, where q^m ≥ n. Previously proposed cryptosystems with the same length and dimension corrected only ⌊(q − 1)t/2⌋ errors for q ≥ 3. This paper also presents list-decoding algorithms that efficiently correct even more errors for the same codes over Fq. Finally, this paper shows that the increase from ⌊(q − 1)t/2⌋ errors to more than ⌊qt/2⌋ errors allows considerably smaller keys to achieve the same security level against all known attacks.
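The parameter counts in this abstract are simple arithmetic. A quick sketch that just evaluates them; the example values q = 3, m = 6, t = 10, n = 729 are invented for illustration, not parameters proposed by the paper:

```python
def wild_parameters(q, m, t, n):
    """Dimension bound and error counts quoted in the abstract above."""
    assert q ** m >= n, "need q^m >= n"
    return {
        "dimension at least": n - m * (q - 1) * t,
        "errors corrected (wild)": (q * t) // 2,
        "errors corrected (previous, q >= 3)": ((q - 1) * t) // 2,
    }

print(wild_parameters(q=3, m=6, t=10, n=729))
# {'dimension at least': 609, 'errors corrected (wild)': 15,
#  'errors corrected (previous, q >= 3)': 10}
```

The gap between the last two numbers (⌊qt/2⌋ versus ⌊(q − 1)t/2⌋ errors) is exactly the margin that lets wild Goppa codes use smaller keys at the same security level.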