Results 1–10 of 414
Constructive And Destructive Facets Of Weil Descent On Elliptic Curves
 JOURNAL OF CRYPTOLOGY
, 2000
Abstract

Cited by 139 (12 self)
In this paper we look in detail at the curves which arise in the method of Galbraith and Smart for producing curves in the Weil restriction of an elliptic curve over a finite field of characteristic two of composite degree. We explain how this method can be used to construct hyperelliptic cryptosystems which could be as secure as cryptosystems based on the original elliptic curve. On the other hand, we show that this may provide a way of attacking the original elliptic curve cryptosystem using recent advances in the study of the discrete logarithm problem on hyperelliptic curves. We examine the resulting higher-genus curves in some detail and propose an additional check on elliptic curve systems defined over fields of characteristic two so as to make them immune from the methods in this paper.

1. Introduction

In this paper we address two problems: how to construct hyperelliptic cryptosystems and how to attack elliptic curve cryptosystems defined over fields of even characteristic ...
The Z4-linearity of Kerdock, Preparata, Goethals, and related codes
, 2001
Abstract

Cited by 107 (15 self)
Certain notorious nonlinear binary codes contain more codewords than any known linear code. These include the codes constructed by Nordstrom-Robinson, Kerdock, Preparata, Goethals, and Delsarte-Goethals. It is shown here that all these codes can be very simply constructed as binary images under the Gray map of linear codes over Z4, the integers mod 4 (although this requires a slight modification of the Preparata and Goethals codes). The construction implies that all these binary codes are distance invariant. Duality in the Z4 domain implies that the binary images have dual weight distributions. The Kerdock and ‘Preparata’ codes are duals over Z4 — and the Nordstrom-Robinson code is self-dual — which explains why their weight distributions are dual to each other. The Kerdock and ‘Preparata’ codes are Z4-analogues of first-order Reed-Muller and extended Hamming codes, respectively. All these codes are extended cyclic codes over Z4, which greatly simplifies encoding and decoding. An algebraic hard-decision decoding algorithm is given for the ‘Preparata’ code and a Hadamard-transform soft-decision decoding algorithm for the Kerdock code. Binary first- and second-order Reed-Muller codes are also linear over Z4, but extended Hamming codes of length n ≥ 32 and the ...
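The construction in this abstract hinges on the Gray map 0→00, 1→01, 2→11, 3→10, which carries the Lee metric on Z4 words isometrically to the Hamming metric on binary words of twice the length. A minimal sketch of that map (illustrative only, not code from the paper):

```python
# Gray map from Z4 symbols to bit pairs: 0->00, 1->01, 2->11, 3->10.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray_map(word):
    """Binary image of a Z4 word under the Gray map (length doubles)."""
    return tuple(bit for s in word for bit in GRAY[s % 4])

def lee_weight(word):
    """Lee weight over Z4: wt(0)=0, wt(1)=wt(3)=1, wt(2)=2."""
    return sum(min(s % 4, 4 - s % 4) for s in word)

# The Gray map turns Lee weight into Hamming weight:
w = (1, 2, 3, 0)
assert sum(gray_map(w)) == lee_weight(w)  # both are 4
```

This weight-preserving property is what makes the binary images of Z4-linear codes distance invariant, as the abstract notes.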
Coding for errors and erasures in random network coding
 in Proc. IEEE Int. Symp. Information Theory
, 2007
Abstract

Cited by 97 (12 self)
Abstract — The problem of error control in a “noncoherent” random network coding channel is considered. Information transmission is modelled as the injection into the network of a basis for a vector space V and the collection by the receiver of a basis for a vector space U. A suitable coding metric on subspaces is defined, under which a minimum distance decoder achieves correct decoding if the dimension of the space V ∩ U is large enough. When the dimension of each codeword is restricted to a fixed integer, the code forms a subset of the vertices of the Grassmann graph. Sphere-packing and sphere-covering bounds and a Singleton bound are provided for such codes. A Reed-Solomon-like code construction is provided and a decoding algorithm given.
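The coding metric on subspaces referred to above is usually taken as d(U, V) = dim(U + V) − dim(U ∩ V). A small sketch over GF(2), with each subspace given by spanning rows encoded as bitmask integers (an illustration of the metric, not the paper's construction):

```python
def rank_gf2(rows):
    """Rank of a binary matrix over GF(2); each row is a bitmask int."""
    rows = [r for r in rows if r]
    rank = 0
    while rows:
        pivot = max(rows)                      # a row with the highest set bit
        rows.remove(pivot)
        hi = pivot.bit_length() - 1
        # Clear that bit from every remaining row, then drop zero rows.
        rows = [r ^ pivot if (r >> hi) & 1 else r for r in rows]
        rows = [r for r in rows if r]
        rank += 1
    return rank

def subspace_distance(U, V):
    """d(U, V) = dim(U + V) - dim(U ∩ V) = 2*dim(U + V) - dim(U) - dim(V)."""
    dim_sum = rank_gf2(list(U) + list(V))      # dim(U + V)
    return 2 * dim_sum - rank_gf2(list(U)) - rank_gf2(list(V))

# Two 2-dimensional subspaces of GF(2)^4 with a 1-dimensional intersection:
assert subspace_distance([0b1000, 0b0100], [0b1000, 0b0010]) == 2
```

The identity used in `subspace_distance` avoids computing the intersection directly, since dim(U ∩ V) = dim(U) + dim(V) − dim(U + V).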
A taxonomy of pairing-friendly elliptic curves
, 2006
Abstract

Cited by 78 (10 self)
Elliptic curves with small embedding degree and large prime-order subgroup are key ingredients for implementing pairing-based cryptographic systems. Such “pairing-friendly” curves are rare and thus require specific constructions. In this paper we give a single coherent framework that encompasses all of the constructions of pairing-friendly elliptic curves currently existing in the literature. We also include new constructions of pairing-friendly curves that improve on the previously known constructions for certain embedding degrees. Finally, for all embedding degrees up to 50, we provide recommendations as to which pairing-friendly curves to choose to best satisfy a variety of performance and security requirements.
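The embedding degree this taxonomy is organized around is the multiplicative order of the field size q modulo the subgroup order r, i.e. the least k with r | q^k − 1. A toy sketch (the numeric parameters below are made up purely for illustration):

```python
def embedding_degree(q, r):
    """Least k >= 1 with r dividing q**k - 1, i.e. the order of q mod r
    (q: field size, r: prime order of the pairing subgroup)."""
    assert q % r != 0, "r must not divide q"
    k, t = 1, q % r
    while t != 1:
        t = t * q % r
        k += 1
    return k

# Toy numbers (hypothetical): a subgroup of order 13 over F_103 pairs
# into the extension field F_(103^2), since 13 divides 103^2 - 1 = 10608.
assert embedding_degree(103, 13) == 2
```

For a random curve this order is typically enormous, which is why curves with small k must be specially constructed, as the abstract says.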
Computing with Very Weak Random Sources
, 1994
Abstract

Cited by 73 (7 self)
For any fixed δ > 0, we show how to simulate RP algorithms in time n^O(log n) using the output of a δ-source with min-entropy R^δ. Such a weak random source is asked once for R bits; it outputs an R-bit string such that any string has probability at most 2^(−R^δ). If δ > 1 − 1/(k + 1), our BPP simulations take time n^O(log^(k) n) (log^(k) is the logarithm iterated k times). We also give a polynomial-time BPP simulation using Chor-Goldreich sources of min-entropy R^Ω(1), which is optimal. We present applications to time-space tradeoffs, expander constructions, and the hardness of approximation. Also of interest is our randomness-efficient Leftover Hash Lemma, found independently by Goldreich & Wigderson.
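The source condition in this abstract (every output string has probability at most 2^(−R^δ)) is exactly a min-entropy bound: H∞ ≥ R^δ, where H∞ is −log2 of the most likely outcome. A tiny illustration of that quantity (assumed notation, not from the paper):

```python
from math import log2

def min_entropy(probs):
    """H_inf(X) = -log2(max_x Pr[X = x]): every outcome carries at least
    this many bits of unpredictability."""
    return -log2(max(probs))

# A delta-source on R bits promises min_entropy >= R**delta, delta in (0,1).
R, delta = 16, 0.5
flat = [1 / 2**R] * 2**R                        # uniform: full min-entropy R
skewed = [0.5] + [0.5 / (2**R - 1)] * (2**R - 1)
assert min_entropy(flat) == R
assert min_entropy(skewed) == 1.0               # one outcome has probability 1/2
assert min_entropy(flat) >= R ** delta
```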
Subquadratictime factoring of polynomials over finite fields
 Math. Comp
, 1998
Abstract

Cited by 68 (11 self)
Abstract. New probabilistic algorithms are presented for factoring univariate polynomials over finite fields. The algorithms factor a polynomial of degree n over a finite field of constant cardinality in time O(n^1.815). Previous algorithms required time Θ(n^(2+o(1))). The new algorithms rely on fast matrix multiplication techniques. More generally, to factor a polynomial of degree n over the finite field F_q with q elements, the algorithms use O(n^1.815 log q) arithmetic operations in F_q. The new “baby step/giant step” techniques used in our algorithms also yield new fast practical algorithms at super-quadratic asymptotic running time, and subquadratic-time methods for manipulating normal bases of finite fields.
A New Polynomial Factorization Algorithm and its Implementation
 Journal of Symbolic Computation
, 1996
Abstract

Cited by 64 (5 self)
We consider the problem of factoring univariate polynomials over a finite field. We demonstrate that the new baby step/giant step factoring method, recently developed by Kaltofen & Shoup, can be made into a very practical algorithm. We describe an implementation of this algorithm, and present the results of empirical tests comparing this new algorithm with others. When factoring polynomials modulo large primes, the algorithm allows much larger polynomials to be factored using a reasonable amount of time and space than was previously possible. For example, this new software has been used to factor a "generic" polynomial of degree 2048 modulo a 2048-bit prime in under 12 days on a Sun SPARCstation 10, using 68 MB of main memory.

1 Introduction

We consider the problem of factoring a univariate polynomial of degree n over the field F_p of p elements, where p is prime. This problem has been well-studied, and many algorithms for its solution have been proposed. In general, the running time ...
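The factoring problem both of the entries above address rests on the classical distinct-degree step: gcd(f, x^(p^d) − x) extracts the product of the irreducible factors of f of degree d. The d = 1 case is sketched below with schoolbook polynomial arithmetic (coefficient lists, low degree first); the Kaltofen-Shoup baby step/giant step method accelerates exactly this chain of modular exponentiations and gcds, which this naive sketch does not attempt:

```python
def polymod(a, f, p):
    """Remainder of a modulo f over F_p; coefficient lists, low degree first."""
    a, inv = a[:], pow(f[-1], -1, p)
    while len(a) >= len(f):
        c = a[-1] * inv % p
        off = len(a) - len(f)
        for i, fi in enumerate(f):
            a[off + i] = (a[off + i] - c * fi) % p
        while a and a[-1] == 0:
            a.pop()
    return a or [0]

def polymulmod(a, b, f, p):
    """Schoolbook product of a and b, reduced modulo f."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return polymod(out, f, p)

def polypowmod(a, e, f, p):
    """a**e mod f by square and multiply."""
    result, base = [1], polymod(a, f, p)
    while e:
        if e & 1:
            result = polymulmod(result, base, f, p)
        base = polymulmod(base, base, f, p)
        e >>= 1
    return result

def polysub(a, b, p):
    n = max(len(a), len(b))
    r = [((a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)) % p
         for i in range(n)]
    while len(r) > 1 and r[-1] == 0:
        r.pop()
    return r

def polygcd(a, b, p):
    while b != [0]:
        a, b = b, polymod(a, b, p)
    inv = pow(a[-1], -1, p)
    return [c * inv % p for c in a]      # normalize the gcd to be monic

# f = (x - 1)(x - 2)(x^2 + 1) over F_7; x^2 + 1 is irreducible mod 7.
p = 7
f = [2, 4, 3, 4, 1]                      # x^4 + 4x^3 + 3x^2 + 4x + 2
xp = polypowmod([0, 1], p, f, p)         # x^p mod f
linear_part = polygcd(f, polysub(xp, [0, 1], p), p)
assert linear_part == [2, 4, 1]          # x^2 + 4x + 2 = (x - 1)(x - 2) mod 7
```

Repeating this with x^(p^d) for d = 2, 3, ... (after dividing out each extracted part) yields the full distinct-degree factorization; the baby step/giant step idea batches those exponentiations.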
Reliability mechanisms for very large storage systems
 IN PROCEEDINGS OF THE 20TH IEEE / 11TH NASA GODDARD CONFERENCE ON MASS STORAGE SYSTEMS AND TECHNOLOGIES
, 2003
Abstract

Cited by 58 (19 self)
Reliability and availability are increasingly important in large-scale storage systems built from thousands of individual storage devices. Large systems must survive the failure of individual components; in systems with thousands of disks, even infrequent failures are likely in some device. We focus on two types of errors: non-recoverable read errors and drive failures. We discuss mechanisms for detecting and recovering from such errors, introducing improved techniques for detecting errors in disk reads and fast recovery from disk failure. We show that simple RAID cannot guarantee sufficient reliability; our analysis examines the tradeoffs between system availability and storage efficiency among other schemes. Based on our data, we believe that two-way mirroring should be sufficient for most large storage systems. For those that need very high reliability, we recommend either three-way mirroring or mirroring combined with RAID.
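The mirroring recommendation can be motivated by a back-of-the-envelope model (assumed here, not the paper's analysis): if each disk in a replica group fails independently with probability p within a repair window, data is lost when every copy of some group fails:

```python
def p_group_loss(p, copies):
    """A replica group is lost only if all of its copies fail."""
    return p ** copies

def p_system_loss(p, copies, groups):
    """Probability that at least one of `groups` replica groups is lost,
    assuming independent per-disk failures with probability p."""
    return 1 - (1 - p_group_loss(p, copies)) ** groups

p, groups = 1e-3, 10_000                 # illustrative numbers, not measured
two_way = p_system_loss(p, 2, groups)
three_way = p_system_loss(p, 3, groups)
assert three_way < two_way               # each extra copy buys roughly a 1/p factor
```

Under this toy model each additional replica multiplies the per-group loss probability by p, which is the intuition behind preferring three-way mirroring when very high reliability is required.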
A rankmetric approach to error control in random network coding
 IEEE Transactions on Information Theory
Abstract

Cited by 55 (7 self)
It is shown that the error control problem in random network coding can be reformulated as a generalized decoding problem for rank-metric codes. This result allows many of the tools developed for rank-metric codes to be applied to random network coding. In the generalized decoding problem induced by random network coding, the channel may supply partial information about the error in the form of erasures (knowledge of an error location but not its value) and deviations (knowledge of an error value but not its location). For Gabidulin codes, an important family of maximum rank distance codes, an efficient decoding algorithm is proposed that can fully exploit the correction capability of the code; namely, it can correct any pattern of ε errors, μ erasures and δ deviations provided 2ε + μ + δ ≤ d − 1, where d is the minimum rank distance of the code. Our approach is based on the coding theory for subspaces introduced by Koetter and Kschischang and can be seen as a practical way to construct codes in that context.
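The rank metric underlying Gabidulin codes measures the distance between two matrices X, Y over a finite field as rank(X − Y); over GF(2), subtraction is a bitwise XOR. A small sketch of the metric itself (illustrative only, not the paper's decoding algorithm):

```python
def rank_gf2(rows):
    """Rank over GF(2) of a matrix whose rows are bitmask ints."""
    rows = [r for r in rows if r]
    rank = 0
    while rows:
        pivot = max(rows)                  # a row with the highest set bit
        rows.remove(pivot)
        hi = pivot.bit_length() - 1
        rows = [r ^ pivot if (r >> hi) & 1 else r for r in rows]
        rows = [r for r in rows if r]
        rank += 1
    return rank

def rank_distance(X, Y):
    """d_R(X, Y) = rank(X - Y); over GF(2) the difference is an XOR."""
    return rank_gf2([x ^ y for x, y in zip(X, Y)])

# Two 2x2 matrices over GF(2) differing in a single row: rank distance 1.
assert rank_distance([0b11, 0b01], [0b10, 0b01]) == 1
```

A minimum rank distance d then supports correcting any ε errors, μ erasures and δ deviations with 2ε + μ + δ ≤ d − 1, the condition quoted in the abstract.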
Fast Construction of Irreducible Polynomials over Finite Fields
 J. Symbolic Comput
, 1993
Abstract

Cited by 49 (6 self)
The main result of this paper is a new algorithm for constructing an irreducible polynomial of specified degree n over a finite field F_q. The algorithm is probabilistic, and is asymptotically faster than previously known algorithms for this problem. It uses an expected number of Õ(n^2 + n log q) operations in F_q, where the "soft-O" Õ indicates an implicit factor of (log n)^O(1). In addition, two new polynomial irreducibility tests are described.

1 Introduction

1.1 Statement of main result

Let F_q be a finite field with q elements, where q is a prime power. A theorem due to Moore (1893) states that for every positive integer n, there exists a field extension F_(q^n), unique up to isomorphism, with q^n elements. Such extensions play an important role in coding theory (implementing error-correcting codes), cryptography (implementing cryptosystems), and complexity theory (amplifying randomness). In this paper, we consider the algorithmic version of Moore's theorem: how to ...
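The construction problem in this abstract is commonly solved by sampling a random monic polynomial of degree n and testing irreducibility; about 1/n of such polynomials are irreducible, so O(n) attempts are expected. Below is a deliberately brute-force sketch for tiny p and n (trial division by every monic polynomial of degree ≤ n/2); the paper's contribution is fast irreducibility testing, which this sketch does not attempt:

```python
import random
from itertools import product

def polymod(a, g, p):
    """Remainder of a modulo g over F_p; coefficient lists, low degree first."""
    a, inv = a[:], pow(g[-1], -1, p)
    while len(a) >= len(g):
        c = a[-1] * inv % p
        off = len(a) - len(g)
        for i, gi in enumerate(g):
            a[off + i] = (a[off + i] - c * gi) % p
        while a and a[-1] == 0:
            a.pop()
    return a or [0]

def is_irreducible(f, p):
    """Trial-divide monic f by every monic polynomial of degree <= deg(f)/2."""
    n = len(f) - 1
    for d in range(1, n // 2 + 1):
        for tail in product(range(p), repeat=d):
            g = list(tail) + [1]                 # monic divisor candidate
            if polymod(f, g, p) == [0]:
                return False
    return True

def random_irreducible(n, p, seed=0):
    """Sample random monic degree-n polynomials until one is irreducible."""
    rng = random.Random(seed)
    while True:
        f = [rng.randrange(p) for _ in range(n)] + [1]
        if is_irreducible(f, p):
            return f

assert is_irreducible([1, 1, 1], 2)          # x^2 + x + 1 over F_2
assert not is_irreducible([1, 0, 1], 2)      # x^2 + 1 = (x + 1)^2 over F_2
```

Exponential-time tests like this are exactly what the paper's Õ(n^2 + n log q) construction and its two irreducibility tests replace.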