Results 1-10 of 809
Coding for errors and erasures in random network coding
, 2007
Abstract

Cited by 263 (14 self)
The problem of error control in random network coding is considered. A “noncoherent” or “channel-oblivious” model is assumed, where neither transmitter nor receiver is assumed to have knowledge of the channel transfer characteristic. Motivated by the property that random network coding is vector-space preserving, information transmission is modelled as the injection into the network of a basis for a vector space V and the collection by the receiver of a basis for a vector space U. We introduce a metric on the space of all subspaces of a fixed vector space, and show that a minimum-distance decoder for this metric achieves correct decoding if the dimension of the space V ∩ U is large enough. If the dimension of each codeword is restricted to a fixed integer, the code forms a subset of a finite-field Grassmannian. Sphere-packing and sphere-covering bounds, as well as a generalization of the Singleton bound, are provided for such codes. Finally, a Reed-Solomon-like code construction, related to Gabidulin’s construction of maximum rank-distance codes, is provided.
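As an illustrative sketch (not the paper's construction), the subspace metric described above, d(U, V) = dim U + dim V − 2 dim(U ∩ V), can be computed over GF(2) by Gaussian elimination on stacked bases, using dim(U ∩ V) = dim U + dim V − dim(U + V). Representing basis vectors as Python ints is our choice here, not the paper's notation.

```python
def gf2_rank(rows):
    """Rank over GF(2) of the span of the given bit-vector rows."""
    pivots = {}  # leading-bit position -> pivot row
    rank = 0
    for row in rows:
        cur = row
        while cur:
            lead = cur.bit_length() - 1
            if lead in pivots:
                cur ^= pivots[lead]   # eliminate the leading bit
            else:
                pivots[lead] = cur    # new pivot found
                rank += 1
                break
    return rank

def subspace_distance(U, V):
    """d(U, V) = dim U + dim V - 2*dim(U ∩ V), via dim(U + V)."""
    dU, dV = gf2_rank(U), gf2_rank(V)
    d_sum = gf2_rank(U + V)       # stacked bases span U + V
    d_int = dU + dV - d_sum       # dimension formula for U ∩ V
    return dU + dV - 2 * d_int

# Two planes in GF(2)^3 sharing a single line: distance 2.
print(subspace_distance([0b100, 0b010], [0b100, 0b001]))
```

Identical subspaces get distance 0; the distance grows as the intersection shrinks, matching the decoder condition that dim(V ∩ U) be large enough.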
The Z_4-linearity of Kerdock, Preparata, Goethals, and related codes
, 2001
Abstract

Cited by 177 (15 self)
Certain notorious nonlinear binary codes contain more codewords than any known linear code. These include the codes constructed by Nordstrom-Robinson, Kerdock, Preparata, Goethals, and Delsarte-Goethals. It is shown here that all these codes can be very simply constructed as binary images under the Gray map of linear codes over Z_4, the integers mod 4 (although this requires a slight modification of the Preparata and Goethals codes). The construction implies that all these binary codes are distance invariant. Duality in the Z_4 domain implies that the binary images have dual weight distributions. The Kerdock and ‘Preparata’ codes are duals over Z_4 (and the Nordstrom-Robinson code is self-dual), which explains why their weight distributions are dual to each other. The Kerdock and ‘Preparata’ codes are Z_4-analogues of first-order Reed-Muller and extended Hamming codes, respectively. All these codes are extended cyclic codes over Z_4, which greatly simplifies encoding and decoding. An algebraic hard-decision decoding algorithm is given for the ‘Preparata’ code and a Hadamard-transform soft-decision decoding algorithm for the Kerdock code. Binary first- and second-order Reed-Muller codes are also linear over Z_4, but extended Hamming codes of length n ≥ 32 and the …
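The Gray map mentioned in the abstract sends each Z_4 symbol to a bit pair (0 → 00, 1 → 01, 2 → 11, 3 → 10) so that Lee weight over Z_4 becomes Hamming weight over GF(2). A minimal sketch of that correspondence (variable names are ours):

```python
# Gray map and Lee weights for Z4 symbols.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}
LEE = {0: 0, 1: 1, 2: 2, 3: 1}

def gray_map(word):
    """Binary image of a Z4 word (twice the length)."""
    return tuple(bit for s in word for bit in GRAY[s])

def lee_weight(word):
    """Lee weight of a Z4 word."""
    return sum(LEE[s] for s in word)

word = (0, 1, 2, 3)
image = gray_map(word)
# Lee weight of the Z4 word equals Hamming weight of its image.
print(lee_weight(word), sum(image))
```

This weight-preserving property is what makes the binary images of Z_4-linear codes distance invariant, as the abstract notes.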
A rank-metric approach to error control in random network coding
 IEEE Transactions on Information Theory
Abstract

Cited by 167 (12 self)
It is shown that the error control problem in random network coding can be reformulated as a generalized decoding problem for rank-metric codes. This result allows many of the tools developed for rank-metric codes to be applied to random network coding. In the generalized decoding problem induced by random network coding, the channel may supply partial information about the error in the form of erasures (knowledge of an error location but not its value) and deviations (knowledge of an error value but not its location). For Gabidulin codes, an important family of maximum rank distance codes, an efficient decoding algorithm is proposed that can fully exploit the correction capability of the code; namely, it can correct any pattern of ε errors, μ erasures and δ deviations provided 2ε + μ + δ ≤ d − 1, where d is the minimum rank distance of the code. Our approach is based on the coding theory for subspaces introduced by Koetter and Kschischang and can be seen as a practical way to construct codes in that context.
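As a sketch under our own GF(2) representation (matrices as lists of row ints, not the paper's notation), the rank distance between two matrices is the rank of their difference, and the correction condition from the abstract is a one-line check:

```python
def gf2_rank(rows):
    """Rank over GF(2) of bit-vector rows (Gaussian elimination)."""
    pivots = {}
    rank = 0
    for row in rows:
        cur = row
        while cur:
            lead = cur.bit_length() - 1
            if lead in pivots:
                cur ^= pivots[lead]
            else:
                pivots[lead] = cur
                rank += 1
                break
    return rank

def rank_distance(X, Y):
    """Rank distance over GF(2): rank of the row-wise XOR (X - Y)."""
    return gf2_rank([x ^ y for x, y in zip(X, Y)])

def correctable(errors, erasures, deviations, d_min):
    """Condition from the abstract: 2ε + μ + δ <= d - 1."""
    return 2 * errors + erasures + deviations <= d_min - 1

print(rank_distance([0b110, 0b011], [0b110, 0b000]))  # differ in one row
```

Erasures and deviations each cost half as much as a full error, which is why they enter the bound with coefficient 1 rather than 2.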
Constructive and destructive facets of Weil descent on elliptic curves
 JOURNAL OF CRYPTOLOGY
, 2002
A taxonomy of pairing-friendly elliptic curves
, 2006
Abstract

Cited by 110 (11 self)
Elliptic curves with small embedding degree and large prime-order subgroup are key ingredients for implementing pairing-based cryptographic systems. Such “pairing-friendly” curves are rare and thus require specific constructions. In this paper we give a single coherent framework that encompasses all of the constructions of pairing-friendly elliptic curves currently existing in the literature. We also include new constructions of pairing-friendly curves that improve on the previously known constructions for certain embedding degrees. Finally, for all embedding degrees up to 50, we provide recommendations as to which pairing-friendly curves to choose to best satisfy a variety of performance and security requirements.
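The embedding degree mentioned above is, for a curve over F_q with a subgroup of prime order r, the smallest k with r | q^k − 1 (equivalently, the multiplicative order of q modulo r). A small illustrative computation, with a bound `k_max` of our choosing:

```python
def embedding_degree(q, r, k_max=100):
    """Multiplicative order of q mod r: smallest k with r | q^k - 1."""
    qk = 1
    for k in range(1, k_max + 1):
        qk = qk * q % r
        if qk == 1:
            return k
    raise ValueError("order of q mod r exceeds k_max")

# q = 5, r = 3: 5 = 2 (mod 3), 5^2 = 1 (mod 3), so k = 2.
print(embedding_degree(5, 3))
```

"Pairing-friendly" means k is small enough that arithmetic in F_{q^k} is feasible yet large enough for security, which is why the paper's recommendations are indexed by embedding degree.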
Computing with Very Weak Random Sources
, 1994
Abstract

Cited by 86 (7 self)
For any fixed δ > 0, we show how to simulate RP algorithms in time n^{O(log n)} using the output of a δ-source with min-entropy R^δ. Such a weak random source is asked once for R bits; it outputs an R-bit string such that any string has probability at most 2^{-R^δ}. If δ > 1 − 1/(k + 1), our BPP simulations take time n^{O(log^{(k)} n)} (log^{(k)} is the logarithm iterated k times). We also give a polynomial-time BPP simulation using Chor-Goldreich sources of min-entropy R^{Ω(1)}, which is optimal. We present applications to time-space tradeoffs, expander constructions, and the hardness of approximation. Also of interest is our randomness-efficient Leftover Hash Lemma, found independently by Goldreich & Wigderson.
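The min-entropy notion underlying the δ-source is H_∞ = −log2 of the most likely outcome. A tiny sketch of that definition and of the δ-source condition stated above (function names are ours):

```python
import math

def min_entropy(probs):
    """H_inf of a distribution: -log2 of its largest probability."""
    return -math.log2(max(probs))

def is_delta_source(probs, R, delta):
    """Check the abstract's condition: every R-bit string has
    probability at most 2**(-R**delta)."""
    return max(probs) <= 2 ** -(R ** delta)

# Uniform over 4 outcomes: min-entropy exactly 2 bits.
print(min_entropy([0.25, 0.25, 0.25, 0.25]))
```

A uniform R-bit source has min-entropy R; a δ-source only guarantees the much weaker R^δ, which is what makes simulating BPP with it nontrivial.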
Subquadratic-time factoring of polynomials over finite fields
 Math. Comp
, 1998
Abstract

Cited by 79 (11 self)
New probabilistic algorithms are presented for factoring univariate polynomials over finite fields. The algorithms factor a polynomial of degree n over a finite field of constant cardinality in time O(n^1.815). Previous algorithms required time Θ(n^{2+o(1)}). The new algorithms rely on fast matrix multiplication techniques. More generally, to factor a polynomial of degree n over the finite field F_q with q elements, the algorithms use O(n^1.815 log q) arithmetic operations in F_q. The new “baby-step/giant-step” techniques used in our algorithms also yield new fast practical algorithms at superquadratic asymptotic running time, and subquadratic-time methods for manipulating normal bases of finite fields.
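One standard building block in such factoring algorithms (shown as a sketch, not the paper's subquadratic method) is taking gcd(f, x^q − x), which isolates the product of f's distinct linear factors over F_q. Over GF(2), polynomials pack neatly into Python ints, with bit i the coefficient of x^i:

```python
# Polynomials over GF(2) as ints: bit i = coefficient of x^i.

def pmul(a, b):
    """Carry-less (GF(2)) polynomial product."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, m):
    """Remainder of a divided by m over GF(2)."""
    dm = m.bit_length()
    while a.bit_length() >= dm:
        a ^= m << (a.bit_length() - dm)
    return a

def pgcd(a, b):
    """Euclidean gcd of GF(2) polynomials."""
    while b:
        a, b = b, pmod(a, b)
    return a

f = 0b1010         # x^3 + x = x * (x+1)^2
x2_plus_x = 0b110  # x^2 + x, i.e. x^q - x for q = 2
print(bin(pgcd(f, x2_plus_x)))  # product of distinct linear factors
```

Here the gcd recovers x(x+1) = x^2 + x, stripping the repeated factor. The paper's contribution is making the expensive modular-composition steps of this pipeline subquadratic via fast matrix multiplication.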
Reliability mechanisms for very large storage systems
 IN PROCEEDINGS OF THE 20TH IEEE / 11TH NASA GODDARD CONFERENCE ON MASS STORAGE SYSTEMS AND TECHNOLOGIES
, 2003
Abstract

Cited by 78 (21 self)
Reliability and availability are increasingly important in large-scale storage systems built from thousands of individual storage devices. Large systems must survive the failure of individual components; in systems with thousands of disks, even infrequent failures are likely in some device. We focus on two types of errors: nonrecoverable read errors and drive failures. We discuss mechanisms for detecting and recovering from such errors, introducing improved techniques for detecting errors in disk reads and fast recovery from disk failure. We show that simple RAID cannot guarantee sufficient reliability; our analysis examines the tradeoffs among other schemes between system availability and storage efficiency. Based on our data, we believe that two-way mirroring should be sufficient for most large storage systems. For those that need very high reliability, we recommend either three-way mirroring or mirroring combined with RAID.
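The mirroring trade-off above can be sketched with a toy independent-failure model (our simplification, not the paper's analysis): data is lost only if every replica fails before repair, so loss probability shrinks geometrically while usable capacity shrinks linearly.

```python
def loss_probability(p_fail, replicas):
    """P(all replicas of one object fail) assuming independent
    failures within one repair window -- a toy model."""
    return p_fail ** replicas

def storage_efficiency(replicas):
    """Fraction of raw capacity that holds unique data."""
    return 1 / replicas

p = 0.01  # assumed per-replica failure probability per repair window
for k in (1, 2, 3):
    print(f"{k}-way: loss={loss_probability(p, k):.0e} "
          f"efficiency={storage_efficiency(k):.2f}")
```

Even this crude model shows why two-way mirroring suffices for most systems while three-way mirroring buys another factor of p in reliability at a 17-point efficiency cost.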
Towards 3-Query Locally Decodable Codes of Subexponential Length
, 2008
Abstract

Cited by 75 (7 self)
A q-query Locally Decodable Code (LDC) encodes an n-bit message x as an N-bit codeword C(x), such that one can probabilistically recover any bit x_i of the message by querying only q bits of the codeword C(x), even after some constant fraction of codeword bits has been corrupted. We give new constructions of three-query LDCs of vastly shorter length than that of previous constructions. Specifically, given any Mersenne prime p = 2^t − 1, we design three-query LDCs of length N = exp(O(n^{1/t})), for every n. Based on the largest known Mersenne prime, this translates to a length of less than exp(O(n^{10^{-7}})), compared to exp(O(n^{1/2})) in the previous constructions. It has often been conjectured that there are infinitely many Mersenne primes. Under this conjecture, our constructions yield three-query locally decodable codes of length N = exp(n^{O(1/log log n)}) for infinitely many n. We also obtain analogous improvements for Private Information Retrieval (PIR) schemes. We give 3-server PIR schemes with communication complexity of O(n^{10^{-7}}) to access an n-bit database, compared to the previous best scheme with complexity O(n^{1/5.25}). Assuming again that there are infinitely many Mersenne primes, we get 3-server PIR schemes of communication complexity n^{O(1/log log n)} for infinitely many n. Previous families of LDCs and PIR schemes were based on the properties of low-degree multivariate polynomials over finite fields. Our constructions are completely different and are obtained by constructing a large number of vectors in a small-dimensional vector space whose inner products are restricted to lie in an algebraically nice set.
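The local-decoding idea is easiest to see in the classic 2-query Hadamard code (shown here only to illustrate the principle; it is exponentially long and is not this paper's construction): C(x)[a] = ⟨x, a⟩ mod 2, and x_i is recovered from two positions because C(a) ⊕ C(a ⊕ e_i) = x_i.

```python
import random

def hadamard_encode(x_bits):
    """C(x)[a] = <x, a> mod 2 for every a in {0,1}^n (length 2^n)."""
    n = len(x_bits)
    return [sum(xi & ((a >> i) & 1) for i, xi in enumerate(x_bits)) % 2
            for a in range(2 ** n)]

def local_decode(codeword, n, i):
    """Recover x_i from just two codeword positions:
    C(a) xor C(a xor e_i) = x_i, for a random a."""
    a = random.randrange(2 ** n)
    b = a ^ (1 << i)
    return codeword[a] ^ codeword[b]

x = [1, 0, 1]
C = hadamard_encode(x)
print([local_decode(C, 3, i) for i in range(3)])
```

Since `a` is uniform, each query individually looks random, so a constant fraction of corrupted positions only fails the decoder with small probability; shortening N from 2^n toward exp(n^{1/t}) is precisely this paper's contribution.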
Efficient Decoding of Reed-Solomon Codes Beyond Half the Minimum Distance
 IEEE Transactions on Information Theory
, 2000
Abstract

Cited by 71 (0 self)
A list decoding algorithm is presented for [n, k] Reed-Solomon (RS) codes over GF(q), which is capable of correcting more than ⌊(n−k)/2⌋ errors. Based on a previous work of Sudan, an extended key equation (EKE) is derived for RS codes, which reduces to the classical key equation when the number of errors is limited to ⌊(n−k)/2⌋. Generalizing Massey's algorithm that finds the shortest recurrence that generates a given sequence, an algorithm is obtained for solving the EKE in time complexity O(ℓ·(n−k)^2), where ℓ is a design parameter, typically a small constant, which is an upper bound on the size of the list of decoded codewords (the case ℓ = 1 corresponds to classical decoding of up to ⌊(n−k)/2⌋ errors, where the decoding ends with at most one codeword). This improves on the time complexity O(n^3) needed for solving the equations of Sudan's algorithm by a naive Gaussian elimination. The polynomials found by solving the EKE are then used to reconstruct…
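To see how far "beyond half the minimum distance" list decoding can reach, compare the classical radius ⌊(n−k)/2⌋ with the approximate radius of Sudan-style list decoding, roughly n − √(2kn) at low rate (a simplified form of the bound, used here only for illustration):

```python
import math

def unique_radius(n, k):
    """Classical RS decoding corrects up to floor((n-k)/2) errors."""
    return (n - k) // 2

def sudan_radius(n, k):
    """Approximate error count reachable by Sudan-style list
    decoding at low rate: about n - sqrt(2*k*n)."""
    return int(n - math.sqrt(2 * k * n))

n, k = 32, 4
print(unique_radius(n, k), sudan_radius(n, k))
```

For the low-rate [32, 4] code the list decoder's radius exceeds the unique-decoding bound, which is exactly the regime where the EKE-based algorithm above, with its O(ℓ·(n−k)^2) cost, pays off over naive O(n^3) Gaussian elimination.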