Results 11–20 of 332
Low-density parity-check codes based on finite geometries: A rediscovery and new results
IEEE Trans. Inform. Theory, 2001
Abstract

Cited by 119 (4 self)
This paper presents a geometric approach to the construction of low-density parity-check (LDPC) codes. Four classes of LDPC codes are constructed based on the lines and points of Euclidean and projective geometries over finite fields. Codes of these four classes have good minimum distances and their Tanner graphs have girth 6. Finite-geometry LDPC codes can be decoded in various ways, ranging from low to high decoding complexity and from reasonably good to very good performance. They perform very well with iterative decoding. Furthermore, they can be put in either cyclic or quasi-cyclic form. Consequently, their encoding can be achieved in linear time and implemented with simple feedback shift registers. This advantage is not shared by other LDPC codes in general and is important in practice. Finite-geometry LDPC codes can be extended and shortened in various ways to obtain other good LDPC codes. Several techniques of extension and shortening are presented. Long extended finite-geometry LDPC codes have been constructed and they achieve a performance only a few tenths of a decibel away from the Shannon theoretical limit with iterative decoding.
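The linear-time, shift-register-friendly encoding claimed for cyclic codes can be illustrated with systematic encoding of a toy cyclic code: the parity bits are the remainder of x^r · m(x) divided by the generator polynomial g(x) over GF(2). The (7,4) code with g(x) = x^3 + x + 1 below is only an illustration, not one of the paper's finite-geometry codes.

```python
def encode_cyclic(msg_bits, gen_bits):
    """Systematic cyclic encoding over GF(2): the parity bits are the
    remainder of x^r * m(x) divided by the generator polynomial g(x),
    exactly the computation an r-stage feedback shift register performs."""
    r = len(gen_bits) - 1
    buf = list(msg_bits) + [0] * r      # coefficients of x^r * m(x)
    for i in range(len(msg_bits)):      # long division over GF(2)
        if buf[i]:
            for j, g in enumerate(gen_bits):
                buf[i + j] ^= g
    return list(msg_bits) + buf[-r:]    # message followed by parity

# Toy (7,4) cyclic code with g(x) = x^3 + x + 1 (bits high degree first):
codeword = encode_cyclic([1, 0, 0, 0], [1, 0, 1, 1])
```

Because the division touches each message bit once and the register has only r stages, the cost is linear in the block length, which is the practical advantage the abstract emphasizes.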
New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities
IEEE Transactions on Information Theory, 1977
Abstract

Cited by 100 (0 self)
Abstract: With the Delsarte-MacWilliams inequalities as a starting point, an upper bound is obtained on the rate of a binary code as a function of its minimum distance. This upper bound is asymptotically less than Levenshtein's bound, and so also Elias's.
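For context, the bound in question (often called the first MRRW or linear-programming bound) is usually stated as follows, with δ the relative minimum distance and h the binary entropy function; this restatement is from standard coding-theory references, not from the snippet above:

```latex
R(\delta) \;\le\; h\!\left(\tfrac{1}{2} - \sqrt{\delta(1-\delta)}\right),
\qquad 0 \le \delta \le \tfrac{1}{2},
\qquad h(x) = -x\log_2 x - (1-x)\log_2(1-x).
```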
Fast Key Exchange with Elliptic Curve Systems
1995
Abstract

Cited by 99 (2 self)
The Diffie-Hellman key exchange algorithm can be implemented using the group of points on an elliptic curve over the field F_{2^n}. A software version of this using n = 155 can be optimized to achieve computation rates that are significantly faster than non-elliptic-curve versions with a similar level of security. The fast computation of reciprocals in F_{2^n} is the key to the highly efficient implementation described here. March 31, 1995, Department of Computer Science, The University of Arizona, Tucson, AZ. 1 Introduction: The Diffie-Hellman key exchange algorithm [10] is a very useful method for initiating a conversation between two previously unintroduced parties. It relies on exponentiation in a large group, and the software implementation of the group operation is usually computationally intensive. The algorithm has been proposed as an Internet standard [13], and the benefit of an efficient implementation would be that it could be widely deployed across a variety of platforms, greatl...
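The protocol structure the abstract describes can be sketched with the classical multiplicative-group version of Diffie-Hellman; the paper's contribution is to replace this group with the points of an elliptic curve over F_{2^n} (n = 155). The prime and base below are illustrative toy values, far too small for real security.

```python
import secrets

# Toy Diffie-Hellman in the multiplicative group mod a prime (illustration only;
# the paper uses elliptic-curve points over F_{2^155} instead of this group).
p = 2**61 - 1              # a Mersenne prime, chosen only for the sketch
g = 3                      # illustrative base element

a = secrets.randbelow(p - 2) + 1    # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1    # Bob's secret exponent

A = pow(g, a, p)           # Alice transmits A = g^a mod p
B = pow(g, b, p)           # Bob transmits B = g^b mod p

# Each side combines its own secret with the other's public value:
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob   # both hold g^(ab) mod p
```

The expensive step on both sides is the group exponentiation, which is why a faster group operation (the elliptic-curve point arithmetic, driven by fast reciprocals in F_{2^n}) speeds up the whole exchange.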
A note on the stochastic realization problem
Hemisphere Publishing Corporation, 1976
Abstract

Cited by 98 (23 self)
Abstract. Given a mean square continuous stochastic vector process y with stationary increments and a rational spectral density Φ such that Φ(∞) is finite and nonsingular, consider the problem of finding all minimal (wide sense) Markov representations (stochastic realizations) of y. All such realizations are characterized and classified with respect to deterministic as well as probabilistic properties. It is shown that only certain realizations (internal stochastic realizations) can be determined from the given output process y. All others (external stochastic realizations) require that the probability space be extended with an exogenous random component. A complete characterization of the sets of internal and external stochastic realizations is provided. It is shown that the state process of any internal stochastic realization can be expressed in terms of two steady-state Kalman-Bucy filters, one evolving forward in time over the infinite past and one backward over the infinite future. An algorithm is presented which generates families of external realizations defined on the same probability space and totally ordered with respect to state covariances.
A Prototype Implementation of Archival Intermemory
In Proceedings of the 4th ACM Conference on Digital Libraries, 1999
Abstract

Cited by 96 (1 self)
An Archival Intermemory solves the problem of highly survivable digital data storage in the spirit of the Internet. In this paper we describe a prototype implementation of Intermemory, including an overall system architecture and implementations of key system components. The result is a working Intermemory that tolerates up to 17 simultaneous node failures, and includes a Web gateway for browser-based access to data. Our work demonstrates the basic feasibility of Intermemory and represents significant progress towards a deployable system.
Towards an Archival Intermemory
In Proc. of IEEE ADL, 1998
Abstract

Cited by 88 (1 self)
We propose a self-organizing archival Intermemory: a non-commercial, subscriber-provided distributed information storage service built on the existing Internet. Given an assumption of continued growth in the memory's total size, a subscriber's participation for only a finite time can nevertheless ensure archival preservation of the subscriber's data. Information disperses through the network over time and memories become more difficult to erase as they age. The probability of losing an old memory given random node failures is vanishingly small, and an adversary would have to corrupt hundreds of thousands of nodes to destroy a very old memory. This paper presents a framework for the design of an Intermemory, and considers certain aspects of the design in greater detail. In particular, the aspects of addressing, space efficiency, and redundant coding are discussed. Keywords: Archival Storage, Distributed Redundant Databases, Electronic Publishing, Distributed Algorithms, Error ...
Good Codes based on Very Sparse Matrices
Cryptography and Coding. 5th IMA Conference, number 1025 in Lecture Notes in Computer Science, 1995
Abstract

Cited by 80 (11 self)
We present a new family of error-correcting codes for the binary symmetric channel. These codes are designed to encode a sparse source, and are defined in terms of very sparse invertible matrices, in such a way that the decoder can treat the signal and the noise symmetrically. The decoding problem involves only very sparse matrices and sparse vectors, and so is a promising candidate for practical decoding. It can be proved that these codes are `very good', in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. We give experimental results using a free energy minimization algorithm and a belief propagation algorithm for decoding, demonstrating practical performance superior to that of both Bose-Chaudhuri-Hocquenghem codes and Reed-Muller codes over a wide range of noise levels. We regret that lack of space prevents presentation of all our theoretical and experimental results. The full text of this paper may be found elsewher...
Fountain codes
IEE Communications, 2005
Abstract

Cited by 66 (0 self)
Fountain codes are record-breaking sparse-graph codes for channels with erasures – such as the internet, where files are transmitted in multiple small packets, each of which is either received without error or not received. Standard file-transfer protocols simply chop a file up into K packet-sized pieces, then repeatedly transmit each packet until it is successfully received. A back-channel is required for the transmitter to find out which packets need retransmitting. In contrast, fountain codes make packets that are random functions of the whole file. The transmitter sprays packets at the receiver without any knowledge of which packets are received. Once the receiver has received any N packets, where N is just slightly greater than the original file size K, he can recover the whole file. In this paper I review random linear fountain codes, LT codes, and raptor codes. The computational costs of the best fountain codes are astonishingly small, scaling linearly with the file size. 1 Erasure channels: Channels with erasures are of great importance. For example, files sent over the internet are chopped into packets, and each packet is either received without error or not received. Noisy channels to which good error-correcting codes have been applied also behave like erasure channels: much of the time, the error-correcting code performs perfectly; occasionally, the decoder fails, and reports that it has failed, so the receiver knows the whole packet has been lost. A simple channel model describing this situation is a q-ary erasure channel (figure 1), which has (for all inputs in the input alphabet {0, 1, 2, ..., q − 1}) a probability 1 − f of transmitting the input without error, and probability f of delivering the output '?'.
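The random linear fountain the paper reviews can be sketched in a few lines: each transmitted packet is the XOR of a random subset of the K source packets, and the receiver decodes by Gaussian elimination over GF(2) once it has collected enough packets to reach full rank. Packets are modeled as small integers and subsets as bit masks; all names here are illustrative.

```python
import random

def encode_packet(source, rng):
    """One fountain packet: the XOR of a random nonzero subset of the K
    source packets, with the subset recorded as a bit mask."""
    K = len(source)
    mask = 0
    while mask == 0:
        mask = rng.getrandbits(K)
    pkt = 0
    for i in range(K):
        if (mask >> i) & 1:
            pkt ^= source[i]
    return mask, pkt

def decode(packets, K):
    """Gauss-Jordan elimination over GF(2) on (mask, payload) rows.
    Returns the K source packets, or None while rank < K."""
    basis = [None] * K            # basis[i]: a reduced row with pivot bit i
    for mask, pkt in packets:
        for i in range(K):
            if not (mask >> i) & 1:
                continue
            if basis[i] is None:
                basis[i] = (mask, pkt)
                break
            bm, bp = basis[i]     # cancel this pivot and keep scanning
            mask ^= bm
            pkt ^= bp
    if any(b is None for b in basis):
        return None               # not yet full rank: collect more packets
    for i in reversed(range(K)):  # back-substitute to a single 1 per row
        m, p = basis[i]
        for j in range(i + 1, K):
            if (m >> j) & 1:
                m ^= basis[j][0]
                p ^= basis[j][1]
        basis[i] = (m, p)
    return [p for _, p in basis]

# A receiver holding somewhat more than K packets can usually decode:
rng = random.Random(0)
source = [0x41, 0x42, 0x43, 0x44]
received = [encode_packet(source, rng) for _ in range(8)]
recovered = decode(received, len(source))   # source, or None if unlucky
```

The O(K^3) elimination here is what LT and raptor codes improve on: their sparse, carefully chosen degree distributions bring the decoding cost down to roughly linear in the file size.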
New Algorithms for Finding Irreducible Polynomials over Finite Fields
Mathematics of Computation, 1990
Abstract

Cited by 65 (5 self)
We present a new algorithm for finding an irreducible polynomial of specified degree over a finite field. Our algorithm is deterministic, and it runs in polynomial time for fields of small characteristic. We in fact prove the stronger result that the problem of finding irreducible polynomials of specified degree over a finite field is deterministic polynomial time reducible to the problem of factoring polynomials over the prime field. 1980 Mathematics Subject Classification (1985 revision). Primary 11T06. This research was supported by National Science Foundation grants DCR-8504485 and DCR-8552596. Appeared in Mathematics of Computation 54, pp. 435-447, 1990. A preliminary version of this paper appeared in Proceedings of the 29th Annual Symposium on Foundations of Computer Science, October 1988. 1. Introduction: In this paper we present some new algorithms for finding irreducible polynomials over finite fields. Such polynomials are used to implement arithmetic in extension fields ...
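The problem being solved can be made concrete with a brute-force irreducibility test over F_2: a polynomial is irreducible iff no polynomial of degree at most half its own divides it. Polynomials are encoded as bit masks (bit i is the coefficient of x^i). This trial-division check is exponential in the degree and is only meant to illustrate the problem; the paper's algorithms are far more efficient.

```python
def poly_mod(a, b):
    """Remainder of a(x) divided by b(x) over GF(2); polynomials as bit
    masks (bit i is the coefficient of x^i)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)   # subtract a shifted copy of b
    return a

def is_irreducible(f):
    """Brute-force test: f is irreducible over F_2 iff no polynomial of
    degree 1 .. deg(f)//2 divides it. Illustration only; exponential cost."""
    d = f.bit_length() - 1
    if d < 1:
        return False
    for g in range(2, 1 << (d // 2 + 1)):     # all polys of degree 1..d//2
        if poly_mod(f, g) == 0:
            return False
    return True
```

For example, `is_irreducible(0b1011)` checks x^3 + x + 1, a standard irreducible polynomial used to build F_8.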
Error-Correcting Codes for Semiconductor Memory Applications: A State-of-the-Art Review
IBM Journal of Research and Development, 1984
Abstract

Cited by 65 (1 self)
This paper presents a state-of-the-art review of error-correcting codes for computer semiconductor memory applications. The construction of four classes of error-correcting codes appropriate for semiconductor memory designs is described, and for each class of codes the number of check bits required for commonly used data lengths is provided. The implementation aspects of error correction and error detection are also discussed, and certain algorithms useful in extending the error-correcting capability for the correction of soft errors such as alpha-particle-induced errors are examined in some detail.
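A minimal concrete instance of the single-error-correcting codes this survey covers is the classic Hamming(7,4) code, with parity bits at positions 1, 2, and 4 and a syndrome that directly names the flipped bit. Memory designs typically use wider SEC-DED variants (an extra overall parity bit adds double-error detection); this sketch is illustrative only, not a code from the paper.

```python
def hamming74_encode(d):
    """4 data bits -> 7-bit codeword; parity bits sit at positions 1, 2, 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute parity; the syndrome is the 1-based position of a single
    flipped bit (0 means no error). Returns the 4 corrected data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1   # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]
```

Three check bits protecting four data bits is the smallest case of the check-bit-versus-data-length trade-off the paper tabulates for each code class.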