Results 1-10 of 58
Iterative decoding of binary block and convolutional codes
IEEE Trans. Inform. Theory, 1996
Cited by 455 (44 self)
Iterative decoding of two-dimensional systematic convolutional codes has been termed "turbo" (de)coding. Using log-likelihood algebra, we show that any decoder can be used which accepts soft inputs (including a priori values) and delivers soft outputs that can be split into three terms: the soft channel and a priori inputs, and the extrinsic value. The extrinsic value is used as an a priori value for the next iteration. Decoding algorithms in the log-likelihood domain are given not only for convolutional codes but also for any linear binary systematic block code. The iteration is controlled by a stop criterion derived from cross entropy, which results in a minimal number of iterations. Optimal and suboptimal decoders with reduced complexity are presented. Simulation results show that very simple component codes are sufficient; block codes are appropriate for high rates, and convolutional codes for rates lower than 2/3. Any combination of block and convolutional component codes is possible. Several interleaving techniques are described. At a bit error rate (BER) of 10^-4, the performance is slightly above or around the bounds given by the cutoff rate for reasonably simple block/convolutional component codes, interleaver sizes less than 1000, and three to six iterations. Index Terms: Concatenated codes, product codes, iterative decoding, "soft-in/soft-out" decoder, "turbo" (de)coding.
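The three-term split described in this abstract can be made concrete with a toy soft-in/soft-out decoder. The sketch below is illustrative only (names are invented; BPSK over AWGN is assumed, and a repetition code stands in for a real component code): the soft output decomposes into channel LLR, a priori value, and extrinsic value, with the extrinsic term feeding the next iteration as its a priori input.

```python
# Sketch of the soft-in/soft-out split L = L_c*y + L_a + L_e for a
# length-n repetition code over BPSK/AWGN. All names are illustrative.

def channel_llr(y, noise_var):
    """LLR contributed by one BPSK observation y (bit 0 -> +1, bit 1 -> -1)."""
    return 2.0 * y / noise_var

def siso_repetition_decode(ys, l_apriori, noise_var):
    """Soft output for the single info bit of a length-n repetition code.

    For a repetition code the soft output is the sum of all channel LLRs
    plus the a priori value, so the extrinsic term for the info position
    is the sum of the *other* observations' LLRs.
    """
    lcs = [channel_llr(y, noise_var) for y in ys]
    soft_out = sum(lcs) + l_apriori
    extrinsic = soft_out - lcs[0] - l_apriori  # strip own channel + a priori
    return soft_out, extrinsic

# One "iteration": the extrinsic value would feed the next decoder as a priori.
ys = [0.9, 1.1, -0.2]          # noisy BPSK observations of the same bit
soft, ext = siso_repetition_decode(ys, l_apriori=0.0, noise_var=1.0)
```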
On the Trellis Structure of Block Codes
1995
Cited by 56 (7 self)
The problem of minimizing the vertex count at a given time index in the trellis for a general (nonlinear) code is shown to be NP-complete. Examples are provided that show that 1) the minimal trellis for a nonlinear code may not be observable, i.e., some codewords may be represented by more than one path through the trellis, and 2) minimizing the vertex count at one time index may be incompatible with minimizing the vertex count at another time index. A trellis product is defined and used to construct trellises for sum codes. Minimal trellises for linear codes are obtained by forming the product of elementary trellises corresponding to the one-dimensional subcodes generated by atomic codewords. The structure of the resulting trellis is determined solely by the spans of the atomic codewords. A correspondence between minimal linear block code trellises and configurations of non-attacking rooks on a triangular chess board is established and used to show that the number of distinct minimal li...
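The claim that the trellis structure is determined solely by the spans of the atomic codewords can be illustrated for linear codes: given a trellis-oriented generator matrix, the state-space dimension at each boundary is just the number of row spans straddling that boundary. The sketch below is illustrative, using one standard trellis-oriented form of the (7,4) Hamming code.

```python
# State-complexity profile of a minimal trellis from the row spans of a
# trellis-oriented (minimal-span) generator matrix. Per the abstract,
# the trellis structure depends only on the spans of these atomic rows.

def span(row):
    """(first, last) indices of the nonzero entries of a generator row."""
    nz = [i for i, b in enumerate(row) if b]
    return nz[0], nz[-1]

def state_dims(genmat, n):
    """Dimension of the state space at each of the n+1 trellis boundaries."""
    spans = [span(r) for r in genmat]
    # a row is "active" at boundary i when its span straddles it
    return [sum(1 for a, b in spans if a < i <= b) for i in range(n + 1)]

# Cyclic-shift generator matrix of the (7,4) Hamming code; rows have
# distinct starts and distinct ends, i.e., it is trellis-oriented.
G = [[1,1,0,1,0,0,0],
     [0,1,1,0,1,0,0],
     [0,0,1,1,0,1,0],
     [0,0,0,1,1,0,1]]
dims = state_dims(G, 7)        # state counts are 2**dim at each boundary
```

The profile [0, 1, 2, 3, 3, 2, 1, 0] reproduces the well-known 1-2-4-8-8-4-2-1 state counts of the minimal Hamming-code trellis.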
On the BCJR trellis for linear block codes
IEEE Trans. Inform. Theory, 1996
Cited by 50 (0 self)
In this semi-tutorial paper, we will investigate the computational complexity of an abstract version of the Viterbi algorithm on a trellis, and show that if the trellis has e edges, the complexity of the Viterbi algorithm is Θ(e). This result suggests that the "best" trellis representation for a given linear block code is the one with the fewest edges. We will then show that, among all trellises that represent a given code, the original trellis introduced by Bahl, Cocke, Jelinek, and Raviv in 1974, and later rediscovered by Wolf, Massey, and Forney, uniquely minimizes the edge count, as well as several other figures of merit. Following Forney and Kschischang and Sorokine, we will also discuss "trellis-oriented" or "minimal-span" generator matrices, which facilitate the calculation of the size of the BCJR trellis, as well as its actual construction.
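A minimal sketch of the Θ(e) observation: when the trellis is stored as explicit per-section edge lists, the Viterbi survivor update does a constant amount of work per edge, so total work is proportional to the edge count. The code below is an illustrative toy, not the paper's formulation.

```python
# Viterbi on an explicit edge-list trellis: each section is a list of
# (src, dst, cost) edges, and the survivor update touches each edge
# exactly once, matching the Theta(e) complexity in the abstract.

import math

def viterbi(sections, start_state=0):
    """Minimum path cost from start_state through every trellis section."""
    best = {start_state: 0.0}
    for edges in sections:
        nxt = {}
        for src, dst, cost in edges:       # one constant-time update per edge
            if src in best:
                cand = best[src] + cost
                if cand < nxt.get(dst, math.inf):
                    nxt[dst] = cand
        best = nxt
    return min(best.values())

# Toy two-section trellis: state 0 -> {0, 1} -> 0.
sections = [[(0, 0, 1.0), (0, 1, 0.2)],
            [(0, 0, 0.5), (1, 0, 0.9)]]
```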
Algorithmic Complexity in Coding Theory and the Minimum Distance Problem
1997
Cited by 34 (2 self)
We start with an overview of algorithmic-complexity problems in coding theory. We then show that the problem of computing the minimum distance of a binary linear code is NP-hard, and the corresponding decision problem is NP-complete. This constitutes a proof of the conjecture of Berlekamp, McEliece, and van Tilborg, dating back to 1978. Extensions and applications of this result to other problems in coding theory are discussed.
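Since the decision problem is NP-complete, exact minimum-distance computation in practice falls back on exhaustive enumeration, exponential in the code dimension k. A brute-force sketch (illustrative only):

```python
# Minimum distance of a small binary linear [n, k] code by enumerating
# all 2^k - 1 nonzero codewords -- exponential in k, which is exactly
# what the NP-hardness result suggests cannot be avoided in general.

from itertools import product

def min_distance(G):
    """Minimum Hamming weight over nonzero codewords generated by G."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product([0, 1], repeat=k):
        if any(msg):
            word = [sum(m * g for m, g in zip(msg, col)) % 2
                    for col in zip(*G)]
            best = min(best, sum(word))
    return best

# Systematic generator matrix of the (7,4) Hamming code, d = 3.
G_hamming = [[1,0,0,0,0,1,1],
             [0,1,0,0,1,0,1],
             [0,0,1,0,1,1,0],
             [0,0,0,1,1,1,1]]
```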
Soft decoding techniques for codes and lattices, including the Golay code and the Leech lattice
IEEE Trans. Inform. Theory, 1986
Cited by 33 (3 self)
Two kinds of algorithms are considered. 1) If C is a binary code of length n, a "soft decision" decoding algorithm for C changes an arbitrary point of R^n into a nearest codeword (nearest in Euclidean distance). 2) Similarly, a decoding algorithm for a lattice Λ in R^n changes an arbitrary point of R^n into a closest lattice point. Some general methods are given for constructing such algorithms, and are used to obtain new and faster decoding algorithms for the Gosset lattice E8, the Golay code, and the Leech lattice.
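The first kind of decoder can be stated naively in a few lines: embed the code in {+1, -1}^n and search for the closest point in Euclidean distance. The sketch below is a brute-force stand-in with a toy repetition code and invented names; the paper's contribution is precisely avoiding this exhaustive search for E8, the Golay code, and the Leech lattice.

```python
# Naive type-1 "soft decision" decoder: map a binary code into
# {+1, -1}^n (bit 0 -> +1, bit 1 -> -1) and return the codeword
# closest in Euclidean distance to an arbitrary point of R^n.

def nearest_codeword(codewords, y):
    """codewords: iterable of 0/1 tuples; y: point in R^n."""
    def dist2(c):
        return sum((yi - (1 - 2 * ci)) ** 2 for yi, ci in zip(y, c))
    return min(codewords, key=dist2)

# Length-3 repetition code as a toy stand-in for the Golay code.
code = [(0, 0, 0), (1, 1, 1)]
```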
Coarse-to-Fine Dynamic Programming
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001
Cited by 21 (0 self)
We introduce an extension of dynamic programming (DP) we call "Coarse-to-Fine Dynamic Programming" (CFDP), ideally suited to DP problems with large state spaces. CFDP uses dynamic programming to solve a sequence of coarse approximations which are lower bounds to the original DP problem. These approximations are developed by merging states in the original graph into "superstates" in a coarser graph which uses an optimistic arc cost between superstates. The approximations are designed so that when CFDP terminates, the optimal path through the original state graph has been found. CFDP leads to significant decreases in the amount of computation necessary to solve many DP problems and can, in some instances, make otherwise infeasible computations possible. CFDP generalizes to DP problems with continuous state spaces, and we offer a convergence result for this extension. The computation of the approximations requires that we bound the arc cost over all possible arcs associated with an adjacent pair of superstates; thus the feasibility of our proposed method requires the identification of such a lower bound. We demonstrate applications of this technique to optimization of functions and boundary estimation in mine recognition.
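The merge / lower-bound / refine loop can be sketched on a deliberately simplified DP: one state is chosen per stage and there are no transition costs, so each stage can be refined independently. Interval superstates and all names are illustrative, not the paper's formulation.

```python
# Stripped-down coarse-to-fine DP: choose one state per stage to
# minimize the summed state cost. Superstates are intervals of states
# scored by an optimistic min (a lower bound); superstates on the
# current best coarse path are split until that path uses only
# singletons, at which point its score is exact and hence optimal.

def cfdp(costs):
    """costs[t][s] = cost of state s at stage t; returns optimal total."""
    n = len(costs[0])
    # each stage starts as one superstate covering all states (half-open)
    parts = [[(0, n)] for _ in costs]
    while True:
        # optimistic score of an interval = min cost inside it
        best_path = [min(part, key=lambda iv: min(costs[t][iv[0]:iv[1]]))
                     for t, part in enumerate(parts)]
        if all(b - a == 1 for a, b in best_path):
            return sum(costs[t][a] for t, (a, _) in enumerate(best_path))
        for t, (a, b) in enumerate(best_path):  # refine chosen superstates
            if b - a > 1:
                mid = (a + b) // 2
                parts[t].remove((a, b))
                parts[t] += [(a, mid), (mid, b)]

costs = [[3.0, 1.0, 4.0],
         [2.0, 5.0, 0.5]]
```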
The Trellis Structure of Maximal Fixed-Cost Codes
IEEE Trans. Inform. Theory, 1996
Cited by 18 (1 self)
We show that the family of maximal fixed-cost (MFC) codes, with codeword costs defined in a right-cancellative semigroup, are rectangular, and hence admit biproper trellis presentations. Among all possible trellis presentations for a rectangular code, biproper trellises minimize a wide variety of complexity measures, including the Viterbi decoding complexity. Examples of MFC codes include such "nonlinear" codes as permutation codes, shells of constant norm in the integer lattice, and linear codes over a finite field. The intersection of two rectangular codes is another rectangular code; therefore "nonlinear" codes such as lattice shells or words of constant weight in a linear code have biproper trellis presentations. We show that every rectangular code can be interpreted as an MFC code. Applications of these results include error detection, trellis-based indexing, and soft-decision decoding.
An Efficient Algorithm for Constructing Minimal Trellises for Codes over Finite Abelian Groups
1996
Cited by 14 (2 self)
We present an efficient algorithm for computing the minimal trellis for a group code over a finite Abelian group, given a generator matrix for the code. We also show how to compute a succinct representation of the minimal trellis for such a code, and present algorithms that use this information to efficiently compute local descriptions of the minimal trellis. This extends the work of Kschischang and Sorokine, who handled the case of linear codes over fields. An important application of our algorithms is to the construction of minimal trellises for lattices. A key step in our work is handling codes over cyclic groups C_{p^a}, where p is a prime. Such a code can be viewed as a submodule over the ring Z_{p^a}. Because of the presence of zero-divisors in the ring, submodules do not share the useful properties of vector spaces. We get around this difficulty by restricting the notion of linear combination to p-linear combination, and introducing the notion of a p-generator sequence, which enjoys properties similar to those of a generator matrix for a vector space.
Soft-Decision Decoding of Reed-Muller Codes as Generalized Multiple Concatenated Codes
IEEE Trans. Inform. Theory, 1995
Cited by 13 (1 self)
In this paper, we present a new soft-decision decoding algorithm for Reed-Muller codes. It is based on the GMC decoding algorithm proposed by Schnabl and Bossert [1], which interprets Reed-Muller codes as generalized multiple concatenated codes. We extend the GMC algorithm to list decoding (LGMC). As a result, a soft-decision maximum-likelihood (SDML) decoding algorithm for the first-order Reed-Muller codes is obtained. Moreover, the performance achieved with LGMC for Reed-Muller codes of higher order is considerably better than with GMC. In particular, for the Reed-Muller codes of length ¢¡¤ £, quasi-SDML decoding performance is obtained at a computational complexity that is by far less than optimum decoding using the syndrome trellis [2]. Simulations also show that for Reed-Muller codes up to length 1024, the performance of LGMC decoding is more than 1 dB superior to conventional GMC decoding.
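The SDML decoder for first-order Reed-Muller codes that LGMC builds on is classically realized with a fast Hadamard transform: correlate the soft input against all affine Boolean functions at once and pick the coefficient of largest magnitude. The sketch below follows that standard construction, not the paper's own text.

```python
# Soft-decision ML decoding of RM(1, m) via the fast Hadamard (Walsh)
# transform. Input is a length-2^m vector of reliabilities (positive
# for bit 0, negative for bit 1); the transform coefficient of largest
# magnitude identifies the closest codeword or its complement.

def fht(v):
    """Fast Hadamard transform of a length-2^m sequence (returns a list)."""
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

def rm1_ml_decode(y):
    """Return the ML RM(1, m) codeword (as a 0/1 list) for soft input y."""
    t = fht(y)
    k = max(range(len(t)), key=lambda i: abs(t[i]))
    comp = 1 if t[k] < 0 else 0        # negative peak -> complemented word
    # codeword bit at position x is parity(k & x) + comp (mod 2)
    return [(bin(k & x).count("1") + comp) % 2 for x in range(len(y))]
```

For example, a noisy all-zeros word such as [0.8, 1.1, 0.9, 1.2] peaks at coefficient 0 and decodes to the all-zeros codeword.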