Results 1–10 of 112
Iterative decoding of binary block and convolutional codes
 IEEE Trans. Inform. Theory
, 1996
Cited by 600 (43 self)
Abstract: Iterative decoding of two-dimensional systematic convolutional codes has been termed "turbo" (de)coding. Using log-likelihood algebra, we show that any decoder can be used which accepts soft inputs, including a priori values, and delivers soft outputs that can be split into three terms: the soft channel and a priori inputs, and the extrinsic value. The extrinsic value is used as an a priori value for the next iteration. Decoding algorithms in the log-likelihood domain are given not only for convolutional codes but also for any linear binary systematic block code. The iteration is controlled by a stop criterion derived from cross entropy, which results in a minimal number of iterations. Optimal and suboptimal decoders with reduced complexity are presented. Simulation results show that very simple component codes are sufficient: block codes are appropriate for high rates, and convolutional codes for rates below 2/3. Any combination of block and convolutional component codes is possible. Several interleaving techniques are described. At a bit error rate (BER) of 10^-4, the performance is slightly above or around the bounds given by the cutoff rate for reasonably simple block/convolutional component codes, interleaver sizes less than 1000, and three to six iterations. Index Terms: Concatenated codes, product codes, iterative decoding, "soft-in/soft-out" decoder, "turbo" (de)coding.
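The additive split of the soft output into channel, a priori, and extrinsic terms can be illustrated with a toy soft-in/soft-out decoder for a single parity-check code, using the exact box-plus combination of log-likelihood ratios. This is a minimal sketch, not the paper's algorithm; the function names and example values are assumptions of mine:

```python
import math

def boxplus(l1, l2):
    """LLR of the XOR of two bits with input LLRs l1, l2 (exact box-plus)."""
    return 2.0 * math.atanh(math.tanh(l1 / 2.0) * math.tanh(l2 / 2.0))

def spc_extrinsic(llrs):
    """Extrinsic LLRs for a single parity-check code: for bit i, the
    box-plus combination of all the *other* bits' input LLRs."""
    out = []
    for i in range(len(llrs)):
        acc = None
        for j, l in enumerate(llrs):
            if j != i:
                acc = l if acc is None else boxplus(acc, l)
        out.append(acc)
    return out

L_c = [1.2, -0.4, 2.0]                    # soft channel inputs (assumed values)
L_a = [0.0, 0.0, 0.0]                     # a priori values (first iteration)
L_in = [c + a for c, a in zip(L_c, L_a)]
L_e = spc_extrinsic(L_in)                 # extrinsic values
# soft output = soft channel input + a priori input + extrinsic value
L_out = [i + e for i, e in zip(L_in, L_e)]
```

In an iterative scheme, L_e would be handed to the other component decoder as its a priori input for the next iteration.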
On the BCJR trellis for linear block codes
 IEEE Trans. Inform. Theory
, 1996
Cited by 72 (0 self)
Abstract: In this semi-tutorial paper, we investigate the computational complexity of an abstract version of the Viterbi algorithm on a trellis, and show that if the trellis has e edges, the complexity of the Viterbi algorithm is Θ(e). This result suggests that the "best" trellis representation for a given linear block code is the one with the fewest edges. We then show that, among all trellises that represent a given code, the original trellis introduced by Bahl, Cocke, Jelinek, and Raviv in 1974, and later rediscovered by Wolf, Massey, and Forney, uniquely minimizes the edge count, as well as several other figures of merit. Following Forney and Kschischang and Sorokine, we also discuss "trellis-oriented" or "minimal-span" generator matrices, which facilitate both the calculation of the size of the BCJR trellis and its actual construction. Index Terms: Block codes, complexity.
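That the Viterbi algorithm's cost is proportional to the edge count can be seen in a minimal implementation that walks an explicit edge list and does one add-compare-select per edge. The trellis encoding, metric, and example below are my own sketch, not the paper's formulation:

```python
def viterbi(sections, metric):
    """Shortest path through a trellis.  `sections` is a list of trellis
    sections; each section is a list of edges (from_state, to_state, label).
    Start and end state are both 0.  The inner loop performs one
    add-compare-select per edge, so total work is proportional to the
    number of edges e."""
    cost, back = {0: 0.0}, []
    for t, edges in enumerate(sections):
        new_cost, new_back = {}, {}
        for u, v, label in edges:
            if u not in cost:
                continue
            c = cost[u] + metric(t, label)
            if v not in new_cost or c < new_cost[v]:
                new_cost[v], new_back[v] = c, (u, label)
        cost = new_cost
        back.append(new_back)
    state, word = 0, []
    for bk in reversed(back):           # trace the best path back
        state, label = bk[state]
        word.append(label)
    return cost[0], word[::-1]

# Trellis of the (3, 2) even-parity code; state = parity accumulated so far.
sections = [
    [(0, 0, 0), (0, 1, 1)],
    [(0, 0, 0), (0, 1, 1), (1, 1, 0), (1, 0, 1)],
    [(0, 0, 0), (1, 0, 1)],
]
r = [0.9, -0.2, 0.8]                                  # received soft values
metric = lambda t, b: (r[t] - (1 - 2 * b)) ** 2       # BPSK: bit b -> 1 - 2b
best_cost, word = viterbi(sections, metric)           # word == [0, 0, 0]
```

Here hard decisions alone would give the odd-parity word 010; the trellis search returns the nearest even-parity codeword instead.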
Soft decision decoding of linear block codes based on ordered statistics
, 1994
Cited by 65 (8 self)
Abstract: This paper presents a novel approach to soft decision decoding for binary linear block codes. The basic idea of this approach is to achieve a desired error performance progressively in a number of stages. For each decoding stage, the error performance is tightly bounded, and the decoding is terminated at the stage where either near-optimum error performance or a desired level of error performance is achieved. As a result, more flexibility in the tradeoff between performance and decoding complexity is provided. The proposed decoding is based on the reordering of the received symbols according to their reliability measure. In the paper, the statistics of the noise after ordering are evaluated. Based on these statistics, two monotonic properties which dictate the reprocessing strategy are derived. Each codeword is decoded in two steps: 1) hard-decision decoding based on reliability information and 2) reprocessing of the hard-decision-decoded codeword in successive stages until the desired performance is achieved. The reprocessing is based on the monotonic properties of the ordering and is carried out using a cost function. A new resource test tightly related to the reprocessing strategy is introduced to reduce the number of computations at each reprocessing stage. For short codes of length N ≤ 32 or medium codes with 32 < N ≤ 64 and rate R ≥ 0.6, near-optimum bit error performance is achieved in two stages of reprocessing with a computational complexity of at most O(K^2) constructed codewords, where K is the dimension of the code. For longer codes, three or more reprocessing stages are required to achieve near-optimum decoding. However, most of the coding gain is obtained within the first two reprocessing stages for error performances of practical interest. The proposed decoding algorithm applies to any binary linear code, does not require any data storage, and is well suited for parallel processing. Furthermore, the maximum number of computations required at each reprocessing stage is fixed, which prevents buffer overflow at low SNR. Index Terms: Maximum-likelihood decoding, block codes, ordered statistics, reliability information.
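The first step of the method (reorder by reliability, hard-decide the most reliable independent positions, re-encode) can be sketched as an order-0 decoder. This toy omits the higher-order reprocessing stages, the cost function, and the resource test; the helper names and the example are assumptions of mine:

```python
def osd0(G, r):
    """Order-0 ordered-statistics decoding (toy sketch): sort positions by
    reliability |r|, take the k most reliable linearly independent columns
    of G as the information set, hard-decide those bits, and re-encode.
    Real OSD then reprocesses this first candidate in further stages."""
    k, n = len(G), len(G[0])
    perm = sorted(range(n), key=lambda j: -abs(r[j]))     # most reliable first
    M = [[G[i][j] % 2 for j in perm] for i in range(k)]   # permuted generator
    basis, row = [], 0
    for j in range(n):                  # GF(2) elimination, scanning columns
        piv = next((i for i in range(row, k) if M[i][j]), None)
        if piv is None:
            continue                    # dependent column: skip to next reliable one
        M[row], M[piv] = M[piv], M[row]
        for i in range(k):
            if i != row and M[i][j]:
                M[i] = [a ^ b for a, b in zip(M[i], M[row])]
        basis.append(j)
        row += 1
        if row == k:
            break
    # hard decisions on the most-reliable-basis positions, then re-encode:
    hard = [1 if r[perm[j]] < 0 else 0 for j in basis]
    cw_perm = [sum(h * M[i][j] for i, h in enumerate(hard)) % 2 for j in range(n)]
    cw = [0] * n
    for i, p in enumerate(perm):
        cw[p] = cw_perm[i]
    return cw

# (7, 4) Hamming code; all-zero codeword sent, position 3 received wrong but unreliable
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
r = [0.8, 1.1, 0.9, -0.3, 1.2, 0.7, 1.0]
cw = osd0(G, r)                         # the unreliable error is excluded from the basis
```

Because position 3 is the least reliable, it never enters the information set, so the re-encoded candidate corrects it even though plain hard decisions would not.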
Algorithmic Complexity in Coding Theory and the Minimum Distance Problem
, 1997
Cited by 45 (2 self)
We start with an overview of algorithmic-complexity problems in coding theory. We then show that the problem of computing the minimum distance of a binary linear code is NP-hard, and that the corresponding decision problem is NP-complete. This constitutes a proof of the conjecture of Berlekamp, McEliece, and van Tilborg, dating back to 1978. Extensions and applications of this result to other problems in coding theory are discussed.
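The brute-force computation that the NP-hardness result says cannot in general be avoided is easy to state: enumerate all 2^k - 1 nonzero codewords and take the minimum Hamming weight. A sketch (example code chosen by me):

```python
from itertools import product

def min_distance(G):
    """Exhaustive minimum-distance computation for a binary linear code
    with k x n generator matrix G (a list of rows, each a list of bits).
    Enumerates all 2^k - 1 nonzero codewords, so the cost is exponential
    in k, consistent with the NP-hardness of the general problem."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue                    # skip the all-zero codeword
        cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        best = min(best, sum(cw))       # Hamming weight = distance to zero
    return best

# (7, 4) Hamming code: minimum distance 3
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
d = min_distance(G)                     # d == 3
```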
Soft decoding techniques for codes and lattices, including the Golay code and the Leech lattice
 IEEE Trans. Inform. Theory
, 1986
Cited by 44 (3 self)
Abstract: Two kinds of algorithms are considered. 1) If C is a binary code of length n, a "soft decision" decoding algorithm for C changes an arbitrary point of R^n into a nearest codeword (nearest in Euclidean distance). 2) Similarly, a decoding algorithm for a lattice Λ in R^n changes an arbitrary point of R^n into a closest lattice point. Some general methods are given for constructing such algorithms, and are used to obtain new and faster decoding algorithms for the Gosset lattice E8, the Golay code, and the Leech lattice.
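A compact example of a lattice decoding algorithm in this sense is the classic round-then-fix rule for the checkerboard lattice D_n (all integer vectors with even coordinate sum), due to Conway and Sloane. The sketch below follows that well-known rule with my own naming; tie-breaking at exact half-integers is ignored for simplicity:

```python
def closest_Dn(r):
    """Closest point of D_n = {x in Z^n : sum(x) even} to the real vector r:
    round every coordinate; if the coordinate sum is odd, re-round the
    single worst-rounded coordinate the other way (which fixes parity at
    the least possible extra cost)."""
    f = [round(x) for x in r]
    if sum(f) % 2 == 0:
        return f
    err = [x - y for x, y in zip(r, f)]
    k = max(range(len(r)), key=lambda i: abs(err[i]))   # worst-rounded coordinate
    f[k] += 1 if err[k] > 0 else -1                     # round it the other way
    return f

closest_Dn([0.6, 0.1, 0.0, 0.0])    # plain rounding gives [1,0,0,0] (odd sum);
                                    # the fix step returns [0, 0, 0, 0]
```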
CoarsetoFine Dynamic Programming
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2001
Cited by 29 (0 self)
We introduce an extension of dynamic programming (DP) we call "Coarse-to-Fine Dynamic Programming" (CFDP), ideally suited to DP problems with large state spaces. CFDP uses dynamic programming to solve a sequence of coarse approximations which are lower bounds to the original DP problem. These approximations are developed by merging states in the original graph into "superstates" in a coarser graph which uses an optimistic arc cost between superstates. The approximations are designed so that when CFDP terminates, the optimal path through the original state graph has been found. CFDP leads to significant decreases in the amount of computation necessary to solve many DP problems and can, in some instances, make otherwise infeasible computations possible. CFDP generalizes to DP problems with continuous state spaces, and we offer a convergence result for this extension. The computation of the approximations requires that we bound the arc cost over all possible arcs associated with an adjacent pair of superstates; thus the feasibility of our proposed method requires the identification of such a lower bound. We demonstrate applications of this technique to optimization of functions and boundary estimation in mine recognition.
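The coarse-to-fine idea (optimistic lower bounds on merged states, refined best-first until a fine solution is provably optimal) can be illustrated on a much simpler problem than shortest paths: one-dimensional minimization of a Lipschitz function, where an interval plays the role of a superstate. This toy is my own construction, not the authors' CFDP:

```python
import heapq

def ctf_minimize(f, lo, hi, L, tol=1e-3):
    """Toy coarse-to-fine minimization of f on [lo, hi], assuming f has
    Lipschitz constant L.  An interval is a 'superstate' whose optimistic
    cost f(mid) - L * radius lower-bounds f on the interval; intervals are
    split best-first until an interval narrower than tol has the smallest
    bound, at which point its midpoint is near-optimal."""
    mid = (lo + hi) / 2
    heap = [(f(mid) - L * (hi - lo) / 2, lo, hi)]
    while True:
        lb, a, b = heapq.heappop(heap)      # interval with the best lower bound
        m = (a + b) / 2
        if b - a < tol:
            return m, f(m)                  # within L * tol of the global minimum
        for s, t in ((a, m), (m, b)):       # refine: split into two sub-intervals
            c = (s + t) / 2
            heapq.heappush(heap, (f(c) - L * (t - s) / 2, s, t))

x, fx = ctf_minimize(lambda x: (x - 0.7) ** 2, 0.0, 2.0, L=4.0)   # x near 0.7
```

The key property mirrors CFDP's: the bound is optimistic (never above the true cost on the interval), so when a sufficiently fine "state" pops first, no coarser region can still hide a better solution.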
Maximumlikelihood soft decision decoding of BCH codes
 IEEE Trans. Inform. Theory
, 1994
Cited by 22 (0 self)
Abstract: The problem of efficient maximum-likelihood soft decision decoding of binary BCH codes is considered. It is known that those primitive BCH codes whose designed distance is one less than a power of two contain subcodes of high dimension which consist of a direct sum of several identical codes. We show that the same kind of direct-sum structure exists in all primitive BCH codes, as well as in BCH codes of composite block length. We also introduce a related structure termed the "concurring-sum", and then establish its existence in the primitive binary BCH codes. Both structures are employed to upper-bound the number of states in the proper minimal trellis of BCH codes, and to develop efficient algorithms for maximum-likelihood soft decision decoding of these codes. In [2], Forney has shown that the binary Reed-Muller codes contain direct-sum subcodes of high dimension. It is well known that certain BCH codes, namely the primitive binary BCH codes with designed distance one less than a power of two, are supercodes of punctured Reed-Muller codes. Hence these BCH codes evidently share the direct-sum structure of the RM codes.
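Why a direct-sum structure helps decoding can be checked directly: maximum-likelihood decoding of a direct sum decomposes into independent decodings of the components, so the search space shrinks from the product of the codebooks to their sum. A toy verification (codebooks and received values are my own choices):

```python
def nearest(codebook, r):
    """Nearest codeword to r in squared Euclidean distance (bit b -> 1 - 2b)."""
    return min(codebook,
               key=lambda c: sum((x - (1 - 2 * b)) ** 2 for x, b in zip(r, c)))

# Two copies of the length-3 repetition code; their direct sum is a (6, 2) code.
C = [(0, 0, 0), (1, 1, 1)]
direct_sum = [a + b for a in C for b in C]

r = [0.9, -0.2, 0.8, -1.1, 0.3, -0.6]
joint = nearest(direct_sum, r)                  # search over 4 codewords
split = nearest(C, r[:3]) + nearest(C, r[3:])   # two searches over 2 each
# joint == split: ML decoding of a direct sum decomposes componentwise
```

The papers' contribution is locating such structure inside BCH codes, where it bounds the trellis state count rather than appearing as an explicit coordinate split.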