## Turbo decoding as an instance of Pearl’s belief propagation algorithm (1998)

Venue: IEEE Journal on Selected Areas in Communications

Citations: 339 (15 self)

### BibTeX

```bibtex
@ARTICLE{Mceliece98turbodecoding,
  author  = {Robert J. McEliece and David J. C. MacKay and Jung-Fu Cheng},
  title   = {Turbo decoding as an instance of {P}earl's belief propagation algorithm},
  journal = {IEEE Journal on Selected Areas in Communications},
  year    = {1998},
  volume  = {16},
  pages   = {140--152}
}
```

### Abstract

In this paper, we will describe the close connection between the now celebrated iterative turbo decoding algorithm of Berrou et al. and an algorithm that has been well known in the artificial intelligence community for a decade, but which is relatively unknown to information theorists: Pearl’s belief propagation algorithm. We shall see that if Pearl’s algorithm is applied to the “belief network” of a parallel concatenation of two or more codes, the turbo decoding algorithm immediately results. Unfortunately, however, this belief diagram has loops, and Pearl only proved that his algorithm works when there are no loops, so an explanation of the excellent experimental performance of turbo decoding is still lacking. However, we shall also show that Pearl’s algorithm can be used to routinely derive previously known iterative, but suboptimal, decoding algorithms for a number of other error-control systems, including Gallager’s low-density parity-check codes.
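Pearl’s algorithm is easiest to see on a loop-free network, where he proved it exact. As a minimal sketch (the three-variable chain, potentials, and evidence terms below are invented for illustration), sum-product message passing recovers the same marginal as brute-force summation:

```python
import itertools
import numpy as np

# Pairwise potential between adjacent nodes and local evidence terms
# for a three-variable chain x1 - x2 - x3 (a tree, so BP is exact).
psi = np.array([[0.9, 0.1],
                [0.2, 0.8]])          # psi[a, b]: compatibility of (x_i, x_{i+1})
phi = [np.array([0.6, 0.4]),
       np.array([0.5, 0.5]),
       np.array([0.3, 0.7])]          # local evidence for x1, x2, x3

# Sum-product messages passed inward toward x2.
m_1_to_2 = psi.T @ phi[0]             # sum over x1 of phi1(x1) * psi(x1, x2)
m_3_to_2 = psi @ phi[2]               # sum over x3 of phi3(x3) * psi(x2, x3)

belief_x2 = phi[1] * m_1_to_2 * m_3_to_2
belief_x2 /= belief_x2.sum()

# Brute-force marginal of x2 for comparison.
brute = np.zeros(2)
for x1, x2, x3 in itertools.product([0, 1], repeat=3):
    brute[x2] += phi[0][x1] * phi[1][x2] * phi[2][x3] * psi[x1, x2] * psi[x2, x3]
brute /= brute.sum()

print(np.allclose(belief_x2, brute))  # message passing matches brute force on a tree
```

On the loopy belief network of a turbo code the same message-passing rules carry no such exactness guarantee, which is precisely the gap the abstract describes.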

### Citations

4567 | A tutorial on hidden Markov models and selected applications in speech recognition
- Rabiner
- 1989
Citation Context: ...s in the presence of intersymbol interference [23]. It appeared explicitly as an algorithm for tracking the states of a Markov chain in the early 1970’s [40], [4] (see also the survey papers [47] and [49]). A similar algorithm (in “min-sum” form) appeared in a 1971 paper on equalization [62]. The algorithm was connected to the optimization literature in 1987 [63]. All of this activity appears to have b...
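The forward–backward recursion this context traces can be sketched in a few lines; the two-state HMM parameters and observation sequence below are invented for illustration:

```python
import numpy as np

# Toy two-state HMM: transition matrix A, emission matrix B, initial pi.
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],      # B[s, o] = P(obs = o | state = s)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])
obs = [0, 1, 0]                # observed symbol indices

# Forward pass: alpha[t, s] = P(obs[:t+1], state_t = s)
alpha = np.zeros((len(obs), 2))
alpha[0] = pi * B[:, obs[0]]
for t in range(1, len(obs)):
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

# Backward pass: beta[t, s] = P(obs[t+1:] | state_t = s)
beta = np.ones((len(obs), 2))
for t in range(len(obs) - 2, -1, -1):
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

# Posterior state marginals ("soft decisions"), normalized per step.
gamma = alpha * beta
gamma /= gamma.sum(axis=1, keepdims=True)
print(gamma.round(3))
```

These per-symbol posteriors are exactly the kind of soft output the BCJR/APP decoder produces on a trellis.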

1403 | Near Shannon limit error-correcting coding and decoding: Turbo-codes
- Berrou, Glavieux, et al.
- 1993
Citation Context: ...opagation, error-correcting codes, iterative decoding, Pearl’s algorithm, probabilistic inference, turbo codes. I. INTRODUCTION AND SUMMARY Turbo codes, which were introduced in 1993 by Berrou et al. [10], are the most exciting and potentially important development in coding theory in many years. Many of the structural properties of turbo codes have now been put on a firm theoretical footing [7], [18]...

1343 | Local Computations with Probabilities on Graphical Structures and their Application to Expert Systems
- Lauritzen, Spiegelhalter
- 1988
Citation Context: ...s to have been completely independent of the developments in AI that led to Pearl’s algorithm! There is an “exact” inference algorithm for an arbitrary DAG, developed by Lauritzen and Spiegelhalter [34], which solves the inference problem with O(N_c · q^J) computations, where N_c is the number of cliques in the undirected triangulated “moralized” graph which can be derived from the DAG, and J is the maximu...

1276 | Optimal decoding of linear codes for minimizing symbol error rate
- Bahl, Cocke, et al.
- 1974
Citation Context: ...One of the keys to the success of turbo codes is to use component codes for which a low-complexity soft bit decision algorithm exists. For example, the BCJR or “APP” decoding algorithm [4] provides such an algorithm for any code, block or convolutional, that can be represented by a trellis. As far as is known, a code with a low-complexity optimal decoding algorithm cannot achieve hig...

1262 | Error bounds for convolutional codes and an asymptotically optimum decoding algorithm
- Viterbi
- 1967
Citation Context: ...f low-density parity-check codes, and of Gallager’s iterative decoding algorithm. With hindsight, especially in view of the recent work of Wiberg [67], it is now evident that both Viterbi’s algorithm [64], [23] and the BCJR algorithm [4] can be viewed as a kind of belief propagation. Indeed, Wiberg [66], [67] has generalized Gallager’s algorithm still further, to the point that it now resembles Pearl’...
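Viterbi’s algorithm, viewed in the “min-sum” form this context alludes to, is the same trellis sweep with sums of negative log probabilities replacing products and min replacing sum. A sketch on an invented two-state model:

```python
import numpy as np

# Toy two-state model as a min-sum problem: negative log probabilities
# turn maximum-likelihood sequence detection into a shortest-path search.
A = -np.log(np.array([[0.7, 0.3], [0.4, 0.6]]))   # transition costs
B = -np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))   # emission costs B[s, o]
pi = -np.log(np.array([0.5, 0.5]))                # initial costs
obs = [0, 1, 0]

cost = pi + B[:, obs[0]]
back = []
for t in range(1, len(obs)):
    trans = cost[:, None] + A        # trans[i, j]: best cost into i, then edge i -> j
    back.append(trans.argmin(axis=0))
    cost = trans.min(axis=0) + B[:, obs[t]]

# Trace back the minimum-cost (maximum-likelihood) state sequence.
state = int(cost.argmin())
path = [state]
for ptr in reversed(back):
    state = int(ptr[state])
    path.append(state)
path.reverse()
print(path)  # → [0, 1, 0]
```

Swapping (min, +) back to (sum, ×) in the same sweep yields the forward pass of the BCJR algorithm, which is what makes both look like instances of one message-passing scheme.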

982 | Low-Density Parity-Check Codes
- Gallager
- 1962
Citation Context: ... [19]. Gallager’s Low-Density Parity-Check Codes: The earliest suboptimal iterative decoding algorithm is that of Gallager, who devised it as a method of decoding his “low-density parity-check” codes [25], [26]. This algorithm was later generalized and elaborated upon by Tanner [61] and Wiberg [67]. But as MacKay and Neal [37]–[39] have pointed out, in the first citation of belief propagation by codin...
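Gallager’s decoder proper passes probabilistic messages between bits and parity checks; as a deliberately simplified stand-in, here is a hard-decision bit-flipping sketch. The (7,4) Hamming parity-check matrix below substitutes for Gallager’s large sparse random matrices, and the greedy flipping rule is only a caricature of his algorithm:

```python
import numpy as np

# Parity-check matrix (rows = checks, cols = bits) of the (7,4) Hamming code,
# used here only to keep the example small.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(word, max_iters=10):
    """Hard-decision bit flipping: repeatedly flip the bit involved in the
    most unsatisfied checks. A much-simplified relative of Gallager's
    iterative decoder; it can fail on some error patterns."""
    word = word.copy()
    for _ in range(max_iters):
        syndrome = H @ word % 2
        if not syndrome.any():
            return word                  # all parity checks satisfied
        fails = H.T @ syndrome           # per bit: number of failing checks it touches
        word[fails.argmax()] ^= 1        # flip the most suspicious bit
    return word

codeword = np.array([1, 0, 1, 0, 1, 0, 1])   # satisfies H @ c = 0 (mod 2)
received = codeword.copy()
received[2] ^= 1                             # introduce a single bit error
print(bit_flip_decode(received))             # recovers the codeword here
```

Gallager’s actual algorithm replaces the flip counts with probability (or log-likelihood) messages exchanged along the edges of the bit/check graph, which is what makes it a special case of belief propagation.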

969 | An Introduction to Bayesian Networks
- Jensen
- 1996
Citation Context: ...similarity between the Gallager–Tanner–Wiberg algorithm and Pearl’s algorithm, Aji and McEliece [1], [2], relying heavily on the post-Pearl improvements and simplifications in the BP algorithm [29], [30], [52], [58], [59], have devised a simple algorithm for distributing information on a graph that is a simultaneous generalization of both algorithms, and which includes several other classic algorithms...

604 | The computational complexity of probabilistic inference using Bayesian belief networks
- Cooper
- 1990
Citation Context: ...unknown random variables, which is required by the brute-force method. The efficiency of belief propagation on trees stands in sharp contrast to the situation for general DAG’s since, in 1990, Cooper [16] showed that the inference problem in general DAG’s is NP-hard. (See also [17] and [53] for more on the NP-hardness of probabilistic inference in Bayesian networks.) Since the network in Fig. 5 is a t...

566 | Good Error-Correcting Codes based on Very Sparse Matrices
- MacKay
- 1999
Citation Context: ...paper that motivated this one, is that of MacKay and Neal [37]. See also [38] and [39].) In this paper, we will review the turbo decoding algorithm as originally expounded by Berrou et al. [10], but which was perhaps explained more lucidly in [3], [18], or [50]. We will then d...

486 | Iterative decoding of binary block and convolutional codes
- Hagenauer, Offer, et al.
- 1996
Citation Context: ...exciting and potentially important development in coding theory in many years. Many of the structural properties of turbo codes have now been put on a firm theoretical footing [7], [18], [20], [21], [27], [45], and several innovative variations on the turbo theme have appeared [5], [8], [9], [12], [27], [48]. What is still lacking, however, is a satisfactory theoretical explanation of why the turbo d...

472 | A Recursive Approach to Low Complexity Codes
- Tanner
- 1981
Citation Context: ...tive decoding algorithm is that of Gallager, who devised it as a method of decoding his “low-density parity-check” codes [25], [26]. This algorithm was later generalized and elaborated upon by Tanner [61] and Wiberg [67]. But as MacKay and Neal [37]–[39] have pointed out, in the first citation of belief propagation by coding theorists, Gallager’s algorithm is a special kind of BP, with Fig. 10 as the ...

465 | Graphical Models in Applied Multivariate Statistics
- Whittaker
- 1990
Citation Context: ...the decoding problem. The density factors according to the graph of Fig. 4 if (4.4) holds. A set of random variables whose density functions factor according to a given DAG is called a directed Markov field [35], [32], [65]. For example, if the DAG is a directed chain, then the field is an ordinary Markov chain. A DAG, together with the associated random variables, is called a Bayesian belief network, or Bayesian network for short [28]. At...

391 | Statistical Inference for Probabilistic Functions of Finite State Markov Chains
- Baum, Petrie
- 1966
Citation Context: ...kward algorithm has a long and convoluted history that merits the attention of a science historian. It seems to have first appeared in the unclassified literature in two independent 1966 publications [6], [11]. Soon afterwards, it appeared in papers on MAP detection of digital sequences in the presence of intersymbol interference [23]. It appeared explicitly as an algorithm for tracking the states of...

359 | Near Shannon limit performance of low density parity check codes
- MacKay, Neal
- 1996
Citation Context: ...paper that motivated this one, is that of MacKay and Neal [37]. See also [38] and [39].) In this paper, we will review the turbo decoding algorithm as originally expounded by Berrou et al. [10], but which was perhaps explained more lucidly in [3], [18], or [50]. We will then describe P...

326 | Learning Bayesian Networks: The Combination of Knowledge and Statistical Data
- Heckerman, Geiger, et al.
- 1995
Citation Context: ...then computing the sum in (4.1) for each possible value requires exponentially many additions, which is impractical unless the number of variables and the alphabet sizes are very small. The idea behind the “Bayesian belief network” approach [28], [51] to this inference problem is to exploit any “partial independencies” which may exist among the variables to simplify belief updating. The simplest case of this is when the random variables are mutuall...

298 | Serial concatenation of interleaved codes: Performance analysis, design, and iterative decoding
- Benedetto, Montorsi, et al.
- 1996
Citation Context: ...he structural properties of turbo codes have now been put on a firm theoretical footing [7], [18], [20], [21], [27], [45], and several innovative variations on the turbo theme have appeared [5], [8], [9], [12], [27], [48]. What is still lacking, however, is a satisfactory theoretical explanation of why the turbo decoding algorithm performs as well as it does. While we cannot yet announce a solution t...

290 | Expander codes
- Sipser, Spielman
- 1996
Citation Context: ...llager’s original decoding algorithm made with powerful modern computers show that their performance is remarkably good, in many cases rivaling that of turbo codes. More recently, Sipser and Spielman [57], [60] have replaced the “random” parity-check matrices of Gallager and MacKay–Neal with deterministic parity-check matrices with desirable properties, based on “expander” graphs, and have obtained e...

260 | Approximating probabilistic inference in Bayesian belief networks is NP-hard
- Dagum, Luby
- 1993
Citation Context: ...ficiency of belief propagation on trees stands in sharp contrast to the situation for general DAG’s since, in 1990, Cooper [16] showed that the inference problem in general DAG’s is NP-hard. (See also [17] and [53] for more on the NP-hardness of probabilistic inference in Bayesian networks.) Since the network in Fig. 5 is a tree, Pearl’s algorithm will apply. However, the result is uninteresting: Pearl...

257 | Unveiling turbo codes: some results on parallel concatenated coding schemes
- Benedetto, Montorsi
Citation Context: ...al. [10], are the most exciting and potentially important development in coding theory in many years. Many of the structural properties of turbo codes have now been put on a firm theoretical footing [7], [18], [20], [21], [27], [45], and several innovative variations on the turbo theme have appeared [5], [8], [9], [12], [27], [48]. What is still lacking, however, is a satisfactory theoretical explan...

206 | Sequential updating of conditional probabilities on directed graphical structures
- Spiegelhalter, Lauritzen
- 1990
Citation Context: ...between the Gallager–Tanner–Wiberg algorithm and Pearl’s algorithm, Aji and McEliece [1], [2], relying heavily on the post-Pearl improvements and simplifications in the BP algorithm [29], [30], [52], [58], [59], have devised a simple algorithm for distributing information on a graph that is a simultaneous generalization of both algorithms, and which includes several other classic algorithms, including ...

197 | Bayesian analysis in expert systems
- Spiegelhalter, Dawid, et al.
- 1993
Citation Context: ...n the Gallager–Tanner–Wiberg algorithm and Pearl’s algorithm, Aji and McEliece [1], [2], relying heavily on the post-Pearl improvements and simplifications in the BP algorithm [29], [30], [52], [58], [59], have devised a simple algorithm for distributing information on a graph that is a simultaneous generalization of both algorithms, and which includes several other classic algorithms, including Viterb...

188 | Concatenated Codes
- Forney
- 1966
Citation Context: ...codes used by McEliece and Cheng. Serially Concatenated Codes: We have defined a turbo code to be the parallel concatenation of two or more component codes. However, as originally defined by Forney [22], concatenation is a serial operation. Recently, several researchers [8], [9] have investigated the performance of serially concatenated codes, with turbo-style decoding. This is a nontrivial variatio...

172 | Probabilistic independence networks for hidden Markov probability models - Smyth, Heckerman, et al. - 1997

156 | Random Fields and Their Applications
- Kindermann, Snell
- 1980
Citation Context: ...on of the decoding problem. The density factors according to the graph of Fig. 4 if (4.4) holds. A set of random variables whose density functions factor according to a given DAG is called a directed Markov field [35], [32], [65]. For example, if the DAG is a directed chain, then the field is an ordinary Markov chain. A DAG, together with the associated random variables, is called a Bayesian belief network, or Bayesian network for short [2...

153 | Bayesian updating in recursive graphical models by local computation
- Jensen, Lauritzen, et al.
- 1990
Citation Context: ...ed the similarity between the Gallager–Tanner–Wiberg algorithm and Pearl’s algorithm, Aji and McEliece [1], [2], relying heavily on the post-Pearl improvements and simplifications in the BP algorithm [29], [30], [52], [58], [59], have devised a simple algorithm for distributing information on a graph that is a simultaneous generalization of both algorithms, and which includes several other classic algo...

144 | Independence properties of directed Markov fields
- Lauritzen, Dawid, et al.
- 1990
Citation Context: ...retation of the decoding problem. The density factors according to the graph of Fig. 4 if (4.4) holds. A set of random variables whose density functions factor according to a given DAG is called a directed Markov field [35], [32], [65]. For example, if the DAG is a directed chain, then the field is an ordinary Markov chain. A DAG, together with the associated random variables, is called a Bayesian belief network, or Bayesian network for sh...

141 | Probabilistic Inference and Influence Diagrams
- Shachter
- 1988
Citation Context: ...n computing the sum in (4.1) for each possible value requires exponentially many additions, which is impractical unless the number of variables and the alphabet sizes are very small. The idea behind the “Bayesian belief network” approach [28], [51] to this inference problem is to exploit any “partial independencies” which may exist among the variables to simplify belief updating. The simplest case of this is when the random variables are mutually inde...

116 | Iterative decoding of compound codes by probability propagation in graphical models
- Kschischang, Frey
- 1998
Citation Context: ...e general method for devising low-complexity iterative decoding algorithms for hybrid coded systems. This is the message of the paper. (A similar message is given in the paper by Kschischang and Frey [33] in this issue.) Here is an outline of the paper. In Section II, we derive some simple but important results about, and introduce some compact notation for, “optimal symbol decision” decoding algorith...

109 | Probability propagation
- Shafer, Shenoy
- 1990
Citation Context: ...arity between the Gallager–Tanner–Wiberg algorithm and Pearl’s algorithm, Aji and McEliece [1], [2], relying heavily on the post-Pearl improvements and simplifications in the BP algorithm [29], [30], [52], [58], [59], have devised a simple algorithm for distributing information on a graph that is a simultaneous generalization of both algorithms, incl...

103 | Codes and iterative decoding on general graphs
- Wiberg, Loeliger, et al.
- 1995
Citation Context: ...pecially in view of the recent work of Wiberg [67], it is now evident that both Viterbi’s algorithm [64], [23] and the BCJR algorithm [4] can be viewed as a kind of belief propagation. Indeed, Wiberg [66], [67] has generalized Gallager’s algorithm still further, to the point that it now resembles Pearl’s algorithm very closely. (In particular, Wiberg shows that his algorithm can be adapted to produce ...

101 | Reverend Bayes on inference engines: A distributed hierarchical approach
- Pearl
- 1982
Citation Context: ...able simplifications of the probabilistic inference problem. The most important of these simplifications, for our purposes, is Pearl’s belief propagation algorithm. In the 1980’s, Kim and Pearl [31], [42]–[44] showed that if the DAG is a “tree,” i.e., if there are no loops, then there are efficient distributed algorithms for solving the inference problem. If all of the alphabets have the same size...

99 | Finding MAPs for Belief Networks is NP-hard
- Shimony
- 1994
Citation Context: ...f belief propagation on trees stands in sharp contrast to the situation for general DAG’s since, in 1990, Cooper [16] showed that the inference problem in general DAG’s is NP-hard. (See also [17] and [53] for more on the NP-hardness of probabilistic inference in Bayesian networks.) Since the network in Fig. 5 is a tree, Pearl’s algorithm will apply. However, the result is uninteresting: Pearl’s algori...

97 | A distance spectrum interpretation of turbo codes
- Perez, Seghers, et al.
- 1996
Citation Context: ...ing and potentially important development in coding theory in many years. Many of the structural properties of turbo codes have now been put on a firm theoretical footing [7], [18], [20], [21], [27], [45], and several innovative variations on the turbo theme have appeared [5], [8], [9], [12], [27], [48]. What is still lacking, however, is a satisfactory theoretical explanation of why the turbo decodin...

83 | Good codes based on very sparse matrices
- MacKay, Neal
- 1995
Citation Context: ...paper that motivated this one, is that of MacKay and Neal [37]. See also [38] and [39].) In this paper, we will review the turbo decoding algorithm as originally expounded by Berrou et al. [10], but which was perhaps explained more lucidly in [3], [18], or [50]....

73 | Illuminating the structure of code and decoder of parallel concatenated recursive systematic (turbo) codes
- Robertson
- 1994
Citation Context: ...[37]. See also [38] and [39].) In this paper, we will review the turbo decoding algorithm as originally expounded by Berrou et al. [10], but which was perhaps explained more lucidly in [3], [18], or [50]. We will then describe Pearl’s algorithm, first in its natural “AI” setting, and then show that if it is applied to the “belief network” of a turbo code, the turbo decoding algorithm immediately resu...

69 | Multiple turbo codes for deep-space communications
- Divsalar, Pollara
- 1995
Citation Context: ...[10], are the most exciting and potentially important development in coding theory in many years. Many of the structural properties of turbo codes have now been put on a firm theoretical footing [7], [18], [20], [21], [27], [45], and several innovative variations on the turbo theme have appeared [5], [8], [9], [12], [27], [48]. What is still lacking, however, is a satisfactory theoretical explanation ...

67 | Hidden Markov models: A guided tour
- Poritz
- 1988
Citation Context: ...sequences in the presence of intersymbol interference [23]. It appeared explicitly as an algorithm for tracking the states of a Markov chain in the early 1970’s [40], [4] (see also the survey papers [47] and [49]). A similar algorithm (in “min-sum” form) appeared in a 1971 paper on equalization [62]. The algorithm was connected to the optimization literature in 1987 [63]. All of this activity appears ...

52 | Transfer function bounds on the performance of turbo codes
- Divsalar, et al.
- 1995
Citation Context: ...are the most exciting and potentially important development in coding theory in many years. Many of the structural properties of turbo codes have now been put on a firm theoretical footing [7], [18], [20], [21], [27], [45], and several innovative variations on the turbo theme have appeared [5], [8], [9], [12], [27], [48]. What is still lacking, however, is a satisfactory theoretical explanation of why...

50 | A computational model for combined causal and diagnostic reasoning in inference systems
- Kim, Pearl
- 1983
Citation Context: ...nsiderable simplifications of the probabilistic inference problem. The most important of these simplifications, for our purposes, is Pearl’s belief propagation algorithm. In the 1980’s, Kim and Pearl [31], [42]–[44] showed that if the DAG is a “tree,” i.e., if there are no loops, then there are efficient distributed algorithms for solving the inference problem. If all of the alphabets have the same ...

42 | On receiver structures for channels having memory
- Chang, Hancock
- 1966
Citation Context: ...algorithm has a long and convoluted history that merits the attention of a science historian. It seems to have first appeared in the unclassified literature in two independent 1966 publications [6], [11]. Soon afterwards, it appeared in papers on MAP detection of digital sequences in the presence of intersymbol interference [23]. It appeared explicitly as an algorithm for tracking the states of a Mar...

36 | The turbo decision algorithm
- McEliece, Rodemich, et al.
- 1995
Citation Context: ...defined by (3.4) and (3.5). The celebrated “turbo decoding algorithm” [10], [50], [3] is an iterative approximation to the optimal beliefs in (3.3) or (3.4), whose performance, while demonstrably suboptimal [41], has nevertheless proved to be “nearly optimal” in an impressive array of experiments. The heart of the turbo algorithm is an iteratively defined sequence of product probability densities, defined ...

32 | Near optimum decoding of product codes
- Pyndiah, Glavieux, et al.
Citation Context: ...perties of turbo codes have now been put on a firm theoretical footing [7], [18], [20], [21], [27], [45], and several innovative variations on the turbo theme have appeared [5], [8], [9], [12], [27], [48]. What is still lacking, however, is a satisfactory theoretical explanation of why the turbo decoding algorithm performs as well as it does. While we cannot yet announce a solution to this problem, we...

24 | Abstract dynamic programming models under commutativity conditions
- Verdu, Poor
- 1987
Citation Context: ...4] (see also the survey papers [47] and [49]). A similar algorithm (in “min-sum” form) appeared in a 1971 paper on equalization [62]. The algorithm was connected to the optimization literature in 1987 [63]. All of this activity appears to have been completely independent of the developments in AI that led to Pearl’s algorithm! There is an “exact” inference algorithm for an arbitrary DAG, developed by...

23 | A Connection Between Block and Convolutional Codes
- Solomon, van Tilborg
- 1979
Citation Context: ...s from turbo-style decoding, and we are currently investigating this phenomenon. “Tail-Biting” Convolutional Codes: The class of “tail-biting” convolutional codes introduced by Solomon and van Tilborg [56] is a natural candidate for BP decoding. Briefly, a tail-biting convolutional code is a block code formed by truncating the trellis of a conventional convolutional code and then pasting the ends of th...

22 | Fusion, propagation, and structuring in belief networks - Pearl - 1986

19 | MAP bit decoding of convolutional codes
- McAdam, Welch, et al.
- 1972
Citation Context: ...in papers on MAP detection of digital sequences in the presence of intersymbol interference [23]. It appeared explicitly as an algorithm for tracking the states of a Markov chain in the early 1970’s [40], [4] (see also the survey papers [47] and [49]). A similar algorithm (in “min-sum” form) appeared in a 1971 paper on equalization [62]. The algorithm was connected to the optimization literature in 19...

10 | A general algorithm for distributing information in a graph
- Aji, McEliece
- 1997
Citation Context: ...oduce both the Gallager–Tanner algorithm and the turbo decoding algorithm.) Finally, having noticed the similarity between the Gallager–Tanner–Wiberg algorithm and Pearl’s algorithm, Aji and McEliece [1], [2], relying heavily on the post-Pearl improvements and simplifications in the BP algorithm [29], [30], [52], [58], [59], have devised a simple algorithm for distributing information on a graph that ...

10 | Unit-memory Hamming turbo codes
- Cheng, McEliece
- 1995
Citation Context: ...ructural properties of turbo codes have now been put on a firm theoretical footing [7], [18], [20], [21], [27], [45], and several innovative variations on the turbo theme have appeared [5], [8], [9], [12], [27], [48]. What is still lacking, however, is a satisfactory theoretical explanation of why the turbo decoding algorithm performs as well as it does. While we cannot yet announce a solution to this...

10 | A free energy minimization framework for inference problems in modulo 2 arithmetic
- MacKay
- 1995
Citation Context: ...rk for decoding systematic, low-density generator matrix codes. ...systematic linear block codes with low-density generator matrices [13]. (This same class of codes appeared earlier in a paper by MacKay [36] in a study of modulo-2 arithmetic inference problems, and in a paper by Spielman [60] in connection with “error reduction.”) The decoding algorithm devised by Cheng and McEliece was adapted from t...

6 | On the free distance of turbo codes and related product codes (Final Rep., Diploma project ss195, no. 6613, Swiss Federal Institute of Technology)
- Seghers
- 1995
Citation Context: ...s the inner (second) encoding, and the received word is its noisy version. Product Codes: A number of researchers have been successful with turbo-style decoding of product codes in two or more dimensions [46], [48], [54], [27]. [Fig. 12: Belief network for decoding a pair of serially concatenated codes.] In a product code, the informati...