Results 1–10 of 181
Iterative decoding of binary block and convolutional codes
 IEEE Trans. Inform. Theory
, 1996
"... Abstract Iterative decoding of twodimensional systematic convolutional codes has been termed “turbo ” (de)coding. Using loglikelihood algebra, we show that any decoder can he used which accepts soft inputsincluding a priori valuesand delivers soft outputs that can he split into three terms: the ..."
Abstract

Cited by 455 (44 self)
Iterative decoding of two-dimensional systematic convolutional codes has been termed "turbo" (de)coding. Using log-likelihood algebra, we show that any decoder can be used which accepts soft inputs (including a priori values) and delivers soft outputs that can be split into three terms: the soft channel and a priori inputs, and the extrinsic value. The extrinsic value is used as an a priori value for the next iteration. Decoding algorithms in the log-likelihood domain are given not only for convolutional codes but also for any linear binary systematic block code. The iteration is controlled by a stop criterion derived from cross entropy, which results in a minimal number of iterations. Optimal and suboptimal decoders with reduced complexity are presented. Simulation results show that very simple component codes are sufficient: block codes are appropriate for high rates, and convolutional codes for rates less than 2/3. Any combination of block and convolutional component codes is possible. Several interleaving techniques are described. At a bit error rate (BER) of 10^-4, the performance is slightly above or around the bounds given by the cutoff rate for reasonably simple block/convolutional component codes, interleaver sizes less than 1000, and three to six iterations. Index Terms—Concatenated codes, product codes, iterative decoding, "soft-in/soft-out" decoder, "turbo" (de)coding.
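The split of the soft output into channel, a priori, and extrinsic terms can be made concrete with a toy sketch (illustrative only, not the paper's decoders): two single-parity-check component codes arranged as a 2x2 product code exchange extrinsic LLRs over two iterations. All LLR values, the code layout, and the function names below are invented for the example.

```python
import math

def spc_extrinsic(llrs):
    """Exact extrinsic LLRs for one single-parity-check (SPC) component code:
    L_e[i] = 2 * atanh( prod_{j != i} tanh(L[j] / 2) )."""
    t = [math.tanh(l / 2.0) for l in llrs]
    out = []
    for i in range(len(llrs)):
        p = 1.0
        for j, tj in enumerate(t):
            if j != i:
                p *= tj
        p = max(-0.999999, min(0.999999, p))  # keep atanh finite
        out.append(2.0 * math.atanh(p))
    return out

# Toy product code: 2x2 data bits, one parity bit per row and per column,
# all-zero codeword transmitted. Channel LLRs (positive = bit 0);
# data bit (0,0) is weakly received in error.
Lc_data = [[-1.0, 2.0], [2.0, 2.0]]
Lc_rowp = [2.0, 2.0]           # row parity channel LLRs
Lc_colp = [2.0, 2.0]           # column parity channel LLRs

Er = [[0.0, 0.0], [0.0, 0.0]]  # extrinsic values from the row decoder
Ec = [[0.0, 0.0], [0.0, 0.0]]  # extrinsic values from the column decoder

for _ in range(2):  # two "turbo" iterations
    for r in range(2):  # row SPC decoder: a priori input = column extrinsic
        e = spc_extrinsic([Lc_data[r][0] + Ec[r][0],
                           Lc_data[r][1] + Ec[r][1], Lc_rowp[r]])
        Er[r][0], Er[r][1] = e[0], e[1]
    for c in range(2):  # column SPC decoder: a priori input = row extrinsic
        e = spc_extrinsic([Lc_data[0][c] + Er[0][c],
                           Lc_data[1][c] + Er[1][c], Lc_colp[c]])
        Ec[0][c], Ec[1][c] = e[0], e[1]

# Soft output = channel + a priori + extrinsic; hard decision by sign.
total = [[Lc_data[r][c] + Er[r][c] + Ec[r][c] for c in range(2)] for r in range(2)]
bits = [[0 if total[r][c] > 0 else 1 for c in range(2)] for r in range(2)]
print(bits)
```

The weakly erroneous bit is corrected because the extrinsic terms from both component decoders, computed without its own channel value, outweigh its negative channel LLR.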
The Capacity of Low-Density Parity-Check Codes Under Message-Passing Decoding
, 2001
"... In this paper, we present a general method for determining the capacity of lowdensity paritycheck (LDPC) codes under messagepassing decoding when used over any binaryinput memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chos ..."
Abstract

Cited by 363 (8 self)
In this paper, we present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code, one can achieve a probability of error that approaches zero exponentially fast in the length of the code with arbitrarily small loss in rate.) Conversely, transmitting at rates above this capacity, the probability of error is bounded away from zero by a strictly positive constant which is independent of the length of the code and of the number of iterations performed. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. [1] in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders, we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas presented in this paper are broadly applicable, and extensions of the general method to low-density parity-check codes over larger alphabets, turbo codes, and other concatenated coding schemes are outlined.
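For intuition, here is a minimal sketch of the case where this capacity computation is simplest: on the binary erasure channel, density evolution for a regular (dv, dc) LDPC ensemble reduces to a one-dimensional recursion, and the decoding threshold can be found by bisection. The function names and parameter values are our own; the paper's algorithm handles general channels.

```python
def residual_erasure(eps, dv, dc, iters=10000):
    """Density evolution for a regular (dv, dc) LDPC ensemble on the BEC:
    x_{l+1} = eps * (1 - (1 - x_l)**(dc - 1))**(dv - 1),
    the erasure probability of a variable-to-check message after l+1 iterations."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

def bp_threshold(dv, dc, tol=1e-4):
    """Largest channel erasure probability for which the recursion
    converges to zero, located by bisection."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual_erasure(mid, dv, dc) < 1e-9:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

th = bp_threshold(3, 6)  # (3,6)-regular ensemble
print(round(th, 4))
```

Rates below this threshold give vanishing residual erasure probability; above it, the recursion stalls at a nonzero fixed point, mirroring the capacity statement in the abstract.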
Unveiling Turbo Codes: Some Results on Parallel Concatenated Coding Schemes
, 1995
"... A parallel concatenated coding scheme consists of two simple constituent systematic encoders linked by an interleaver. The input bits to the first encoder are scrambled by the interleaver before entering the second encoder. The codeword of the parallel concatenated code consists of the input bits to ..."
Abstract

Cited by 248 (6 self)
A parallel concatenated coding scheme consists of two simple constituent systematic encoders linked by an interleaver. The input bits to the first encoder are scrambled by the interleaver before entering the second encoder. The codeword of the parallel concatenated code consists of the input bits to the first encoder followed by the parity check bits of both encoders. This construction can be generalized to any number of constituent codes. Parallel concatenated schemes employing two convolutional codes as constituent codes, in connection with an iterative decoding algorithm of complexity comparable to that of the constituent codes, have been recently shown to yield remarkable coding gains close to theoretical limits. They have been named, and are known as, "turbo codes". We propose a method to evaluate an upper bound to the bit error probability of a parallel concatenated coding scheme averaged over all interleavers of a given length. The analytical bounding technique is then used to s...
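The flavor of such an averaged bounding technique can be sketched as a standard union bound for ML decoding on the AWGN channel. This is a hedged illustration only: the distance spectrum below is entirely made up, and the paper's actual bound over all interleavers is derived differently.

```python
import math

def q_func(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_ber(spectrum, n_info, rate, ebn0_db):
    """Union bound on BER for ML decoding over AWGN:
    P_b <= sum_d (W_d / N) * Q( sqrt(2 * d * R * Eb/N0) ),
    where W_d is the total information weight of codewords at
    Hamming distance d (here a made-up toy spectrum)."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return sum(w / n_info * q_func(math.sqrt(2.0 * d * rate * ebn0))
               for d, w in spectrum.items())

# Hypothetical averaged distance spectrum, for illustration only.
toy_spectrum = {6: 3.0, 8: 12.0, 10: 40.0}
b1 = union_bound_ber(toy_spectrum, n_info=1000, rate=0.5, ebn0_db=1.0)
b2 = union_bound_ber(toy_spectrum, n_info=1000, rate=0.5, ebn0_db=3.0)
print(b1, b2)
```

The bound decreases monotonically with Eb/N0; the interleaver gain in the paper shows up as the 1/N scaling of the multiplicities W_d.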
"Turbo equalization": principles and new results
, 2000
"... Since the invention of \turbo codes" by Berrou et al. in 1993, the \turbo principle" has been adapted to several communication problems such as \turbo equalization", \turbo trellis coded modulation", and iterative multi user detection. In this paper we study the \turbo equalization" approach, which ..."
Abstract

Cited by 172 (19 self)
Since the invention of "turbo codes" by Berrou et al. in 1993, the "turbo principle" has been adapted to several communication problems such as "turbo equalization", "turbo trellis-coded modulation", and iterative multiuser detection. In this paper we study the "turbo equalization" approach, which can be applied to coded data transmission over channels with intersymbol interference (ISI). In the original system invented by Douillard et al., the data is protected by a convolutional code, and a receiver consisting of two trellis-based detectors is used, one for the channel (the equalizer) and one for the code (the decoder). It has been shown that iterating the equalization and decoding tasks can yield tremendous improvements in bit error rate (BER). We introduce new approaches to combining equalization based on linear filtering with the decoding. The result is a receiver that is capable of improving BER performance through iterations of equalization and decoding in a manner similar to turbo ...
The Capacity of Low-Density Parity-Check Codes under Message-Passing Decoding
 IEEE Trans. Inform. Theory
, 1998
"... In this paper we present a general method for determining the capacity of messagepassing decoders applied to low density parity check codes used over any binaryinput memoryless channel with discrete or continuous output alphabets. We show that for almost all codes in a suitably defined ensemble, t ..."
Abstract

Cited by 168 (9 self)
In this paper we present a general method for determining the capacity of message-passing decoders applied to low-density parity-check codes used over any binary-input memoryless channel with discrete or continuous output alphabets. We show that for almost all codes in a suitably defined ensemble, transmission at rates below this capacity results in error probabilities that approach zero exponentially fast in the length of the code, whereas for transmission at rates above the capacity the error probability stays bounded away from zero. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. [1] in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas pre...
Applications of Error-Control Coding
, 1998
"... An overview of the many practical applications of channel coding theory in the past 50 years is presented. The following application areas are included: deep space communication, satellite communication, data transmission, data storage, mobile communication, file transfer, and digital audio/video t ..."
Abstract

Cited by 161 (0 self)
An overview of the many practical applications of channel coding theory in the past 50 years is presented. The following application areas are included: deep space communication, satellite communication, data transmission, data storage, mobile communication, file transfer, and digital audio/video transmission. Examples, both historical and current, are given that typify the different approaches used in each application area. Although no attempt is made to be comprehensive in our coverage, the examples chosen clearly illustrate the richness, variety, and importance of error-control coding methods in modern digital applications.
Per-Survivor Processing: A General Approach to MLSE in Uncertain Environments
 IEEE Trans. Commun
, 1995
"... PerSurvivor Processing (PSP) provides a general framework for the approximation of Maximum Likelihood Sequence Estimation (MLSE) algorithms whenever the presence of unknown quantities prevents the precise use of the classical Viterbi algorithm. This principle stems from the idea that dataaided est ..."
Abstract

Cited by 131 (12 self)
Per-Survivor Processing (PSP) provides a general framework for the approximation of Maximum-Likelihood Sequence Estimation (MLSE) algorithms whenever the presence of unknown quantities prevents the precise use of the classical Viterbi algorithm. This principle stems from the idea that data-aided estimation of unknown parameters may be embedded into the structure of the Viterbi algorithm itself. Among the numerous possible applications, we concentrate here on (a) adaptive MLSE, (b) simultaneous Trellis-Coded Modulation (TCM) decoding and phase synchronization, and (c) adaptive Reduced-State Sequence Estimation (RSSE). In fact, PSP can be interpreted as a generalization of the decision-feedback techniques of RSSE to decoding in the presence of unknown parameters. A number of algorithms for the simultaneous estimation of the data sequence and unknown channel parameters are presented and compared with "conventional" techniques based on the use of tentative decisions. Results for uncoded modu...
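The embedding of parameter estimation into the Viterbi recursion can be sketched as follows (a toy illustration, not one of the paper's algorithms): BPSK detection over an unknown 2-tap ISI channel, where each survivor carries and updates its own LMS channel estimate. All parameter values (taps, step size, initial estimate, data) are invented for the example.

```python
import math

def psp_detect(y, mu=0.2, h_init=(1.0, 0.0)):
    """Viterbi detection with per-survivor LMS channel estimation.
    State = previous BPSK symbol in {+1, -1}; each survivor stores
    (path metric, its own channel estimate [h0, h1], decided path)."""
    surv = {+1: (0.0, list(h_init), []), -1: (math.inf, list(h_init), [])}
    for yk in y:
        new = {}
        for a in (+1, -1):                        # hypothesized current symbol
            best = None
            for s, (m, h, path) in surv.items():  # predecessor state
                if math.isinf(m):
                    continue
                # Prediction error using THIS survivor's channel estimate.
                e = yk - (h[0] * a + h[1] * s)
                cand = (m + e * e, h, path, e, s)
                if best is None or cand[0] < best[0]:
                    best = cand
            m, h, path, e, s = best
            # LMS update of the winning survivor's own channel estimate.
            h_new = [h[0] + mu * e * a, h[1] + mu * e * s]
            new[a] = (m, h_new, path + [a])
        surv = new
    return min(surv.values(), key=lambda t: t[0])[2]

# Noiseless toy run: true channel [1.0, 0.5], first "previous symbol" = +1.
tx = [+1, -1, -1, +1, +1, -1, +1, +1]
h_true = [1.0, 0.5]
prev = +1
y = []
for a in tx:
    y.append(h_true[0] * a + h_true[1] * prev)
    prev = a

rx = psp_detect(y)
print(rx == tx)
```

The correct survivor's estimate converges toward the true taps, so its branch errors shrink while competing paths accumulate large metrics, which is the data-aided estimation idea the abstract describes.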
Optimal and Sub-Optimal Maximum A Posteriori Algorithms Suitable for Turbo Decoding
 ETT
, 1997
"... For estimating the states or outputs of a Markov process, the symbolbysymbol maximum a posteriori (MAP) algorithm is optimal. However, this algorithm, even in its recursive form, poses technical difficulties because of numerical representation problems, the necessity of nonlinear functions and a ..."
Abstract

Cited by 111 (22 self)
For estimating the states or outputs of a Markov process, the symbol-by-symbol maximum a posteriori (MAP) algorithm is optimal. However, this algorithm, even in its recursive form, poses technical difficulties because of numerical representation problems, the need for nonlinear functions, and the large number of additions and multiplications. MAP-like algorithms operating in the logarithmic domain presented in the past solve the numerical problem and reduce the computational complexity, but are suboptimal, especially at low SNR (a common example is the Max-Log-MAP, because of its use of the max function). A further simplification yields the soft-output Viterbi algorithm (SOVA). In this paper, we present a Log-MAP algorithm that avoids the approximations in the Max-Log-MAP algorithm and hence is equivalent to the true MAP, but without its major disadvantages. We compare the (Log-)MAP, Max-Log-MAP, and SOVA from a theoretical point of view to illuminate their commonalities and differences. As a practical example forming the basis for simulations, we consider turbo decoding, where recursive systematic convolutional component codes are decoded with the three algorithms, and we also demonstrate the practical suitability of the Log-MAP by including quantization effects. The SOVA is, at 10
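The difference between Log-MAP and Max-Log-MAP comes down to the Jacobian logarithm, which Log-MAP evaluates exactly and Max-Log-MAP truncates. A small sketch (the function names are ours):

```python
import math

def max_star(a, b):
    """Jacobian logarithm used by the Log-MAP algorithm:
    log(exp(a) + exp(b)) = max(a, b) + log(1 + exp(-|a - b|))."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-Log-MAP approximation: the correction term is dropped."""
    return max(a, b)

exact = math.log(math.exp(1.0) + math.exp(2.0))
print(exact - max_star(1.0, 2.0))  # correction term makes this (near) zero
print(exact - max_log(1.0, 2.0))   # approximation error: log(1 + e**-1)
```

In practice the correction term log(1 + exp(-|a - b|)) is often stored in a small lookup table, which is what makes the exact Log-MAP implementable alongside the numerical stability of the log domain.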
Feedback-Based Error Control for Mobile Video Transmission
 Proceedings of the IEEE
, 1999
"... this paper, we discuss such lastlineofdefense 00189219/99$10.00 1999 IEEE PROCEEDINGS OF THE IEEE, VOL. 87, NO. 10, OCTOBER 1999 1707 techniques that can be used to make low bitrate video coders error resilient. We concentrate on techniques that use acknowledgment information provided by a f ..."
Abstract

Cited by 85 (10 self)
In this paper, we discuss such last-line-of-defense techniques that can be used to make low bit-rate video coders error resilient. We concentrate on techniques that use acknowledgment information provided by a feedback channel.
Minimum mean squared error equalization using a priori information
 IEEE Trans. Signal Processing
, 2002
"... Abstract—A number of important advances have been made in the area of joint equalization and decoding of data transmitted over intersymbol interference (ISI) channels. Turbo equalization is an iterative approach to this problem, in which a maximum a posteriori probability (MAP) equalizer and a MAP d ..."
Abstract

Cited by 84 (9 self)
A number of important advances have been made in the area of joint equalization and decoding of data transmitted over intersymbol interference (ISI) channels. Turbo equalization is an iterative approach to this problem, in which a maximum a posteriori probability (MAP) equalizer and a MAP decoder exchange soft information in the form of prior probabilities over the transmitted symbols. A number of reduced-complexity methods for turbo equalization have recently been introduced, in which MAP equalization is replaced with suboptimal, low-complexity approaches. In this paper, we explore a number of low-complexity soft-input/soft-output (SISO) equalization algorithms based on the minimum mean square error (MMSE) criterion. This includes the extension of existing approaches to general signal constellations and the derivation of a novel approach requiring less complexity than the MMSE-optimal solution. All approaches are qualitatively analyzed by observing the mean-square error averaged over a sequence of equalized data. We show that for the turbo equalization application, the MMSE-based SISO equalizers perform well compared with a MAP equalizer while providing a tremendous complexity reduction. Index Terms—Equalization, iterative decoding, low complexity, minimum mean square error.
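One common form of a-priori-aided linear MMSE estimation can be sketched as follows (a hedged illustration under our own assumptions: BPSK, a small block channel model, invented taps and noise level; not necessarily the exact estimator derived in the paper). The a priori LLRs set the symbol means and variances, and the MMSE filter corrects the prior mean using the observation.

```python
import math

def transpose(A):
    return [list(col) for col in zip(*A)]

def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def solve(A, b):
    """Gaussian elimination with partial pivoting for A z = b."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def mmse_siso_estimate(y, H, La, sigma2):
    """A-priori-aided linear MMSE estimate of BPSK symbols x from y = H x + n:
    xbar_k = tanh(La_k / 2), v_k = 1 - xbar_k**2,
    xhat = xbar + V H^T (H V H^T + sigma2 I)^-1 (y - H xbar)."""
    n = len(y)
    xbar = [math.tanh(l / 2.0) for l in La]
    v = [1.0 - xb * xb for xb in xbar]
    Ht = transpose(H)
    HV = [[H[i][j] * v[j] for j in range(n)] for i in range(n)]
    C = mat_mul(HV, Ht)
    for i in range(n):
        C[i][i] += sigma2
    resid = [yi - hx for yi, hx in zip(y, mat_vec(H, xbar))]
    z = solve(C, resid)
    return [xbar[i] + sum(v[i] * Ht[i][j] * z[j] for j in range(n))
            for i in range(n)]

# Toy run: 2-tap ISI channel h = [1, 0.5], length-4 block, noiseless y,
# and no a priori information (La = 0), as in a first iteration.
h = [1.0, 0.5]
x_true = [1.0, -1.0, 1.0, 1.0]
H = [[0.0] * 4 for _ in range(4)]
for i in range(4):
    H[i][i] = h[0]
    if i > 0:
        H[i][i - 1] = h[1]
y = mat_vec(H, x_true)
xhat = mmse_siso_estimate(y, H, [0.0] * 4, sigma2=0.01)
signs = [1.0 if val > 0 else -1.0 for val in xhat]
print(signs)
```

In a full turbo equalizer, extrinsic LLRs would be formed from xhat under a Gaussian assumption and exchanged with the decoder; the sketch shows only the prior-aided MMSE estimation step.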