Results 1–10 of 310
Iterative decoding of binary block and convolutional codes
 IEEE Trans. Inform. Theory
, 1996
Cited by 600 (43 self)
Abstract: Iterative decoding of two-dimensional systematic convolutional codes has been termed “turbo” (de)coding. Using log-likelihood algebra, we show that any decoder can be used which accepts soft inputs, including a priori values, and delivers soft outputs that can be split into three terms: the soft channel and a priori inputs, and the extrinsic value. The extrinsic value is used as an a priori value for the next iteration. Decoding algorithms in the log-likelihood domain are given not only for convolutional codes but also for any linear binary systematic block code. The iteration is controlled by a stop criterion derived from cross entropy, which results in a minimal number of iterations. Optimal and suboptimal decoders with reduced complexity are presented. Simulation results show that very simple component codes are sufficient: block codes are appropriate for high rates and convolutional codes for lower rates, less than 2/3. Any combination of block and convolutional component codes is possible. Several interleaving techniques are described. At a bit error rate (BER) of 10^-4 the performance is slightly above or around the bounds given by the cutoff rate for reasonably simple block/convolutional component codes, interleaver sizes less than 1000, and three to six iterations. Index Terms: Concatenated codes, product codes, iterative decoding, “soft-in/soft-out” decoder, “turbo” (de)coding.
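The three-term decomposition described in the abstract can be checked numerically on the smallest non-trivial code. The sketch below (our own illustrative code, not the paper's implementation) uses a length-3 single parity-check code: the soft output for bit 1 is exactly the soft channel input plus the a priori value plus the extrinsic value computed from the other two bits.

```python
import math

def exact_app_llr(llrs, codewords, i):
    # A posteriori LLR of bit i by brute-force marginalization over the
    # codeword list, given per-bit input LLRs (channel plus any a priori term).
    num = den = 0.0
    for cw in codewords:
        w = math.exp(sum((1 - 2 * b) * L / 2.0 for b, L in zip(cw, llrs)))
        if cw[i] == 0:
            num += w
        else:
            den += w
    return math.log(num / den)

def spc_extrinsic(L2, L3):
    # Extrinsic LLR for bit 1 of a length-3 single parity-check code:
    # the "box-plus" combination of the other two bits' LLRs.
    return 2.0 * math.atanh(math.tanh(L2 / 2.0) * math.tanh(L3 / 2.0))

# Codewords of the (3,2) single parity-check code.
SPC3 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
```

With channel LLR `Lc_y1` and a priori value `La` for bit 1, the exact a posteriori LLR equals `Lc_y1 + La + spc_extrinsic(L2, L3)`, i.e. the extrinsic term is what remains after subtracting the soft channel and a priori inputs, and it is this term that would be passed to the next iteration.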
The Capacity of Low-Density Parity-Check Codes Under Message-Passing Decoding
, 2001
Cited by 569 (9 self)
In this paper, we present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code one can achieve a probability of error that approaches zero exponentially fast in the length of the code with arbitrarily small loss in rate.) Conversely, transmitting at rates above this capacity the probability of error is bounded away from zero by a strictly positive constant which is independent of the length of the code and of the number of iterations performed. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. [1] in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders, we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas presented in this paper are broadly applicable and extensions of the general method to low-density parity-check codes over larger alphabets, turbo codes, and other concatenated coding schemes are outlined.
Unveiling Turbo Codes: Some Results on Parallel Concatenated Coding Schemes
, 1995
Cited by 315 (6 self)
A parallel concatenated coding scheme consists of two simple constituent systematic encoders linked by an interleaver. The input bits to the first encoder are scrambled by the interleaver before entering the second encoder. The codeword of the parallel concatenated code consists of the input bits to the first encoder followed by the parity check bits of both encoders. This construction can be generalized to any number of constituent codes. Parallel concatenated schemes employing two convolutional codes as constituent codes, in connection with an iterative decoding algorithm of complexity comparable to that of the constituent codes, have been recently shown to yield remarkable coding gains close to theoretical limits. They have been named, and are known as, "turbo codes". We propose a method to evaluate an upper bound to the bit error probability of a parallel concatenated coding scheme averaged over all interleavers of a given length. The analytical bounding technique is then used to s...
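The encoder structure described above is simple to sketch. The toy Python encoder below is our own illustration (not the authors' code): two identical memory-2 recursive systematic convolutional (RSC) encoders with feedback 1+D+D^2 and feedforward 1+D^2, linked by a given permutation; trellis termination is omitted for brevity.

```python
def rsc_parity(bits):
    # Parity stream of a rate-1/2 RSC encoder, memory 2,
    # feedback polynomial 1 + D + D^2, feedforward 1 + D^2.
    s1 = s2 = 0
    out = []
    for u in bits:
        fb = u ^ s1 ^ s2      # recursive feedback bit
        out.append(fb ^ s2)   # feedforward (parity) output
        s1, s2 = fb, s1       # shift the register
    return out

def turbo_encode(info, perm):
    # Parallel concatenation: the codeword is the input bits followed by
    # the parity bits of encoder 1 and of encoder 2, whose input is the
    # interleaved (scrambled) information sequence.  Overall rate ~ 1/3.
    assert sorted(perm) == list(range(len(info)))
    p1 = rsc_parity(info)
    p2 = rsc_parity([info[i] for i in perm])
    return info + p1 + p2
```

The construction generalizes exactly as the abstract says: adding a third constituent encoder with its own interleaver just appends a third parity stream.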
Applications of ErrorControl Coding
, 1998
Cited by 274 (0 self)
An overview of the many practical applications of channel coding theory in the past 50 years is presented. The following application areas are included: deep space communication, satellite communication, data transmission, data storage, mobile communication, file transfer, and digital audio/video transmission. Examples, both historical and current, are given that typify the different approaches used in each application area. Although no attempt is made to be comprehensive in our coverage, the examples chosen clearly illustrate the richness, variety, and importance of error-control coding methods in modern digital applications.
"Turbo equalization": principles and new results
, 2000
Cited by 271 (24 self)
Since the invention of "turbo codes" by Berrou et al. in 1993, the "turbo principle" has been adapted to several communication problems such as "turbo equalization", "turbo trellis-coded modulation", and iterative multiuser detection. In this paper we study the "turbo equalization" approach, which can be applied to coded data transmission over channels with intersymbol interference (ISI). In the original system invented by Douillard et al., the data is protected by a convolutional code and a receiver consisting of two trellis-based detectors is used, one for the channel (the equalizer) and one for the code (the decoder). It has been shown that iterating equalization and decoding tasks can yield tremendous improvements in bit error rate (BER). We introduce new approaches to combining equalization based on linear filtering with the decoding. The result is a receiver that is capable of improving BER performance through iterations of equalization and decoding in a manner similar to turbo ...
The capacity of low-density parity-check codes under message-passing decoding
 IEEE Trans. Inform. Theory
, 2001
Per-survivor processing: A general approach to MLSE in uncertain environments
 IEEE Trans. Communications
, 1995
Optimal and Sub-Optimal Maximum A Posteriori Algorithms Suitable for Turbo Decoding
 ETT
, 1997
Cited by 154 (26 self)
For estimating the states or outputs of a Markov process, the symbol-by-symbol maximum a posteriori (MAP) algorithm is optimal. However, this algorithm, even in its recursive form, poses technical difficulties because of numerical representation problems, the necessity of nonlinear functions and a high number of additions and multiplications. MAP-like algorithms operating in the logarithmic domain presented in the past solve the numerical problem and reduce the computational complexity, but are suboptimal especially at low SNR (a common example is the Max-Log-MAP because of its use of the max function). A further simplification yields the soft-output Viterbi algorithm (SOVA). In this paper, we present a Log-MAP algorithm that avoids the approximations in the Max-Log-MAP algorithm and hence is equivalent to the true MAP, but without its major disadvantages. We compare the (Log-)MAP, Max-Log-MAP and SOVA from a theoretical point of view to illuminate their commonalities and differences. As a practical example forming the basis for simulations, we consider Turbo decoding, where recursive systematic convolutional component codes are decoded with the three algorithms, and we also demonstrate the practical suitability of the Log-MAP by including quantization effects. The SOVA is, at 10
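The difference between the Max-Log-MAP approximation and the exact Log-MAP mentioned above comes down to a single primitive, the Jacobian logarithm. A minimal sketch (the exact form; practical decoders often replace the correction term with a small lookup table):

```python
import math

def max_star(a, b):
    # Exact Jacobian logarithm: log(e^a + e^b)
    #   = max(a, b) + log(1 + e^{-|a - b|})
    # This is the operation an exact Log-MAP decoder applies
    # when combining path metrics in the log domain.
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    # Max-Log-MAP drops the correction term, trading a small
    # loss (worst at low SNR, where |a - b| is small) for speed.
    return max(a, b)
```

The correction term is bounded by log(2) ≈ 0.693 (attained when a = b) and vanishes as |a − b| grows, which is why the approximation degrades mainly at low SNR, exactly as the abstract notes.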
Minimum mean squared error equalization using a priori information
 IEEE TRANS. SIGNAL PROCESSING
, 2002
Cited by 147 (11 self)
A number of important advances have been made in the area of joint equalization and decoding of data transmitted over intersymbol interference (ISI) channels. Turbo equalization is an iterative approach to this problem, in which a maximum a posteriori probability (MAP) equalizer and a MAP decoder exchange soft information in the form of prior probabilities over the transmitted symbols. A number of reduced-complexity methods for turbo equalization have recently been introduced in which MAP equalization is replaced with suboptimal, low-complexity approaches. In this paper, we explore a number of low-complexity soft-input/soft-output (SISO) equalization algorithms based on the minimum mean square error (MMSE) criterion. This includes the extension of existing approaches to general signal constellations and the derivation of a novel approach requiring less complexity than the MMSE-optimal solution. All approaches are qualitatively analyzed by observing the mean-square error averaged over a sequence of equalized data. We show that for the turbo equalization application, the MMSE-based SISO equalizers perform well compared with a MAP equalizer while providing a tremendous complexity reduction.
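The core idea of MMSE equalization with a priori information can be shown in a deliberately simplified scalar-channel sketch (our own reduction of the SISO filtering idea; the paper itself derives filter vectors over ISI channels). For BPSK, a prior LLR La gives the symbol mean tanh(La/2) and variance 1 − mean²; the linear MMSE estimate then shrinks the observation toward the prior mean.

```python
import math

def mmse_estimate(y, h, sigma2, La):
    # Scalar linear MMSE estimate of a BPSK symbol x in {+1, -1}
    # from y = h*x + n, n ~ N(0, sigma2), with prior LLR La.
    mu = math.tanh(La / 2.0)          # prior symbol mean E[x]
    v = 1.0 - mu * mu                 # prior symbol variance Var[x]
    k = v * h / (h * h * v + sigma2)  # MMSE gain
    return mu + k * (y - h * mu)
```

With no prior (La = 0) this collapses to the familiar h·y/(h² + σ²); with a confident prior the variance term shrinks and the estimate follows the decoder's belief rather than the noisy observation, which is exactly the soft-information exchange that drives turbo equalization.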
PPR: Partial Packet Recovery for Wireless Networks
 In ACM SIGCOMM
, 2007
Cited by 116 (6 self)
Bit errors occur in wireless communication when interference or noise overcomes the coded and modulated transmission. Current wireless protocols may use forward error correction (FEC) to correct some small number of bit errors, but generally retransmit the whole packet if the FEC is insufficient. We observe that current wireless mesh network protocols retransmit a number of packets and that most of these retransmissions end up sending bits that have already been received multiple times, wasting network capacity. To overcome this inefficiency, we develop, implement, and evaluate a partial packet recovery (PPR) system. PPR incorporates two new ideas: (1) SoftPHY, an expanded physical layer (PHY) interface that provides PHY-independent hints to higher layers about the PHY's confidence in each bit it decodes, and (2) a postamble scheme to recover data even when a packet preamble is corrupted and not decodable at the receiver. Finally, we present PP-ARQ, an asynchronous link-layer ARQ protocol built on PPR that allows a receiver to compactly encode a request for retransmission of only those bits in a packet that are likely in error. Our experimental results from a 31-node ZigBee (802.15.4) testbed that includes Telos motes with 2.4 GHz Chipcon radios and GNU Radio nodes implementing the 802.15.4 standard show that PP-ARQ increases end-to-end capacity by a factor of 2× under moderate load.
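The compact-retransmission-request idea can be sketched in a few lines. This is a hypothetical illustration only: the confidence threshold and the (start, length) run encoding below are our own choices, not the paper's PP-ARQ wire format, but they show how per-bit SoftPHY-style confidence hints let a receiver ask for just the suspect bits.

```python
def request_ranges(confidences, threshold=0.9):
    # Given one confidence value per decoded bit, collect the indices
    # whose confidence falls below the threshold, then merge consecutive
    # indices into (start, length) runs so the feedback stays compact.
    bad = [i for i, c in enumerate(confidences) if c < threshold]
    runs = []
    for i in bad:
        if runs and i == runs[-1][0] + runs[-1][1]:
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)  # extend current run
        else:
            runs.append((i, 1))                        # start a new run
    return runs
```

Because bit errors tend to cluster in bursts, the run encoding is usually far smaller than a per-bit mask, which is what makes requesting "only those bits likely in error" cheaper than retransmitting the whole packet.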