## Decoding and Equalization with Analog Non-linear Networks (1999)

Venue: European Transactions on Telecommunications (ETT)

Citations: 21 (4 self)

### BibTeX

@ARTICLE{Hagenauer99decodingand,
  author  = {Joachim Hagenauer and Elke Offer and Cyril Méasson and Matthias Mörz},
  title   = {Decoding and Equalization with Analog Non-linear Networks},
  journal = {European Transactions on Telecommunications},
  year    = {1999},
  volume  = {10},
  pages   = {659--680}
}

### Abstract

Using analog, non-linear and highly parallel networks, we attempt to perform decoding of block and convolutional codes, equalization of certain frequency-selective channels, decoding of multi-level coded modulation and reconstruction of coded PCM signals. This is in contrast to common practice where these tasks are performed by sequentially operating processors. Our advantage is that we operate fully on soft values for input and output, similar to what is done in `turbo' decoding. However, we do not have explicit iterations because the networks float freely in continuous time. The decoder has almost no latency in time because we are only restricted by the time constants from the parasitic RC values of integrated circuits. Simulation results for several simple examples are shown which, in some cases, achieve the performance of a conventional MAP detector. For more complicated codes we indicate promising solutions with more complex analog networks based on the simple ones. Furthermore,...

### Citations

1403 | Near Shannon limit error-correcting coding and decoding: Turbo-codes
- Berrou, Glavieux, et al.
- 1993
Citation Context: ...ase soft values are better than binary values, a fact already suggested by information theory. This is a first step back to analog. The big success of iterative so-called `turbo' decoding pioneered by [1] and [2] is due to the exchange of soft information between constituent decoders. `Turbo' decoding approaches Shannon's limit very closely: for a code rate of 1/2 the gap narrows to 0.5 dB. Still the pro...

1278 | Factor graphs and the sum-product algorithm
- Kschischang, Frey, et al.
- 2001
Citation Context: ...Koetter [14] and Forney [15]. Others have used a different graphical model, namely Bayesian networks, to describe the iterative decoding algorithm as a belief propagation algorithm [16], [17]. In [18] a new graphical model called `factor graphs' is presented, which subsumes Tanner graphs and Bayesian networks. Still, all these descriptions assume an algorithm with discrete timing, iterations and p...

1276 | Optimal decoding of linear codes for minimizing symbol error rate
- Bahl, Cocke, et al.
- 1974
Citation Context: ...for non-directed as well as directed variable node elements. In addition, using directed variable node elements we have an analog realization of the forward-backward algorithm, or the BCJR algorithm [34]. For the example of the m = 1, r = 1/2 convolutional code Cm1 in tail-biting form (circle size of 8 information bits) we obtain the channel values L_c y_i^(1) and L_c y_i^(2) corresponding to the inf...
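The forward-backward recursion mentioned in this excerpt can be sketched in discrete time, for contrast with the analog realization (a generic sketch of the algorithm family, not the paper's circuit; `trans` and `emit` are my own names):

```python
def forward_backward(trans, emit):
    """Generic forward-backward smoothing (the discrete-time relative of the
    BCJR recursion): trans[s][r] is the transition probability from state s to
    state r, emit[t][s] the likelihood of the observation at step t given
    state s. Returns the per-step state posteriors. Sketch only."""
    T, S = len(emit), len(trans)
    alpha = [[0.0] * S for _ in range(T)]
    beta = [[0.0] * S for _ in range(T)]
    # forward recursion (normalized for numerical stability)
    for s in range(S):
        alpha[0][s] = emit[0][s] / S
    for t in range(1, T):
        for s in range(S):
            alpha[t][s] = emit[t][s] * sum(
                alpha[t - 1][r] * trans[r][s] for r in range(S))
        z = sum(alpha[t])
        alpha[t] = [a / z for a in alpha[t]]
    # backward recursion
    beta[T - 1] = [1.0] * S
    for t in range(T - 2, -1, -1):
        for s in range(S):
            beta[t][s] = sum(
                trans[s][r] * emit[t + 1][r] * beta[t + 1][r] for r in range(S))
        z = sum(beta[t])
        beta[t] = [b / z for b in beta[t]]
    # combine into posteriors
    post = []
    for t in range(T):
        g = [alpha[t][s] * beta[t][s] for s in range(S)]
        z = sum(g)
        post.append([x / z for x in g])
    return post
```

The analog network in the paper effectively lets these two recursions settle simultaneously in continuous time instead of sweeping the trellis twice.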

1118 | Digital Communications
- Proakis
Citation Context: ...s of a receiver is the equalization of the incoming signal. The standard techniques for equalization, such as linear equalization, quantized feedback and maximum-likelihood equalization, are well known [39]. Recently, so-called `soft-in/soft-out' equalizers have been employed, which make use of the soft output values of the channel and further deliver soft outputs to the subsequent decoder. The existing...

982 | Low-Density Parity-Check Codes
- Gallager
- 1962
Citation Context: ...cision decoding algorithm for linear binary block codes. Already 37 years ago, in 1962, Gallager introduced a very efficient iterative decoding algorithm for his low-density parity-check (LDPC) codes [12], see section 5.1. Tanner graphs [13] which are connected via an interleaver have been used to explain the parallel and serial concatenated `turbo' decoding process by Wiberg, Loeliger, Koetter [14] a...

566 | Good Error-Correcting Codes based on Very Sparse Matrices
- MacKay
- 1999
Citation Context: ... (32) for the message passing from the parity check node j to the variable node x_i. Simulation results for LDPC codes, generalized LDPC codes and LDPC convolutional codes can be found in [32] and [26], respectively. A bit error rate of 10^-4 on a Gaussian channel can be achieved with a signal-to-noise ratio of about E_b/N_0 = 2.3 dB using a (n = 1008; = 3; = 6) LDPC code and of about ...
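For context on the quoted operating point, the channel reliability value fed to such soft decoders can be computed from E_b/N_0 (a standard BPSK/AWGN relation, not taken from this paper; the function name is mine):

```python
def channel_reliability(ebn0_db, code_rate):
    """L_c = 4 * R * (Eb/N0 in linear scale) for BPSK on an AWGN channel with
    unit-energy symbols; the channel L-value of a received sample y is then
    L_c * y. Hypothetical helper illustrating the standard relation."""
    ebn0_linear = 10.0 ** (ebn0_db / 10.0)
    return 4.0 * code_rate * ebn0_linear
```

At the quoted E_b/N_0 = 2.3 dB and rate 1/2, this gives L_c of roughly 3.4.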

536 | Neural computation of decisions in optimization problems
- Hopfield, Tank
- 1985
Citation Context: ...he codes. Neural networks were built which realize majority-logic decoding or Viterbi decoding [4]. Furthermore, a classical recurrent or feedback neural network, the analog Hopfield network [7], can be used as a local optimization algorithm for a special class of codes, the `balance check codes' [3]. These binary codes fulfill the restriction that in each codeword the positions checked by ...

486 | Iterative decoding of binary block and convolutional codes
- Hagenauer, Offer, et al.
- 1996
Citation Context: ...nt random variables x_1 and x_2 can therefore be written as x_1 ⊕ x_2 and x_1 · x_2, respectively. For the corresponding `soft' bits, using the real-number operation E{x_1 x_2} = E{x_1} E{x_2} [23], we obtain λ(x_1 ⊕ x_2) = λ(x_1) · λ(x_2). (3) [Figure 1: GF(2) addition element and the respective elements ...]
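The soft-bit relation in this excerpt — the expectation of a product of independent soft bits equals the product of the expectations — can be sketched numerically (a minimal illustration with my own function names, using the usual log-likelihood notation λ(x) = tanh(L(x)/2)):

```python
import math

def soft_bit(L):
    """Soft bit lambda(x) = E{x} = tanh(L/2) for a bipolar (+1/-1) bit
    with log-likelihood ratio L."""
    return math.tanh(L / 2.0)

def boxplus(L1, L2):
    """L-value of the GF(2) sum of two independent bits, using
    lambda(x1 xor x2) = lambda(x1) * lambda(x2) -- the `box-plus' rule."""
    return 2.0 * math.atanh(soft_bit(L1) * soft_bit(L2))
```

As expected for a parity check, the result is dominated by the less reliable input: boxplus(10.0, 0.5) is approximately 0.5.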

472 | A Recursive Approach to Low Complexity Codes
- Tanner
- 1981
Citation Context: ...binary block codes. Already 37 years ago, in 1962, Gallager introduced a very efficient iterative decoding algorithm for his low-density parity-check (LDPC) codes [12], see section 5.1. Tanner graphs [13] which are connected via an interleaver have been used to explain the parallel and serial concatenated `turbo' decoding process by Wiberg, Loeliger, Koetter [14] and Forney [15]. Others have used a di...

359 | Near Shannon limit performance of low density parity check codes
- MacKay, Neal
- 1996
Citation Context: ...tten for a long time until recently, when MacKay and Neal showed by computer simulations that, for long code lengths, their performance is comparable to the remarkable performance of the `turbo' codes [25]. Meanwhile the LDPC codes and their decoders have been attracting more and more interest. The concept of LDPC codes has been extended to convolutional codes [26] and to the use of more powerful compo...

231 | Iterative correction of intersymbol interference: turbo-equalization
- Douillard, Berrou, et al.
- 1995
Citation Context: ...hl algorithm or approximations of these algorithms [23]. A step further is the so-called `turbo' equalization, which involves feedback and iterations between the outer decoder and the inner equalizer [40], [41]. 7.1 SYSTEM DESCRIPTION We introduce a new method for joint equalization and decoding for frequency-selective fading channels using a highly parallel, analog, feedback network. The analog netwo...

170 | The capacity of low-density parity check codes under message-passing decoding
- Richardson, Urbanke
- 2001
Citation Context: ...bserved that the `turbo' codes are a special case of these generalized LDPC codes. Furthermore, a theoretical analysis of LDPC codes transmitted over a binary-input memoryless channel can be found in [28]. Based on this analysis, a very powerful new class of LDPC codes with irregular factor graphs was designed in [29]. An LDPC code is called ...

141 | A new multilevel coding method using error correcting codes
- Imai, Hirakawa
- 1977
Citation Context: ...al to `initialize' the channel), interleaver of size 200. 8 MULTI-STAGE DECODING BY ANALOG FEEDBACK NETWORKS OF MULTI-LEVEL CODED 8-PSK Multi-level coded modulation as introduced by Imai and Hirakawa [43] can be decoded by multi-stage decoders. In this iterative process it is advantageous to use soft values and feedback between the different stages. We will show by example that this decoding function ...

126 | The turbo principle: Tutorial introduction and state of the art
- Hagenauer
- 1997
Citation Context: ...h improve signal-to-noise ratio [9]. This view caused Berrou et al. [1] to introduce the famous `turbo' decoding scheme. A description of the state-of-the-art `turbo' decoding schemes can be found in [10]. Lucas [11] introduced an iterative soft-decision decoding algorithm for linear binary block codes. Already 37 years ago, in 1962, Gallager introduced a very efficient iterative decoding algorithm fo...

116 | Iterative decoding of compound codes by probability propagation in graphical models
- Kschischang, Frey
- 1998
Citation Context: ...erg, Loeliger, Koetter [14] and Forney [15]. Others have used a different graphical model, namely Bayesian networks, to describe the iterative decoding algorithm as a belief propagation algorithm [16], [17]. In [18] a new graphical model called `factor graphs' is presented, which subsumes Tanner graphs and Bayesian networks. Still, all these descriptions assume an algorithm with discrete timing, i...

103 | Codes and iterative decoding on general graphs
- Wiberg, Loeliger, et al.
- 1995
Citation Context: ...s [12], see section 5.1. Tanner graphs [13] which are connected via an interleaver have been used to explain the parallel and serial concatenated `turbo' decoding process by Wiberg, Loeliger, Koetter [14] and Forney [15]. Others have used a different graphical model, namely Bayesian networks, to describe the iterative decoding algorithm as a belief propagation algorithm [16], [17]. In [18] a new g...

102 | Fundamentals of Convolutional Coding
- Johannesson, Zigangirov
- 1999
Citation Context: ...he m = 2, r = 1/2 convolutional code shown in figure 3 and figure 7, respectively. Additional information for the initialization can be obtained from the `quick-look-in' (QLI) properties of the codes [35]. With the analog decoder network for the m = 2, r = 1/2 convolutional code Cm2 based on the generator matrix (see figure 7), we used the initialization L(û_i)_init = a · L_c y_{i+1}^(1) · L_c y^(2)...

63 | A precise four-quadrant multiplier with subnanosecond response
- Gilbert
- 1968
Citation Context: ...o implement the sum-product algorithm in analog VLSI [19], [20]. For cycle-free factor graphs the sum-product algorithm is equivalent to the symbol-by-symbol MAP decoder. They use a Gilbert multiplier [21] and modified versions of it to implement the probability multiplications of the sum-product algorithm. Inputs and outputs of the transistor circuit are hereby currents representing probabilities. For...

38 | Iterative equalization and decoding in mobile communications systems
- Bauch, Khorram, et al.
- 1997
Citation Context: ...orithm or approximations of these algorithms [23]. A step further is the so-called `turbo' equalization, which involves feedback and iterations between the outer decoder and the inner equalizer [40], [41]. 7.1 SYSTEM DESCRIPTION We introduce a new method for joint equalization and decoding for frequency-selective fading channels using a highly parallel, analog, feedback network. The analog network is ...

28 | Design of provably good low-density parity check codes
- Richardson, Shokrollahi, et al.
- 1999
Citation Context: ...is of LDPC codes transmitted over a binary-input memoryless channel can be found in [28]. Based on this analysis, a very powerful new class of LDPC codes with irregular factor graphs was designed in [29]. An LDPC code is called ... [Figure 4: Factor graph of a (n=20, ...)]

27 | Digital Sound Broadcasting to Mobile Receivers
- LeFloch, Halbert-Lassalle, et al.
- 1989
Citation Context: ...as a prefix. The prefix data is transmitted first and the corresponding received channel values y_i are cut off. In OFDM transmission theory the block of length N_1 + N_2 is called the guard interval [42]. Hence, we have the equivalent of a convolutional code in tail-biting form, with the difference that the encoding is performed by the frequency-selective channel itself. Hence, the analog equalizer fo...

22 | On iterative soft-decision decoding of linear binary block codes and product codes
- Lucas, Bossert, et al.
- 1998
Citation Context: ...gnal-to-noise ratio [9]. This view caused Berrou et al. [1] to introduce the famous `turbo' decoding scheme. A description of the state-of-the-art `turbo' decoding schemes can be found in [10]. Lucas [11] introduced an iterative soft-decision decoding algorithm for linear binary block codes. Already 37 years ago, in 1962, Gallager introduced a very efficient iterative decoding algorithm for his low-de...

20 | Time-varying periodic convolutional codes with low-density parity-check matrix
- Felstrom, Zigangirov
- 1999
Citation Context: ...able performance of the `turbo' codes [25]. Meanwhile the LDPC codes and their decoders have been attracting more and more interest. The concept of LDPC codes has been extended to convolutional codes [26] and to the use of more powerful component codes, instead of the single parity-check code [13], [27]. It can be observed that the `turbo' codes are a special case of these generalized LDPC codes. Furt...

18 | Iterative decoding of generalized low-density parity-check codes
- Lentmaier, Zigangirov
- 1998
Citation Context: ...tracting more and more interest. The concept of LDPC codes has been extended to convolutional codes [26] and to the use of more powerful component codes, instead of the single parity-check code [13], [27]. It can be observed that the `turbo' codes are a special case of these generalized LDPC codes. Furthermore, a theoretical analysis of LDPC codes transmitted over a binary-input memoryless channel can...

17 | A New Wideband Amplifier Technique
- Gilbert
- 1968
Citation Context: ...transistor circuit are hereby currents representing probabilities. For the necessary probability normalization they follow the ideas in [22]. The outline of the paper is as follows. We will first describe log-likelihood values of bits on several channels. With factor graphs [18] we have a very useful and simple notation to describe severa...

14 | BiCMOS circuits for analog Viterbi decoders
- Shakiba, Johns, et al.
- 1998
Citation Context: ...?) to the analog world. 2 PREVIOUS AND RELATED WORK In the literature we find several other approaches to develop an analog network for decoding. A good literature survey can be found in [3], [4] and [5]. The first group to mention are those working with artificial neural networks. A neural network consists of the connection of a set of simple processing units. Each unit or neuron performs simple alg...

14 | Turbo decoding as an instance of Pearl's "Belief Propagation" algorithm
- McEliece, MacKay, et al.
- 1998
Citation Context: ...oeliger, Koetter [14] and Forney [15]. Others have used a different graphical model, namely Bayesian networks, to describe the iterative decoding algorithm as a belief propagation algorithm [16], [17]. In [18] a new graphical model called `factor graphs' is presented, which subsumes Tanner graphs and Bayesian networks. Still, all these descriptions assume an algorithm with discrete timing, iterati...

13 | Digital Coding of Waveforms: Principles and Applications to Speech and Video
- Jayant, Noll
- 1984
Citation Context: ...ource decoding can be combined on one analog VLSI chip. 9.1 PULSE CODE MODULATION---THE WAY FROM ANALOG VALUES TO BITS A simple and well-understood digitizing technique is Pulse Code Modulation (PCM) [45]. It is a very versatile coding system which is not limited to speech signals. Almost all speech encoders involve some kind of PCM in their encoders. In order to get a digital representation of a...
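The PCM step described here — sample the analog value, quantize, emit a bit pattern — can be sketched as uniform quantization (a toy illustration, not from [45]; real speech PCM typically adds A-law or μ-law companding on top):

```python
def pcm_encode(sample, n_bits=8, full_scale=1.0):
    """Uniform PCM: clip the analog sample to [-full_scale, full_scale) and
    map it to an n_bits signed integer code. Minimal sketch only."""
    levels = 1 << n_bits
    step = 2.0 * full_scale / levels
    idx = int(sample // step)  # floor to the quantization interval index
    # clip to the representable code range
    return max(-(levels // 2), min(levels // 2 - 1, idx))

def pcm_decode(code, n_bits=8, full_scale=1.0):
    """Reconstruct the sample at the mid-point of its quantization interval,
    so the round-trip error is at most half a step."""
    step = 2.0 * full_scale / (1 << n_bits)
    return (code + 0.5) * step
```

The paper's point is that the decoder can keep `soft' versions of these code bits instead of hard decisions, which is what the analog source-decoding network exploits.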

9 | On iterative decoding and the two-way algorithm
- Forney
- 1997
Citation Context: ...ion 5.1. Tanner graphs [13] which are connected via an interleaver have been used to explain the parallel and serial concatenated `turbo' decoding process by Wiberg, Loeliger, Koetter [14] and Forney [15]. Others have used a different graphical model, namely Bayesian networks, to describe the iterative decoding algorithm as a belief propagation algorithm [16], [17]. In [18] a new graphical model c...

7 | Iterative sum-product decoding with analog VLSI
- Loeliger, Helfenstein, et al.
- 1998
Citation Context: ...ing strategy, while implementing the decoding techniques in analog hardware. Parallel to our work, Loeliger et al. have started a similar approach to implement the sum-product algorithm in analog VLSI [19], [20]. For cycle-free factor graphs the sum-product algorithm is equivalent to the symbol-by-symbol MAP decoder. They use a Gilbert multiplier [21] and modified versions of it to implement the probabi...

4 | A nonalgorithmic maximum likelihood decoder for trellis codes
- Davis, Loeliger
- 1993
Citation Context: ...il now, no good codes, or even an indication that there are good codes in this class, have been found. Another non-algorithmic approach to channel decoding was published in 1993 by Davis and Loeliger [8]. Their `diode decoder' is based on the trellis representation of the code and consists of diodes and switches. Using the received symbols to determine the number of diodes in the corresponding trelli...

4 | Properties and error performance of the tailbiting BCJR decoder
- Anderson, Tepe
- 1998
Citation Context: ...ed for tail-biting codes as for terminated codes. To obtain the full error-correction capabilities of the code, the circle size, i.e., the block length of the code, has to fulfill certain constraints [31]. For the m = 2, r = 1/2 convolutional code, a circle size of at least 11 information bits is necessary to exploit the full error-correction capability of the code on an AWGN channel. Since we wanted ...

3 | Separable MAP "filters" for the decoding of product and concatenated codes
- Lodge, Young, et al.
- 1993
Citation Context: ...values are better than binary values, a fact already suggested by information theory. This is a first step back to analog. The big success of iterative so-called `turbo' decoding pioneered by [1] and [2] is due to the exchange of soft information between constituent decoders. `Turbo' decoding approaches Shannon's limit very closely: for a code rate of 1/2 the gap narrows to 0.5 dB. Still the processing ...

3 | Approaches to Neural-Network Decoding of Error-Correcting Codes
- Wiberg
- 1994
Citation Context: ...d (or forward?) to the analog world. 2 PREVIOUS AND RELATED WORK In the literature we find several other approaches to develop an analog network for decoding. A good literature survey can be found in [3], [4] and [5]. The first group to mention are those working with artificial neural networks. A neural network consists of the connection of a set of simple processing units. Each unit or neuron perfor...

3 | An artificial neural net Viterbi decoder
- Wang, Wicker
- 1996
Citation Context: ...forward?) to the analog world. 2 PREVIOUS AND RELATED WORK In the literature we find several other approaches to develop an analog network for decoding. A good literature survey can be found in [3], [4] and [5]. The first group to mention are those working with artificial neural networks. A neural network consists of the connection of a set of simple processing units. Each unit or neuron performs si...

3 | Codes and decoding on general graphs
- Wiberg
- 1996
Citation Context: ...[Figure 3: Factor graph of the (7,4) Hamming code based on the parity-check matrix (Tanner graph).] ...many variables factors into a product of `local' functions. This description is based on [14] and [24], where generalized Tanner graphs with hidden variable nodes are introduced. The graph subsumes many other graphical models like Markov random fields, Bayesian networks or belief networks, and graphs ...

2 | An efficient neural decoder for convolutional codes
- Marcone, Zincolini, et al.
- 1995
Citation Context: ...network. However, due to the problem of large training sets, good results were only reported for small codes (e.g., the (7,4) Hamming code or convolutional codes with memory less than or equal to two [6]), whereas more efficient neural decoders are based on fixed-weight and training-free networks [4]. Here the neural network design is based on known digital decoding algorithms, which fully exploits t...

2 | The Internet and Economic Growth in Least Developed Countries: A Case of Managing Expectations
- Hagenauer
- 1992
Citation Context: ...called `extrinsic' information while working alternately on the component or sub-codes. The `soft-in/soft-out' decoders can be viewed as decimating digital filters which improve signal-to-noise ratio [9]. This view caused Berrou et al. [1] to introduce the famous `turbo' decoding scheme. A description of the state-of-the-art `turbo' decoding schemes can be found in [10]. Lucas [11] introduced an iter...

2 | Optimal and near-optimal for short and moderate-length tailbiting trellises
- Stahl, Anderson, et al.
- 1999
Citation Context: ...ding trellis section for the m=2, r=1/2 convolutional code with generator polynomials (111, 101). We will focus our attention on tail-biting convolutional codes for the following reasons: As shown in [30], tail-biting convolutional codes achieve the minimum distance of most of the best known block codes for short and medium block sizes. Furthermore, we do not need any termination bits, avoiding a higher...

1 | Probability propagation in analog VLSI. Unpublished manuscript. Available at http://www.endora.ch/papers.html
- Loeliger, Lustenberger, et al.
Citation Context: ...rategy, while implementing the decoding techniques in analog hardware. Parallel to our work, Loeliger et al. have started a similar approach to implement the sum-product algorithm in analog VLSI [19], [20]. For cycle-free factor graphs the sum-product algorithm is equivalent to the symbol-by-symbol MAP decoder. They use a Gilbert multiplier [21] and modified versions of it to implement the probability m...

1 | Analog decoders and their implementation in VLSI
- Mörz
- 1999
Citation Context: ...circuit for the `box-plus'. This similarity is a great advantage for the layout of the VLSI chip. Typically the `box-plus' circuit and the variable sum circuit are each realized with nine transistors [33]. Details of our circuit design will be given in a future publication. 5.4 SIMPLE EXAMPLES FOR ANALOG DECODERS BASED ON FACTOR GRAPHS W...

1 | Analoge Decodierung von Block- und Faltungscodes (Analog Decoding of Block and Convolutional Codes)
- Winkelhofer
- 1998
Citation Context: ...for the (7,4) Hamming code, the corresponding (7,3) dual code, the m = 1 and the m = 2, r = 1/2 convolutional codes are efficient, even if they are based on binary factor graphs with very short cycles [36], [37]. It can be observed that the most efficient LDPC codes or `turbo' coding schemes are also characterized by sparsely connected variable nodes [25]. Since the limitation to factor graphs with onl...

1 | Analog decoding on graphs with cycles
- Méasson
- 1999
Citation Context: ...e (7,4) Hamming code, the corresponding (7,3) dual code, the m = 1 and the m = 2, r = 1/2 convolutional codes are efficient, even if they are based on binary factor graphs with very short cycles [36], [37]. It can be observed that the most efficient LDPC codes or `turbo' coding schemes are also characterized by sparsely connected variable nodes [25]. Since the limitation to factor graphs with only bina...

1 | Decodierung mit Qualitätsinformation bei verketteten Codiersystemen (Decoding with Reliability Information for Concatenated Coding Systems)
- Offer
- 1995
Citation Context: ...a trellis for a binary symmetric channel where u_{l,i} is an input variable, u_{T,i} a fictive transition variable, and u_{r,i} is an output variable, as shown in figure 20. Following the ideas in [23] and [38], it can be shown that the log-likelihood ratio for the transition at butterfly k and time i can be expressed as L^(k)(u_{T,i}) = Σ_{ν=1}^{N} L_c y_i^(ν) x_{w_i}^(ν). (39) In the decoder network this va...

1 | Vom Analogwert zum Bit und zurück (From the Analog Value to the Bit and Back)
- Hagenauer
- 1997
Citation Context: ...the satisfactory behavior of this simple analog decoder network. 9 ANALOG NETWORKS FOR SOURCE DECODING Analog networks for equalization and channel decoding can be further extended to source decoding [44]. Since our analog decoder produces soft outputs, we can get a slight improvement in performance using the analog output values or the `soft' bits of the channel decoder instead ...