### Table 1: COVQ performance (SNR in dB) over channel 1. Results for rate 2, 1, and 0.5, 4-dimensional VQ for Gauss-Markov and iid Gaussian sources, for soft and Viterbi decoding. For the soft decoders = 1. The Viterbi decoders all use V = 40.

"... In PAGE 5: ...ver Viterbi is about 1.9 dB in CSNR. Also, as expected, the random IA plus Viterbi decoding performs very poorly, illustrating the importance of a good IA. Table 1 shows the performance of different COVQ schemes, employing optimal and Viterbi decoding for Gauss-Markov (a = 0.9) and iid Gaussian sources. The performance was measured at the same CSNRs as for which the COVQs were trained (perfect match).... ..."

### Table 1: COVQ performance (SNR in dB) over channel 1. All systems were evaluated at the same CSNRs as for which they were designed. Results are shown for rate 2, 1 and 0.5, 4-dimensional VQ for Gauss-Markov and iid Gaussian sources. The listed results are for the M1, M2, and Viterbi decoders. For all M1 and M2 decoders = 1. The Viterbi decoders all use V = 40.

"... In PAGE 18: ... In the simulations we do not claim to have chosen parameters "optimally"; most of our choices have been ad hoc in nature. In the simulations presented in Figures 6-8 and in Table 1, we have used a delay equal to the channel memory length ( = M). In Figure 9, however, we also investigate the impact of different delays.... In PAGE 22: ... Table 1 shows the performance of different COVQ schemes. The performance was measured at the same CSNRs as for which the COVQs were trained (perfect match).... ..."

### Table 1: The optimized switching levels f_n of the joint adaptive modulation and DFE for speech and data transmission over the TU Rayleigh fading channel.

2000

"... In PAGE 7: ... The switching levels used in these experiments are listed in Table 1. The mean BER and BPS performances were then numerically calculated utilizing Equations 11 and 12 and the switching levels listed in Table 1 for speech and data transmission. The results are shown in Figure 7 for the COST 207 TU Rayleigh fading channel of Figure 2.... ..."

Cited by 3

### Table 2. Viterbi algorithm simulation data runs.

1970

"... In PAGE 53: ...pecified in section 3.2.1, we defined an error burst to be any segment of the decoded source sequence with the following properties: The sequence begins and ends with decoding errors; it contains no error-free subsequences of v or more consecutive symbols; and it is immediately preceded and followed by error-free intervals of at least v consecutive symbols. Table 2 lists the data runs that were obtained from this simulation program. For simplicity, it was assumed in the computer program that the all-zero source sequence was supplied to the encoder, and that the all-zero source sequence produced the all-zero channel sequence.... In PAGE 53: ... From the description of the manner in which the computer simulated the q^(v+1) Hamming distances between the received branch and each of the q^(v+1) possible encoder branches, it is evident that the generator was used q^(v+1) + b - 1 times per source branch. By elementary calculation, one can observe that the random-number generator progressed through many periods for each of the data runs listed in Table 2. Nevertheless, the set of q^(v+1) distances simulated for a source branch was different, if not statistically independent, from all other such sets, provided that the random-number generator was ... In PAGE 54: ... Thus the simulation program would have produced a periodic sequence of N(v, b) distinct sets of distances, where N(v, b) is equal to 109610 divided by the greatest common divisor of 2^(v+1) + b - 1 and 109610. None of the data runs listed in Table 2 had a sample size exceeding N(v, b). While each set of distances generated by the program was different, so that it is reasonable to expect that the simulated data provided a good approximation to actual decoder behavior, it is possible that the repeating sequence produced by the random-number generator accounted for some of the anomalies that will be described later.... In PAGE 57: ... 4.2 ANALYSIS OF DECODING ERRORS. Table 4 lists additional properties not given in Table 2 which further characterize the error patterns of data runs 10-36. In Table 4, the symbols v, R, p, and R/C are as defined previously, p' is the average decoding error probability, PB is the coding theorem error probability bound given by (7), N and L are the average number of errors in a burst and the average burst error length, respectively, and NB is the total number of error bursts occurring in the data run.... ..."
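The burst definition and the N(v, b) period formula quoted in this excerpt are concrete enough to sketch in code. The following is a minimal Python sketch, not from the cited paper: the function names and the 0/1 error-flag representation are our own illustrative choices, and the sequence boundaries are assumed to satisfy the error-free guard condition. `find_error_bursts` segments a decoded error pattern into bursts per the quoted definition, and `distinct_distance_sets` evaluates N(v, b) = 109610 / gcd(2^(v+1) + b - 1, 109610).

```python
from math import gcd

def find_error_bursts(errors, v):
    """Segment a 0/1 decoding-error sequence into error bursts.

    Per the quoted definition: a burst begins and ends with errors,
    contains no error-free run of v or more consecutive symbols, and
    is bounded by error-free intervals of at least v symbols (the
    sequence ends are assumed to satisfy the guard condition).
    Returns a list of (first_error_index, last_error_index) pairs.
    """
    bursts = []
    start = last_err = None
    for i, e in enumerate(errors):
        if e:
            if start is None:
                start = i
            elif i - last_err - 1 >= v:
                # an error-free gap of >= v symbols closes the open
                # burst and starts a new one
                bursts.append((start, last_err))
                start = i
            last_err = i
    if start is not None:
        bursts.append((start, last_err))
    return bursts

def distinct_distance_sets(v, b, period=109610):
    """N(v, b): number of distinct distance sets produced before the
    random-number generator repeats, per the quoted formula
    N(v, b) = period / gcd(2**(v + 1) + b - 1, period)."""
    return period // gcd(2 ** (v + 1) + b - 1, period)
```

For example, with v = 3 the pattern 0 1 1 0 1 0 0 0 1 0 contains two bursts, spanning indices 1-4 and 8-8: the single error-free symbol inside the first burst is shorter than v, while the three-symbol gap that follows is long enough to separate the bursts.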

### Table 3. Viterbi Decoder results with several BER and Throughput Requirements (*A=Adaptive, F=Fixed)

2001

"... In PAGE 5: ... In this case, on average, using 4 high-resolution paths (M=4) results in a 64% improvement in BER while using 8 high-resolution paths (M=8) results in an 82% improvement over pure hard-decision decoding. Table 3 lists the results of several Metacore search outcomes using different parameter specifications. In each case, the BER and throughput were specified.... ..."

Cited by 6


### Table 1: Performance of the three Viterbi decoders

1999

"... In PAGE 5: ... The static power dissipation of cells was not considered due to the limitations of the library cells used in our experiments. Experimental results for three Viterbi decoders are shown in Table 1. Among the three designs, the register-exchange approach has the largest area and the proposed low-power design the least area.... ..."

Cited by 1

### Table 3: Viterbi Decoder Performance (ms)

"... In PAGE 4: ... The positive effect of DSP utilization can already be observed at the break-even point, since the host CPU can execute other tasks during the DSP operation. Table 3 summarizes the run times for the Viterbi decoding algorithm on DSPs and the AMD K6-3. The download time is omitted in this case, since the number of runs is expected to be high.... ..."

### Table 3. Joint Optimization of Latency Time and Ordering

2005

"... In PAGE 7: ... For all subsequent latency times, only a B × B matrix must be inverted. The algorithm for joint optimization can be seen in Table 3; its complexity is O(B^3 (Q + L)^3). Special attention, however, must be paid to the problem of numerical error propagation.... In PAGE 7: ... 6. FIXED LATENCY TIME Simulations have shown that for most channel models the algorithm in Table 3 returns W_F = L in the overwhelming majority of channel realizations. We therefore investigated the effect of using a fixed latency time through extensive simulations.... ..."

Cited by 2