### Table 3. Relative computational cost for the training and the Viterbi decoding processes.

"... In PAGE 4: ....4. Computation One of the main goals of using this approach has been to re- duce the required computation needed by HSMM duration modeling techniques. Table3 summarizes both training and decoding computational costs for the standard HMM, HSMM, and ES-HMMs with two to four-state expansions. It is seen that, though slower than the standard HMM, the training for any of the ES-HMMs is more e cient than the HSMM.... ..."

### Table 4: Look-up table for the outputs of the proposed Viterbi decoder.

2005

"... In PAGE 21: ... enablepmu: enables path metric calculation. The next state table, shown in Table4 , of the trellis for the (2,1,7) convolu- tional encoder is modeled as of type array, and the BMU receives the table as a parameter (metrics). The first column of the array contains the metric value if the assumed input bit is prime0prime, and the second one the value of the metric if the as- sumed input bit is prime1prime.... ..."

### Table 1: COVQ performance (SNR in dB) over channel 1. Results for rate-2, rate-1, and rate-0.5, 4-dimensional VQ for Gauss-Markov and i.i.d. Gaussian sources, with soft and Viterbi decoding. For the soft decoders, = 1. The Viterbi decoders all use V = 40.

"... In PAGE 5: ...ver Viterbi is about 1.9 dB in CSNR. Also, as expected, the random IA plus Viterbi decoding performs very poorly, illustrating the importance of a good IA. Table1 shows the performance of di erent COVQ schemes, employing optimal and Viterbi decoding for Gauss-Markov (a = 0:9) and iid Gaussian sources. The performance was measured at the same CSNRs as for which the COVQs were trained (perfect match).... ..."

### Table 6.12: 3-gram Language Model; Viterbi Decoding Results

2000

Cited by 19

### TABLE I Various measures of decoding time and accuracy for the ICP and Viterbi algorithms for four image models.

1996

Cited by 9


### Table 4: Results of the system with model selection using MDL (with and without Viterbi decoding)

2007

### Table 2. Viterbi algorithm simulation data runs.

1970

"... In PAGE 53: ...pecified in section 3. 2. 1, we defined an error burst to be any segment of the decoded source sequence with the following properties: The sequence begins and ends with decoding errors; it contains no error-free subsequences of v or more consecutive sym- bols; and it is immediately preceded and followed by error-free intervals of at least v consecutive symbols. Table2 lists the data runs that were obtained from this simu- lation program. For simplicity, it was assumed in the computer program that the all-zero source sequence was supplied to the encoder, and that the all-zero source sequence produced the all-zero channel sequence.... In PAGE 53: ... From the description of the manner in which the computer simulated the v+l Hamming distances between the received branch and each of the q possible encoder v+l branches, it is evident that the generator was used q + b - 1 times per source branch. By elementary calculation, one can observe that the random-number generator pro- gressed through many periods for each of the data runs listed in Table2 . Nevertheless, v+l the set of q distances simulated for a source branch was different, if not statistically independent, from all other such sets, provided that the random-number generator was 47 _ ____ _^1_1__ ... In PAGE 54: ...Thus the simulation program would have produced a periodic sequence of N(v, b) distinct sets of distances, where N(v, b) is equal to 109610 divided by the greatest common divisor of v+l 2 + b - 1 and 109610. None of the data runs listed in Table2 had a sample size exceeding N(v, b). While each set of distances generated by the program was dif- ferent, so that it is reasonable to expect that the data that was simulated provided a good approximation to actual decoder behavior, it is possible that the repeating sequence produced by the random-number generator accounted for some of the anomalies that will be described later.... In PAGE 57: ... . 5. 
4.2 ANALYSIS OF DECODING ERRORS Table 4 lists additional properties not given in Table2 which further characterize the error patterns of data runs 10-36. In Table 4, the symbols v, R, p, and R/C are as defined previously, p apos; is the average decoding error probability, PB is the coding theorem error probability bound given by (7), N and L are the average number of errors in a burst and the average burst error length, respectively, and NB is the total number of error bursts occurring in the data run.... ..."
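The burst definition quoted above translates directly into a small scanner. This is an illustrative sketch, not the simulation program's actual code: `errors` is a 0/1 sequence (1 = decoding error) and `v` is the guard length from the excerpt.

```python
def find_bursts(errors, v):
    """Return (start, end) index pairs of error bursts: maximal segments that
    begin and end with a decoding error, contain no error-free run of v or
    more symbols, and are bounded by error-free runs of at least v symbols."""
    bursts = []
    i, n = 0, len(errors)
    while i < n:
        if errors[i] == 0:
            i += 1
            continue
        start = end = i   # `end` tracks the last error seen so far
        j, gap = i + 1, 0  # `gap` is the current error-free run length
        while j < n and gap < v:
            if errors[j]:
                end, gap = j, 0
            else:
                gap += 1
            j += 1
        # burst [start, end] is followed by >= v error-free symbols
        # (or by the end of the sequence), so it is maximal
        bursts.append((start, end))
        i = end + 1
    return bursts

find_bursts([1, 0, 1, 0, 0, 1], 2)  # → [(0, 2), (5, 5)]
```

With v = 2, the single error-free symbol inside the first segment does not break the burst, while the two-symbol error-free run after it does, matching the definition's "no error-free subsequences of v or more consecutive symbols."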

### Table 1: Performance of the three Viterbi decoders

1999

"... In PAGE 5: ... The static power dissipation of cells was not considered due to the limitation of the library cells used in our experiments. Experimental results for three Viterbi decoders are shown in Table1 . Among the three designs, the register-exchange approach has the largest area and the proposed low-power design the least area.... ..."

Cited by 1

### Table 3: Viterbi Decoder Performance (ms)

"... In PAGE 4: ... The positive effect of DSP utilization can be already observed at the break-even point, since the host CPU can execute another tasks during the DSP operation. Table3 summarizes the run times for the Viterbi decoding algorithm on DSPs and AMD K6-3. The download time is omitted in this case, since the number of runs expected to be high.... ..."