### Table 1. Collusion and Classification Based Steganalysis.

"... In PAGE 9: ... We use cross-validation [13, 15] to determine the video set that would yield the lowest probability of false positives and false negatives. Table 1 summarizes the overall steganalysis method that incorporates linear collusion and classification.... ..."
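The cross-validated selection described in this excerpt (pick the candidate with the lowest combined false-positive/false-negative rate across folds) can be sketched generically. This is only an illustration; the function and argument names below are hypothetical, not from the paper:

```python
import math

def select_by_cv(candidates, folds, error_fn):
    """Return the candidate with the lowest average error across folds.

    error_fn(candidate, fold) is a hypothetical callback that returns the
    combined false-positive + false-negative rate on one held-out fold.
    """
    best, best_err = None, math.inf
    for cand in candidates:
        avg_err = sum(error_fn(cand, fold) for fold in folds) / len(folds)
        if avg_err < best_err:
            best, best_err = cand, avg_err
    return best, best_err

# Toy usage: candidate 2 has zero error on every fold, so it is selected.
best, err = select_by_cv([1, 2, 3], folds=[0, 1, 2],
                         error_fn=lambda c, f: abs(c - 2))
```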

### Table 1: Markov Model Entropy Estimates

1998

Cited by 6


### Table 1: The coverage and purity of rules generated by simple compression, tree building, and tree building with entropy-based rule selection.

### Table 2: Wavelet coefficient entropy and compression rates for SPIHT-based and context-based coding. All values are indicated in bits/sample.

"... In PAGE 4: ... The entropy of the coefficients in each subband for every decomposition level is evaluated according to the frequency of occurrence, and the overall entropy is formed by properly weighting the entropies of the subbands: $H = \frac{1}{2^{N_L}} H_{N_L,\mathrm{HP}} + \sum_{l=1}^{N_L} \frac{1}{2^l} H_{l,\mathrm{LP}}$ (8), where $H_{l,sb}$ denotes the entropy of the wavelet coefficients at decomposition level $l$, $sb$ corresponds to the low-pass (LP) or high-pass (HP) subband, and $N_L$ is the number of decomposition levels. The entropy is compared with real code lengths obtained by using context-based and then embedded coding ( Table2 ). The results are also compared with the FSML-PD algorithm [5].... ..."
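The weighting in Eq. (8) reflects each subband's share of the samples: the level-$l$ subband holds $1/2^l$ of them, and the deepest retained subband the remaining $1/2^{N_L}$. A minimal sketch of that computation; the function names are hypothetical, not from the paper:

```python
import math

def subband_entropy(coeffs):
    """Empirical entropy (bits/sample) of a list of quantized coefficients."""
    counts = {}
    for c in coeffs:
        counts[c] = counts.get(c, 0) + 1
    n = len(coeffs)
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

def overall_entropy(per_level_entropies, deepest_entropy):
    """Weight per-level subband entropies by their share of the samples:
    level l contributes 1/2**l; the deepest retained subband 1/2**n_levels."""
    n_levels = len(per_level_entropies)
    h = deepest_entropy / 2 ** n_levels
    for l, h_l in enumerate(per_level_entropies, start=1):
        h += h_l / 2 ** l
    return h
```

For a two-level decomposition with per-level entropies 2.0 and 1.0 bits/sample and a deepest-band entropy of 0.5, this gives 2.0/2 + 1.0/4 + 0.5/4 = 1.375 bits/sample.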


### Table 3: Markov sources: (a) model probabilities and (b) estimated entropies.

1990

"... In PAGE 3: ... Entropy. In order to assess the relative importance of the word and subword units, the entropies of the corresponding Markov sources were calculated. The probabilities used for each source are shown in Table 3a, where wi, si, vi, and ai are respectively a word, syllable, vowel, and phone, and ck is a string of consonants. A memoryless source was used to model the phone, word, and syllable sources.... In PAGE 3: ... Table 3: Markov sources: (a) model probabilities and (b) estimated entropies. Table 3b summarizes the results of the models in bits/phone. The lowest entropies are found for the word and triphone sources, indicating that their models store the most information.... ..."

Cited by 19
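Both of these excerpts report memoryless-source entropies normalized to bits/phone so that word, syllable, and phone models can be compared on one scale. A minimal sketch of that computation; the probabilities and function names here are illustrative, not the paper's:

```python
import math

def memoryless_entropy(probs):
    """Entropy in bits/symbol of a memoryless source with given probabilities."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(p * math.log2(p) for p in probs if p > 0)

def bits_per_phone(unit_probs, mean_phones_per_unit):
    """Normalize a unit-level (e.g. word or syllable) entropy to bits/phone
    by dividing by the average number of phones per unit."""
    return memoryless_entropy(unit_probs) / mean_phones_per_unit
```

For example, a source with probabilities (0.5, 0.25, 0.25) has 1.5 bits/symbol; if each unit averages 3 phones, that is 0.5 bits/phone.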

### Table 4: Markov sources: (a) model probabilities and (b) estimated entropies.

1993

"... In PAGE 8: ... Entropy. In order to assess the relative importance of the word and subword units, the entropies of the corresponding Markov sources were calculated. The probabilities used for each source are shown in Table 4a, where wi, si, vi, and ai are respectively a word, syllable, vowel, and phone, and ck is a string of consonants. A memoryless source was used to model the phone, word, and syllable sources.... In PAGE 8: ... Table 4: Markov sources: (a) model probabilities and (b) estimated entropies. Table 4b summarizes the results of the models in bits/phone. The lowest entropies are found for the word and triphone sources, indicating that their models store the most information.... ..."

Cited by 6

### Table 7: Average rates over sequences of motion-based segmentations encoded using the three lossless methods. Starting points of chains are not accounted for. The entropy is an estimate of the entropy rate under the assumption that the process is stationary and first-order Markov.
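The entropy-rate estimate named in this caption (stationary, first-order Markov) can be computed from empirical bigram statistics as $H = -\sum_s \pi(s) \sum_t P(t\mid s)\log_2 P(t\mid s)$. A generic sketch, not the paper's code:

```python
import math
from collections import Counter

def markov_entropy_rate(seq):
    """Estimate the entropy rate (bits/symbol) of a symbol sequence under the
    assumption that it is stationary and first-order Markov, using empirical
    unigram frequencies for pi(s) and bigram frequencies for P(t | s)."""
    bigrams = Counter(zip(seq, seq[1:]))
    unigrams = Counter(seq[:-1])
    n = len(seq) - 1
    h = 0.0
    for (s, t), count in bigrams.items():
        p_joint = count / n              # empirical P(s, t) = pi(s) * P(t | s)
        p_cond = count / unigrams[s]     # empirical P(t | s)
        h -= p_joint * math.log2(p_cond)
    return h
```

A fully deterministic chain such as "abababab" gives 0 bits/symbol, since every transition is certain; any genuine randomness in the transitions gives a positive rate.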

### Table 5 presents the entropy in terms of bpp (bits per pixel). It shows the entropy of the original 3-D image set (the entropy of V(x, y, z)), the entropy of the differences d(x, y, z), the entropy of the predicted integer wavelet transformed image set, and the average compression, expressed as the average number of bits per pixel of the compressed 3-D image set, obtained by applying compress, ...

"... In PAGE 11: ... Table 5: Comparison of entropy. 7. CONCLUSION The dependencies (set redundancy) existing between the pixel intensities in three dimensions were exploited, based on the histograms, wavelet decomposition coefficient plots, feature vectors, entropy, and correlation.... ..."
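The bpp comparison described above (original voxel values versus inter-slice differences) reduces to zeroth-order entropy computations over flat pixel lists. A minimal sketch with hypothetical names; smooth volumes concentrate the difference histogram near zero, which is why d(x, y, z) typically has lower entropy than V(x, y, z):

```python
import math
from collections import Counter

def entropy_bpp(values):
    """Empirical zeroth-order entropy, in bits per pixel, of a flat pixel list."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def difference_entropy(slice_a, slice_b):
    """Entropy of the pixel-wise difference between two adjacent slices,
    d(x, y, z) = V(x, y, z) - V(x, y, z-1)."""
    diffs = [b - a for a, b in zip(slice_a, slice_b)]
    return entropy_bpp(diffs)
```

For two slices that differ by a constant offset, the differences collapse to a single symbol and the entropy is 0 bpp, regardless of the original slices' entropy.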