### Table 8: Semantic similarity distances of various words from the word astronaut, as measured by normalized skew divergence of WordNet-based unigram mixture models.

2002

"... In PAGE 9: ... The final score is a real number between zero (identical match) and an arbitrary upper bound of 500 (maximum dissimilarity). Table 8 shows scores for the word astronaut compared to various words. Note that words like orbit share similar co-occurrence distributions with astronaut but, correctly, do not get low translation distance scores.... ..."
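As a rough illustration of the measure named in the caption, here is a minimal sketch of a skew divergence between two unigram distributions. It assumes Lee-style skew divergence, s_α(q, r) = KL(r ‖ αq + (1−α)r); the WordNet-based mixture modelling and the 0-to-500 normalization described in the snippet are not reproduced, and the toy distributions below are invented for illustration:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for dicts mapping word -> prob."""
    return sum(p[w] * math.log(p[w] / q[w]) for w in p if p[w] > 0)

def skew_divergence(q, r, alpha=0.99):
    """Skew divergence s_alpha(q, r) = KL(r || alpha*q + (1-alpha)*r).
    Mixing a little of r back in keeps the second argument nonzero
    wherever r puts mass, so the KL sum stays finite."""
    words = set(q) | set(r)
    mix = {w: alpha * q.get(w, 0.0) + (1 - alpha) * r.get(w, 0.0) for w in words}
    rr = {w: r.get(w, 0.0) for w in words}
    return kl(rr, mix)

# toy unigram distributions (invented for illustration)
p_astronaut = {"space": 0.5, "orbit": 0.3, "suit": 0.2}
p_cosmonaut = {"space": 0.45, "orbit": 0.35, "suit": 0.2}
p_banana    = {"fruit": 0.7, "yellow": 0.3}

print(skew_divergence(p_astronaut, p_cosmonaut))  # small: similar distributions
print(skew_divergence(p_astronaut, p_banana))     # large: disjoint vocabulary
```

Identical distributions score zero, and the divergence grows as the co-occurrence profiles separate, matching the "zero = identical match" convention in the snippet.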

Cited by 16

### Table A.6: Pooled Regression Analysis - Analysis of Convergence or Divergence of Food

### Table A.7: Pooled Regression Analysis - Analysis of Convergence or Divergence in the

### Table 2. Predicted Retrotransposon Insertion and Divergence Times

"... In PAGE 6: ... The Opie retroelements (Opie-B, Opie-C, and Opie-D) that are part of the duplicated 43-kb segment were estimated to have inserted within the last 1.5 million years (Table 2). The Grande element in Zm238E11 seems to be the most recent insertion, because its long terminal repeats are still identical.... In PAGE 6: ... Comparison of the Opie elements within the 43-kb duplicated regions of Zm163K15 and Zm238E11 indicated that they began to diverge from each other within the last 0.2 million years (Table 2), consistent with the divergence time predicted by the one-nucleotide difference between rp1-2 and rp1-4. Based on these results, we conclude that the duplication of the 43-kb maize segment occurred within the last 0.... ..."

### Table 2: Further results for e^x using single precision, showing the divergence of the error bound for small x

1997

### Table 1: Some common univariate Bregman divergences D_F.


"... In PAGE 7: ... More importantly, the notion of Bregman divergence encapsulates various information measures based on entropic functions such as the Kullback-Leibler divergence based on the (unnormalized) Shannon entropy, or the Itakura-Saito divergence based on Burg entropy (commonly used in sound processing). Table 1 lists the main univariate Bregman divergences. 2.... In PAGE 10: ... Table 1. Both the squared function F(x) = x^2 and Burg entropy F(x) = log x are self-dual, i.... ..."
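The generator view described in this snippet is easy to sketch. Below is a minimal, generic illustration (not code from the cited paper) of the standard univariate form D_F(x, y) = F(x) − F(y) − F′(y)(x − y), instantiated for squared Euclidean distance, the KL-type divergence generated by Shannon entropy, and Itakura-Saito generated by the Burg generator F(x) = −log x:

```python
import math

def bregman(F, dF, x, y):
    """Univariate Bregman divergence D_F(x, y) = F(x) - F(y) - F'(y) * (x - y)."""
    return F(x) - F(y) - dF(y) * (x - y)

# F(x) = x^2 generates squared Euclidean distance: (x - y)^2
sq = bregman(lambda t: t * t, lambda t: 2 * t, 3.0, 1.0)  # (3 - 1)^2 = 4.0

# F(x) = x log x (unnormalized Shannon entropy) generates x log(x/y) - x + y
kl_like = bregman(lambda t: t * math.log(t), lambda t: math.log(t) + 1, 2.0, 1.0)

# F(x) = -log x (Burg entropy) generates Itakura-Saito: x/y - log(x/y) - 1
itakura_saito = bregman(lambda t: -math.log(t), lambda t: -1.0 / t, 2.0, 1.0)
```

Each result matches the closed form obtained by expanding the definition, which is how tables like the one in the caption are derived: pick a convex generator, differentiate, and simplify.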

### Table 1: Information inequality and absolute divergence of an approximated example belief network.

1997

"... In PAGE 17: ... Therefore, this graphically portrayed dependence can be rendered redundant and arc V8 ! V9 can be removed without introducing an error in the probability distribution since I(Pr; PrV86!V9) = 0 as shown in Figure 2. Table1 gives the upper bound provided by the information inequality and the absolute divergence of the approximated joint probability distributions after removal of various linear subsets of arcs A from the network apos;s digraph. The table is compressed by leaving out all linear sets containing arc V8 ! V9 (except for the set fV8 ! V9g) because the second and third column are both unchanged after leaving out this arc.... ..."

Cited by 16

### Table 1: A list of Bregman divergences and the corresponding convex functions.

"... In PAGE 4: ...(2) under all the Bregman divergences. Table 1 shows a list of Bregman divergences and their corresponding Bregman convex functions. Note that Bregman divergences are nonnegative.... In PAGE 7: ...uction. In [21], the model is based on Euclidean distance. Euclidean distance function has very wide applicability, since it implies the normal distribution and most data with a large sample size tend to have a normal distribution. However, since Bernoulli distribution is a more intuitive choice for the binary data, RSN-BD directly provides a new algorithm for clustering binary data with feature reduction by using logistic distance function (see Table 1), which corresponds to Bernoulli distribution. 5.... ..."
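For the Bernoulli case mentioned in this snippet, the usual Bregman generator is the negative Bernoulli entropy F(x) = x log x + (1−x) log(1−x), whose divergence works out to the KL divergence between Bernoulli(x) and Bernoulli(y). A minimal sketch (the helper name `logistic_div` is invented here; the paper's exact logistic distance function is given in its Table 1):

```python
import math

def logistic_div(x, y):
    """Bregman divergence of F(x) = x log x + (1 - x) log(1 - x) on (0, 1):
    D(x, y) = x log(x/y) + (1 - x) log((1 - x)/(1 - y)),
    i.e. KL(Bernoulli(x) || Bernoulli(y))."""
    return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))

print(logistic_div(0.5, 0.5))  # 0.0: identical Bernoulli parameters
print(logistic_div(0.9, 0.1))  # large: very different parameters
```

This is why a logistic-style distance is the natural choice for clustering binary data: it measures mismatch between Bernoulli parameters rather than raw Euclidean separation.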

### Table 3. DCF/EER results for migration based on probabilities versus the KL divergence

2004

"... In PAGE 3: ... The least overall loss with respect to the ideal baseline occurs with migrating the largest (2048) to any other smaller system. Table 3 compares the migration algorithm using observation probability to compute softcounts as in (2) with using the alternative symmetric KL divergence as in (7) on two selected migration cases. Although the KL-based migration outperforms the standard observation probability in the unnormalized case, the same does not hold when applying the T-Norm.... ..."
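As a generic illustration of the symmetrized measure the snippet refers to as equation (7), here is a minimal sketch of a symmetric KL divergence between discrete distributions; the paper applies it to GMM softcount migration, which is not reproduced here:

```python
import math

def kl(p, q):
    """KL(p || q) for two discrete distributions given as equal-length lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def sym_kl(p, q):
    """Symmetric KL divergence: KL(p || q) + KL(q || p)."""
    return kl(p, q) + kl(q, p)

p = [0.7, 0.2, 0.1]
q = [0.1, 0.2, 0.7]
print(sym_kl(p, q))  # positive, and equal to sym_kl(q, p)
print(sym_kl(p, p))  # 0.0
```

Symmetrizing removes the directional asymmetry of plain KL, which matters when neither model in a migration pair is privileged as the "true" reference.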

Cited by 1