### Table 7. Constraint results for divergence retrieval model (KL, J, JS, K)

### Table 6: Transition semantics for annotated divergent choice

1996

Cited by 90

### Table 1: Symmetric KL divergence for pairs of authors

"... In PAGE 7: ... We searched only over authors who wrote more than 5 papers in the full NIPS data set; there are 125 such authors out of the full set of 2037. Table 1 shows the 5 pairs of authors with the highest averaged sKL for the 400-topic model, as well as the median and minimum. Results for the 200- and 100-topic models are also shown, as are the number of papers in the data set for each author (in parentheses) and the number of co-authored papers in the data set (2nd column).... In PAGE 8: ... Similarly, although A. Moore and R. Sutton have not co-authored any papers to our knowledge, they have both (separately) published extensively on the same topic of reinforcement learning. The distances between the authors ranked highly (in Table 1) are significantly lower than the median distances between pairs of authors. The topic distributions for different authors can also be used to assess the extent to which authors tend to address a single topic in their work, or cover multiple topics.... ..."
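The averaged symmetric KL (sKL) between author-topic distributions discussed in this snippet can be sketched as follows. This is a minimal illustration, not the paper's implementation; the toy 4-topic distributions `a` and `b` are hypothetical:

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions.
    Terms with p[i] == 0 contribute 0 by convention."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def skl(p, q):
    """Symmetric KL: the average of the two directed divergences."""
    return 0.5 * (kl(p, q) + kl(q, p))

# Two hypothetical author-topic distributions over 4 topics.
a = [0.7, 0.1, 0.1, 0.1]
b = [0.1, 0.7, 0.1, 0.1]
print(skl(a, b))   # large: the authors concentrate on different topics
print(skl(a, a))   # identical distributions give 0
```

Unlike directed KL, `skl` is symmetric in its arguments, which is what makes it usable as an author-to-author distance.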

### Table 2. Comparison of MSQE and KL divergence for the three algorithms in the face data set. The standard deviations of MSQE and J over the Monte Carlo runs are not provided as they were negligible.

2004

"... In PAGE 4: ... The bias in the LBG towards the centroids can also be seen on the shoulder region, in terms of the excessive number of PEs. For a quantitative comparison, the MSQE and J(W) for VQKL, LBG, and SOM are provided in Table 2. As before, the LBG is better in terms of MSQE, while the VQKL outperforms the other two algorithms in terms of KL divergence.... ..."

Cited by 1

### Table 1: KL-divergence and component weights for the four component model.

1999

"... In PAGE 5: ... In the next experiment, the mixture parameters were calculated by minimizing the KL-distance and the SE and ESE distance measures using four and five mixture components. Tables 1 and 2 show that for four or five mixture components only three mixture components obtain weights of more than 1% for all approaches. Figures 2 and 3 plot q_ij (the probability for a positive finding of node j in scenario i) for the solutions obtained by the three algorithms.... ..."

Cited by 1

### Table 1: Summary of the results presented in this paper. If i ≠ j and A(i) = A(j), then B( (i)) ≠ B( (j)). That is, a particular pair of blocks cannot be placed into correspondence more than once. This allows us to keep the measure from diverging if a negative-cost pairing exists, and the substring families do not have to be disjoint. More formally,

1997

"... In PAGE 6: ... In this paper, we examine the various cases, show which are hard, and give algorithms for those that are solvable in polynomial time. Table 1 summarizes our results. 3 CD-CD Block Edit Distance is NP-complete In this section we show that if both substring families must be disjoint covers, the block edit distance problem is NP-complete.... In PAGE 12: ... This shows that CD-CD is NP-complete, completing the proof of the theorem. □ To finish the last two hard entries in Table 1, we must make minor changes to the cost function. Theorem 3 The CD-CD and CD-CD block edit distance problems are NP-complete.... ..."

Cited by 44

### Table 1: Analytic examples of distance calculations for three common probability distributions. The calculated Kullback-Leibler distance is D(p₁‖p₀) (important only for the asymmetric Poisson case). The Gaussian and Laplacian examples differed in mean only. All of these need to be divided by ln 2 to correspond to base-2 logarithms.

2001

"... In PAGE 6: ... This graphical depiction of these distance measures suggests that as the two Kullback-Leibler distances differ more and more, the greater the discrepancy between the Chernoff distance from the J-divergence and the Bhattacharyya distance. Table 1 shows some analytic examples. These examples illustrate some of the variations be-... ..."

Cited by 12
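Two of the analytic forms that a table of this kind typically contains follow from standard results (a sketch in natural logarithms; divide by ln 2 for bits, as the caption notes). For Gaussians with common variance σ² differing only in mean, and for Poisson distributions with means λ₁, λ₀:

```latex
D(p_1 \,\|\, p_0) = \frac{(\mu_1 - \mu_0)^2}{2\sigma^2}
\qquad \text{(Gaussian, equal variance } \sigma^2\text{)}

D(p_1 \,\|\, p_0) = \lambda_1 \ln\frac{\lambda_1}{\lambda_0} - (\lambda_1 - \lambda_0)
\qquad \text{(Poisson; asymmetric in } \lambda_0, \lambda_1\text{)}
```

The Gaussian case is symmetric in the two means, while the Poisson expression changes under swapping λ₁ and λ₀, which is why the caption flags the direction of D(p₁‖p₀) as important only there.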