### Table 3 shows the EER obtained for different baseline sparse structures. SEM is structural EM. The first column corresponds to zeroing the pairs with the minimum values of the corresponding criterion; the second column to zeroing the pairs with the maximum values. The second column is more of a consistency check: if the min entry of criterion A is lower than the min entry of criterion B, then the max entry of criterion A should be higher than the max entry of criterion B. Table 3 shows results improved over the full-covariance case, but they are not better than the diagonal-covariance case. For structural EM, pruning step sizes of 50 and 100 were tested and no difference was observed.
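
As a sketch of this pruning scheme (a minimal illustration, not the paper's implementation: `zero_pairs`, the criterion matrix, and the pruning fraction are all hypothetical), zeroing the coefficient pairs with the minimum or maximum criterion values can be written as:

```python
import numpy as np

def zero_pairs(R, criterion, frac=0.2, mode="min"):
    """Zero a fraction of the off-diagonal coefficient pairs of R.

    criterion holds one score per pair (a stand-in for whatever
    structure-finding criterion is used); mode="min" zeroes the pairs
    with the smallest scores, mode="max" the largest, which serves as
    the consistency check described above.
    """
    R = R.copy()
    iu = np.triu_indices_from(R, k=1)        # upper-triangular pairs
    scores = criterion[iu]
    k = int(frac * scores.size)              # number of pairs to prune
    order = np.argsort(scores)
    idx = order[:k] if mode == "min" else order[-k:]
    rows, cols = iu[0][idx], iu[1][idx]
    R[rows, cols] = 0.0
    R[cols, rows] = 0.0                      # keep the structure symmetric
    return R
```

With both variants applied to the same criterion matrix, the consistency check amounts to verifying that min and max prune disjoint sets of pairs whenever `frac <= 0.5`.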

"... In PAGE 7: ... Table3 : EER for different sparse structures, left number is for 30 second test utterances and right number for 3-second. 5 Conclusions In this work the problem of estimating sparse regression matrices of mixtures of Gaussians was addressed.... ..."

### Table 8. Comparison between the nondense results from Barron et al. (1994), Weber and Malik (1995), Ong and Spann (1999) and our results for the Yosemite sequence with cloudy sky. AAE = average angular error. CLG = average angular error of the 3-D CLG method with the same density. The sparse flow field has been created using our energy-based confidence criterion. The table shows that using this criterion clearly outperforms all results in the evaluation of Barron et al.

2005

"... In PAGE 16: ... (h) Ditto for the sequence with clouds. A quantitative evaluation of our confidence measure is given in Table8 . Here we have used the energy- based confidence measure to sparsify the dense flow field such that the reduced density coincides with den- sities of well-known optic flow methods.... In PAGE 17: ... (b) Right: Locations where the angular error is lowest (20% quantile). In Table8 we also observe that the angular error decreases monotonically under sparsification over the entire range from 100% down to 2.4%.... ..."

Cited by 38

### Table 1: Dense and sparse data sets used in comparison.

"... In PAGE 5: ... The data sets were standardized to make each column of A have zero mean and unit variance. Table1 summarizes the statistics of the data sets used in comparison. Stopping criteria MOSEK uses the duality gap as the stop- ping criterion like our method.... ..."

### Table 2: An added criterion for almost C1P

2003

"... In PAGE 23: ... Thus, the extended definition should be treated as a rule of thumb only. Still, Table2 shows that choosing c = 2 classifies the problems of Section 6 properly. Another field of research is motivated by results in [Ruf02] showing that some sparse matrices can be transformed to almost have the C1P via column permutation.... ..."

Cited by 2

### Table 1 shows small improvements for 15 training sentences by using minDMI over the mixture of full Gaussians. For 5 training sentences all the sparse structures seem to perform about the same, and equal to the full case. Interestingly, if we estimate the structure with 15 sentences but do the training with 5 sentences, we see a clear advantage of minDMI over the baseline. This shows that the structure-finding criterion is valid, but also that estimates of mutual information are strongly dependent on the amount of training data available.

"... In PAGE 4: ... The metric used in all experiments was Equal Error Rate. In Table1 the best results are reported for di erent con gurations. DMI stands for Di erence Mutual Information.... In PAGE 4: ...7 10.1 training from 5 sents Table1 : EER for di erent sparse structures selected using mutual information criteria.... ..."

### Table 7: Evaluation of sparse trigram probabilities using the numeric approach.

"... In PAGE 20: ... This is obtained from the solutions of The MAXENT criterion (with the 8 constraints) problem: eq1 := x1 x7 8 + x1 x4 x6 8 + x1 x3 x6 8 + x1 x3 x4 x6 x4 8 + x1 x2 x6 8 + x1 x2 x4 x5 8 + x1 x2 x3 x5 x4 8 + :001 = 1 eq2 := x1 x2 x6 8 + x1 x2 x4 x5 8 + x1 x2 x3 x5 x4 8 + :001 = :250 eq3 := x1 x3 x6 8 + x1 x3 x4 x6 x4 8 + x1 x2 x3 x5 x4 8 + :001 = :200 eq4 := x1 x4 x6 8 + x1 x3 x4 x6 x4 8 + x1 x2 x4 x5 8 + :001 = :400 eq5 := x1 x2 x3 x5 x4 8 + :001 = :010 eq6 := x1 x3 x4 x6 x4 8 + :001 = :010 eq7 := x1 x2 x3 x4 x5 x6 x7 x8 = :001 eq8 := 7 x1 x7 8 + 6 (x1 x4 x6 8 + x1 x3 x6 8 + x1 x2 x6 8) + 4 (x1 x3 x4 x6 x4 8 + x1 x2 x3 x5 x4 8) + 5 x1 x2 x4 x5 8 + :001 = h(A) solve(eq1,eq2,eq3,eq4,eq5,eq6,eq7,eq8) fx8 = x8, x7 = 2:234567901 x8, x3 = :6306620209 x8, h(A) = 6:129000000, x2 = :4285714286 x8, x4 = :9512195122 x8, x6 = :05227369316 x8, x1 = :2870000000x7 8 , x5 = :1160220994 x8g Finally, the coe cients h i of the linear decomposition of P ((wi; ti) j (wi?1; ti?1); (wi?2; ti?2)) can be written as: 8 gt; gt; gt; lt; gt; gt; : h i = ?1 :10e?1 = ?100 8i = 1; ::; 3 h 4 = ?:20 :10e?1 = ?20 h 5 = 0 Kh = 7?6:12900000000?:10e?1 :10e?1 = 86:10000000000 : (44) We have computed an estimation of the three sparse events using the numerical and symbolic approach. The results are illustrated in Table7 and Table 8. 7 Conclusion and Future Work A novel approach based on probabilistic logic is proposed to solve the problem of sparse events (or con gurations).... ..."

### Table 4: EER for different sparse structures, left number is for 30 second test utterances and right number for 3-second.

"... In PAGE 4: ... All structure-finding experiments are with the same num- ber of components and percent of regression coefficients pruned. Table4... In PAGE 5: ...Table 4: EER for different sparse structures, left number is for 30 second test utterances and right number for 3- second. From Table4 we can see improved results from the full-covariance case but results are not better than the diagonal-covariance case. All criteria appear to perform similarly.... In PAGE 5: ... All criteria appear to perform similarly. Table4 also shows that zeroing the regression coefficients with the maximum of each criterion func- tion does not lead to systems with much different perfor- mance. Also from Table 3 we can see that randomly ze- roing regression coefficients performs approximately the same as taking the minimum or maximum.... ..."

### Table 2. The selected 26 genes in Colon cancer data. Index denotes the serial number of the selected gene in the original data. Hits is the number-of-hits criterion used in our algorithm. Rank denotes the rank in the p-values of Wilcoxon rank sum test. SLR (8) denotes the rank in the 8 genes selected by the sparse logistic regression algorithm of [Shevade and Keerthi, 2003]. RFE (7) denotes the rank in the 7 genes selected by recursive feature elimination using the support vector machines of [Guyon et al., 2002].

2005

"... In PAGE 4: ... A lower LOO error can be achieved using the gene rank of our algorithm, although this may involve using more genes than when using the p-value rankings on the colon and leukaemia data. The selected genes are listed in Table2 - 4 with more des- criptions. In Table 2, we found that all the 8 genes selected by Shevade and Keerthi [2003] and 6 of 7 genes selected by Guyon et al.... In PAGE 4: ... The selected genes are listed in Table 2 - 4 with more des- criptions. In Table2 , we found that all the 8 genes selected by Shevade and Keerthi [2003] and 6 of 7 genes selected by Guyon et al. [2002] are also in our list.... ..."

### Table 1. Parallel execution times for adaptive sparse grids. A 3D convection-diffusion problem is solved and the solution times in seconds on Parnass2 are given.

"... In PAGE 5: ... We consider adaptively re ned sparse grids for a problem with singular- ities, where the sparse grids are re ned towards a singularity located in the lower left corner. Table1 depicts wall clock times in the adaptive case. Due to the solution-dependent adaptive re nement criterion, the single processor version contained slightly more nodes, indicated by .... ..."