
### Table 6: Finite Sample Distribution of T_n

"... In PAGE 13: ... In Table 5 we present the finite sample means and sample variances of T_n under H_0 for all three samples. We report the 95% critical value in Table 6 and the power of the test in Table 7. We also perform a Monte Carlo study to obtain the real sizes of the test in finite samples and compare them with the nominal sizes.... In PAGE 13: ... A detailed examination of Table 3 and Table 5 reveals that the asymptotic distribution of T_n is very close to the finite sample distribution of T_n across all three samples and all finite-variance distributions. Not surprisingly, therefore, we end up with the same conclusions from Table 4 and Table 6. Table 4 indicates that, for all three samples,... In PAGE 14: ... Finally, Table 7 provides the evidence that in finite samples our test has very good power. From Table 6 and Figure 1, we note that in terms of the size of the test, it works quite well for the normal distribution, the Student t distribution, the mixture of normals distribution, the compound log-normal and normal model, and the Weibull distribution. Although the size distortions are larger for the mixed diffusion jump model, the biases suggest under-rejection of the model and hence support our finding of rejection of all finite-variance distributions in the above empirical study.... ..."
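The snippet compares the real (Monte Carlo) size of a test with its nominal size. A generic sketch of that procedure, using a hypothetical statistic T_n = |sqrt(n) * sample mean|, a standard-normal stand-in for the H_0 data-generating process, and the asymptotic 1.96 cutoff; none of these are the paper's actual choices:

```python
import numpy as np

rng = np.random.default_rng(42)

def empirical_size(stat, critical_value, n_obs, n_reps=2000):
    """Monte Carlo estimate of a test's real size under H0.

    stat           : function mapping a sample to the test statistic T_n
    critical_value : nominal (asymptotic) critical value
    Draws n_reps samples under H0 and reports the rejection frequency,
    which should be close to the nominal size if the asymptotics hold.
    """
    rejections = 0
    for _ in range(n_reps):
        sample = rng.standard_normal(n_obs)   # stand-in H0 distribution
        if stat(sample) > critical_value:
            rejections += 1
    return rejections / n_reps

# example: T_n = |sqrt(n) * mean| with the asymptotic 5% two-sided cutoff 1.96
size = empirical_size(lambda x: abs(np.sqrt(len(x)) * x.mean()), 1.96, n_obs=200)
```

A real size near the nominal 5% supports the asymptotic approximation; systematic deviation is the "size distortion" the excerpt discusses.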

### Table 3: Word error rates for corrupted speech enhanced adaptively. The general models contain 128 mixture components in the speech state.

1998

"... In PAGE 3: ... This was then mapped to an insertion penalty. The first column of Table 3 summarises the enhancement results obtained using this scheme. We see that substantial improvements have been made over baseline results and that the error rates are comparable to the matched model results in Table 1.... In PAGE 3: ... For this system, the forward-backward equations are used instead of Viterbi alignment to calculate the likelihood of each mixture component of the compensated model. The second column of Table 3 summarises the results for this system. These results are inferior to the word-based system despite having a comparable number of mixture components.... ..."

Cited by 1
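The excerpt swaps Viterbi alignment for the forward-backward equations when scoring model components. The difference is visible on a toy HMM: the forward recursion sums likelihood over all state paths, while Viterbi keeps only the single best path, so the forward score is never smaller. All numbers below are invented:

```python
import numpy as np

def logsumexp(a, axis=0):
    """Numerically stable log(sum(exp(a))) along an axis."""
    m = a.max(axis=axis)
    return m + np.log(np.exp(a - np.expand_dims(m, axis)).sum(axis=axis))

def forward_loglik(log_pi, log_A, log_B):
    """Log-likelihood summed over ALL state paths (forward recursion)."""
    alpha = log_pi + log_B[0]
    for t in range(1, len(log_B)):
        alpha = log_B[t] + logsumexp(alpha[:, None] + log_A, axis=0)
    return logsumexp(alpha)

def viterbi_loglik(log_pi, log_A, log_B):
    """Log-likelihood of the single BEST state path (Viterbi recursion)."""
    delta = log_pi + log_B[0]
    for t in range(1, len(log_B)):
        delta = log_B[t] + (delta[:, None] + log_A).max(axis=0)
    return delta.max()

# toy 2-state HMM with made-up parameters
log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.7, 0.3],
                [0.4, 0.6]])    # A[i, j] = P(next state j | state i)
log_B = np.log([[0.9, 0.2],     # B[t, j] = P(obs_t | state j)
                [0.8, 0.3],
                [0.1, 0.7]])
fwd = forward_loglik(log_pi, log_A, log_B)
vit = viterbi_loglik(log_pi, log_A, log_B)
```

Replacing the max with a sum is the whole change; the forward-backward score credits every alignment, which is why it can give different (here, per the excerpt, worse) component likelihood estimates than the best-path approximation.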


### Table 3. Adaptive Algorithm with Finite-Mixture for Noiseless ICA

Score function: Gaussian mixture or derivative-sigmoid mixture. With the old mixture parameters fixed, update $W_{\text{new}} = W_{\text{old}} + \eta\,(I + \varphi(y)\,y^T)\,W_{\text{old}}$, where $\varphi(y) = [\varphi_1(y_1), \ldots, \varphi_k(y_k)]^T$.

1997

Cited by 19
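The update in this entry is a relative-gradient ICA step. A minimal sketch, substituting a plain tanh score for the paper's finite-mixture score function and using a made-up learning rate eta:

```python
import numpy as np

rng = np.random.default_rng(0)

def ica_step(W, x, eta=0.01):
    """One relative-gradient update for noiseless ICA.

    W   : current unmixing matrix, shape (k, k)
    x   : batch of observations, shape (n, k)
    eta : learning rate (hypothetical fixed value)
    """
    y = x @ W.T                      # source estimates y = W x
    phi = -np.tanh(y)                # stand-in score; the paper fits a
                                     # finite mixture for this function
    k = W.shape[0]
    # W_new = W_old + eta * (I + E[phi(y) y^T]) W_old
    G = np.eye(k) + (phi.T @ y) / len(y)
    return W + eta * G @ W

# toy demo: two super-Gaussian (Laplace) sources mixed by a random matrix
S = rng.laplace(size=(2000, 2))
A = rng.normal(size=(2, 2))
X = S @ A.T
W = np.eye(2)
for _ in range(400):
    W = ica_step(W, X)
```

The relative-gradient form multiplies the update by W itself, which keeps the iteration equivariant to the unknown mixing; the mixture-based score in the paper adapts phi to each source's density instead of fixing it to tanh.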

### Table 2: EM algorithm for the hit-miss mixture model.

2005

"... In PAGE 65: ...3 indicate that case-based precision estimates may allow for more robust decision rules when the loss function is asymmetrical. Table 2 shows that over a rather large domain in the UCI mushrooms data set (five classifiers trained on different portions of the database and each applied to 100 cases) the Bayesian-bootstrap-based decision rule has better specificity, and lower average loss for variable misclassification costs. A plausible explanation for this is the tendency of the naive Bayes classifier to probability overshoot, i.... ..."

### Table 4: Mixture weights for the 3 models

2005

"... In PAGE 21: ...anually created, from our corpus, a held-out reference lexicon, denoted l, containing ca. 1,200 pairs. The E-step formula of the EM algorithm is then defined as follows: $\langle I_{ist} \rangle = \frac{P(i)\,P_i(t \mid s)}{\sum_k P(k)\,P_k(t \mid s)}$, where $\langle I_{ist} \rangle$ denotes the probability of choosing model i from the pair (s,t). Table 4 below presents the mixture weights we obtained when model 2 is based on the complete n search (with n = 200 and n = 1) and the subtree search (with p = 20). ... ..."

Cited by 2
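The E-step in this snippet assigns each pair (s,t) a probability of having come from model i, and the M-step re-estimates the mixture weights P(i) from those responsibilities. A minimal sketch with made-up per-pair likelihoods, assuming the component models themselves stay fixed:

```python
import numpy as np

def em_mixture_weights(likelihoods, n_iter=100):
    """Estimate mixture weights P(i) for fixed component models.

    likelihoods : array (n_pairs, n_models); entry [s, i] holds P_i(t|s),
                  the likelihood of pair (s, t) under model i.
    """
    n, m = likelihoods.shape
    P = np.full(m, 1.0 / m)                   # uniform initial weights
    for _ in range(n_iter):
        # E-step: <I_ist> = P(i) P_i(t|s) / sum_k P(k) P_k(t|s)
        resp = P * likelihoods
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: P(i) = average responsibility of model i
        P = resp.mean(axis=0)
    return P

# toy demo: 3 hypothetical models scored on 4 pairs
L = np.array([[0.9, 0.05, 0.05],
              [0.8, 0.10, 0.10],
              [0.2, 0.70, 0.10],
              [0.6, 0.20, 0.20]])
w = em_mixture_weights(L)
```

Because only the weights are updated, each iteration is a closed-form averaging step, and the weights stay on the probability simplex by construction.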

### Table 2: Circuit Model in Finite Field Size GF(8)

"... In PAGE 6: ...nodes on top of the SMODD and optimized using an ordering algorithm. Table 2 presents the results for the circuits modeled in GF(8). The spatial complexity is presented in terms of node count and the speed is presented in terms of average path length and random pattern simulation.... ..."
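Circuit modeling over GF(8) relies on finite-field arithmetic: elements of GF(2^3) are degree-2 polynomials over GF(2), multiplied modulo an irreducible polynomial. A sketch assuming the common choice x^3 + x + 1 (the excerpt does not say which polynomial the authors use):

```python
def gf8_mul(a, b, poly=0b1011):
    """Multiply two GF(2^3) elements, each encoded as a 3-bit integer
    whose bits are polynomial coefficients, reducing modulo
    x^3 + x + 1 (0b1011), an assumed irreducible polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # add (XOR) the current shift of a
        b >>= 1
        a <<= 1             # multiply a by x
        if a & 0b1000:      # degree reached 3: reduce modulo poly
            a ^= poly
    return r

# x * x = x^2, i.e. 2 * 2 = 4; x^2 * x = x^3 = x + 1, i.e. 4 * 2 = 3
products = [gf8_mul(2, 2), gf8_mul(4, 2)]
```

Addition in GF(2^3) is plain XOR; multiplication by any nonzero element permutes the seven nonzero elements, which is what makes GF(8) a field and usable as the value domain of an SMODD node.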