### Table 1. The initial parameter values needed by the EM algorithm

2003

"... In PAGE 4: ... Thus, we developed an automatic method for choosing them. The initial values of the parameters are set according to Table 1. Let h(x) be the normalized observed histogram, and let h_R^init(x) and h_G1^init(x) be the initial histograms of the Rayleigh and normal distributions, respectively, as defined by Eq.... ..."
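The snippet describes deriving the EM starting point automatically from the observed histogram, but the paper's actual Eq. is not shown. As a minimal sketch of that idea, with the split heuristic and all names (`init_params`, the peak-based Rayleigh scale, the equal starting weights) being our own assumptions rather than the paper's method:

```python
import numpy as np

def init_params(samples, bins=64):
    """Hypothetical sketch: pick initial EM parameters for a
    Rayleigh + Gaussian mixture from the normalized histogram h(x).
    The estimators below are illustrative, not the cited paper's Eq."""
    h, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # The mode of a Rayleigh density equals its scale parameter,
    # so use the histogram peak as the initial Rayleigh scale.
    sigma_r = centers[np.argmax(h)]
    # Initialize the Gaussian from the samples above that peak.
    upper = samples[samples > sigma_r]
    mu_g, sd_g = upper.mean(), upper.std()
    # Equal mixing weights as a neutral starting point.
    return sigma_r, mu_g, sd_g, 0.5, 0.5
```

EM would then refine these values; only rough placement matters for initialization.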

Cited by 2

### Table 1: Experimental results. EM is the conventional EM with random initialization, and EMCS is the proposed EM with component splitting.

2003

"... In PAGE 7: ...1). Table 1(a) shows the results over 30 trials with different random numbers. We use the on-line EM algorithm ([7]), presenting data one-by-one in a random order.... In PAGE 7: ... We use the mixture of PCA with 10 components of rank 4, and obtain a compressed image by X̂ = F_h (F_h^T F_h)^{-1} F_h^T X, where X is a 64-dimensional block and h indicates the component with the shortest Euclidean distance ||X − μ_h||. Table 1(b) shows the residual square error (RSE), Σ_{j=1}^{400} ||X_j − X̂_j||², which shows the quality of the compression. In both experiments, we can see the better optimization performance... ..."
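The compression step quoted above is concrete enough to sketch: assign each 64-dimensional block to the nearest component and reconstruct it with that component's rank-4 basis via X̂ = F_h (F_h^T F_h)^{-1} F_h^T X. The function and argument names below are our own, and `means`/`bases` are assumed to come from an already-trained mixture of PCA:

```python
import numpy as np

def compress(blocks, means, bases):
    """Sketch of the quoted mixture-of-PCA reconstruction: each block X
    is assigned to the component h whose mean mu_h is nearest, then
    projected onto that component's basis F_h (shape (64, 4))."""
    recon = np.empty_like(blocks)
    for j, x in enumerate(blocks):
        h = np.argmin([np.linalg.norm(x - m) for m in means])
        F = bases[h]
        # Least-squares solve gives the coefficients of the projection
        # F (F^T F)^{-1} F^T x onto span(F).
        recon[j] = F @ np.linalg.lstsq(F, x, rcond=None)[0]
    return recon

def rse(blocks, recon):
    """Residual square error: sum_j ||X_j - X_hat_j||^2."""
    return float(((blocks - recon) ** 2).sum())
```

Since each reconstruction is an orthogonal projection, the RSE can never exceed the total energy of the blocks themselves.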

Cited by 1

### Table 3. Adaptive Algorithm with Finite-Mixture for Noiseless ICA

1997

Cited by 19

### Table 1: Preset parameters and their estimates by the BSOM and EM algorithm

"... In PAGE 4: ...2 Experiment (A) A Mixture of Three 2-D Gaussians The data samples used here are the same as those in [7]. The preset parameters of the mixture are given in Table 1. In total, 1000 data points were generated independently from this mixture.... In PAGE 5: ... The EM algorithm for this light-overlapping case has little difficulty in converging to these clusters, though occasionally it does produce poor results for some random initial mean vectors. Typical results of both algorithms after 20 episodes are listed in Table 1. As can be seen, there is almost no difference in the final results of the two methods.... ..."

### Table 2: EM algorithm for the hit-miss mixture model.

2005

"... In PAGE 65: ...3 indicate that case-based precision estimates may allow for more robust decision rules when the loss function is asymmetrical. Table 2 shows that over a rather large domain in the UCI mushrooms data set (five classifiers trained on different portions of the database and each applied to 100 cases) the Bayesian-bootstrap-based decision rule has better specificity, and lower average loss for variable misclassification costs. A plausible explanation for this is the tendency of the naive Bayes classifier to probability overshoot, i.... ..."

### TABLE 2: SUMMARY OF EMSQ AND EM4D

1983

Cited by 2

### Table 10 Speaker identification: performance of the modified mixture of experts trained by different learning algorithms in the inner step of the EM algorithm

"... In PAGE 21: ... For comparison, we also adopted the IRLS algorithm and the BFGS algorithm in the inner loop of the EM algorithm, respectively, to train the modified ME classifiers. Simulation results are also shown in Table 10. Simulation results show that the use of the proposed learning algorithm produces the best identification rates in general.... In PAGE 21: ...4026. In contrast, the performance of our approximation algorithm is similar to that of the BFGS algorithm in general, but the BFGS algorithm still does not yield significantly faster learning, as shown in Table 10. Once again, the use of the IRLS algorithm still results in poor performance.... ..."

### Table 9 Speaker identification: performance of the mixture of experts trained by different learning algorithms in the inner-loop of the EM algorithm

"... In PAGE 20: ...4425. In contrast to our approximation algorithm, the BFGS algorithm produces similar performance, but does not lead to significantly faster learning, as shown in Table 9. In addition, the proposed learning algorithm and its approximation outperform the IRLS algorithm.... ..."

### Table 4: Mixture weights for the 3 models

2005

"... In PAGE 21: ... manually created, from our corpus, a held-out reference lexicon, denoted l, containing ca. 1,200 pairs. The E- and M-step formulas of the EM algorithm are then defined as follows: ⟨I_ist⟩ = P(i) P(t|s, i) / Σ_k P(k) P(t|s, k), where ⟨I_ist⟩ denotes the probability of choosing model i for the pair (s,t). Table 4 below presents the mixture weights we obtained when model 2 is based on the complete n search (with n = 200 and n = 1) and the subtree search (with p = 20). ... ..."
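The E-step responsibility ⟨I_ist⟩ = P(i) P(t|s, i) / Σ_k P(k) P(t|s, k) quoted above, with the standard M-step of averaging responsibilities to re-estimate the weights P(i), can be sketched as follows. The function name and the `models` interface (one callable per translation model returning P(t|s, i)) are our assumptions, not the paper's code:

```python
def em_mixture_weights(models, pairs, n_iter=50):
    """Sketch of EM over mixture weights P(i) for lexicon pairs (s, t).
    E-step: <I_ist> = P(i) P(t|s,i) / sum_k P(k) P(t|s,k).
    M-step: P(i) becomes the average responsibility over all pairs."""
    n = len(models)
    w = [1.0 / n] * n                       # uniform initial weights
    for _ in range(n_iter):
        resp_sums = [0.0] * n
        for s, t in pairs:
            lik = [w[i] * models[i](s, t) for i in range(n)]
            z = sum(lik)                    # denominator sum over k
            for i in range(n):
                resp_sums[i] += lik[i] / z  # E-step responsibility
        w = [r / len(pairs) for r in resp_sums]  # M-step update
    return w
```

Because each pair's responsibilities sum to one, the updated weights always form a proper distribution, and the weight mass shifts toward whichever model better explains the held-out pairs.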

Cited by 2