### Table 1: Preset parameters and their estimates by the BSOM and EM algorithm

"... In PAGE 4: ...2 Experiment (A) A Mixture of Three 2-D Gaussians The data samples used here are the same as those in [7]. The preset parameters of the mixture are given in Table 1. In total, 1000 data points were generated independently from this mixture.... In PAGE 5: ... The EM algorithm for this light-overlapping case has little difficulty in converging to these clusters, though occasionally it does produce poor results for some random initial mean vectors. Typical results of both algorithms after 20 episodes are listed in Table 1. As can be seen, there is almost no difference in the final results between the two methods.... ..."
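The experiment described above (EM fit to a three-component 2-D Gaussian mixture from 1000 independent samples, run for 20 episodes with random initial mean vectors) can be sketched as follows. The preset means and weights below are hypothetical stand-ins, since Table 1's actual values are not reproduced in the excerpt, and the spherical-covariance EM is a generic sketch rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical preset parameters (stand-ins for Table 1's actual values).
true_means = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
true_weights = np.array([0.3, 0.3, 0.4])

# 1000 points drawn independently from the mixture, as in the experiment.
comp = rng.choice(3, size=1000, p=true_weights)
X = true_means[comp] + rng.standard_normal((1000, 2))

def em_gmm(X, k, iters=20):
    """Plain EM for a spherical-covariance Gaussian mixture."""
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)]   # random initial mean vectors
    pi = np.full(k, 1.0 / k)
    var = np.full(k, X.var())
    for _ in range(iters):                    # 20 "episodes"
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        log_r = np.log(pi) - 0.5 * d2 / var - 0.5 * d * np.log(var)
        r = np.exp(log_r - log_r.max(1, keepdims=True))
        r /= r.sum(1, keepdims=True)
        # M-step: re-estimate weights, means, then variances about the new means
        nk = r.sum(0)
        pi, mu = nk / n, (r.T @ X) / nk[:, None]
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * d2).sum(0) / (d * nk)
    return pi, mu, var

pi_hat, mu_hat, var_hat = em_gmm(X, 3)
```

As the excerpt notes, a poor random initialization can still trap such a run in a bad local optimum; in the light-overlapping case this is rare.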

### Table 1: An EM algorithm for MoG-based ICA

"... In PAGE 5: ... The EM algorithm for the mixture model in (7) is essentially a simple clustering algorithm with a complexity that grows linearly with respect to the number of sources. It can be implemented in exactly the same manner as the full MoG model ( Table1 ) with a restricted set of allowable indices. However, given the simplicity of (7), we are also able to make a further algorithmic improvement that speeds up convergence using an extension of EM called alternating expectation conditional maximisation (AECM) [22].... ..."

### Table 2: EM algorithm for the hit-miss mixture model.

2005

"... In PAGE 65: ...3 indicate that case-based precision estimates may allow for more robust decision rules when the loss function is asymmetrical. Table 2 shows that over a rather large domain in the UCI mushrooms data set (five classifiers trained on different portions of the database and each applied to 100 cases) the Bayesian bootstrap based decision rule has better specificity, and lower average loss for variable misclassification costs. A plausible explanation for this is the tendency of the naive Bayes classifier to probability "overshoot", i.... ..."
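The asymmetric-loss decision rule the excerpt alludes to can be sketched as choosing the action that minimizes posterior expected loss. The cost matrix below is a hypothetical illustration, not the values used in the paper.

```python
import numpy as np

def bayes_decision(posterior, loss):
    """Return the action minimizing expected loss.
    posterior[c]: probability of class c; loss[a, c]: cost of action a if truth is c."""
    return int(np.argmin(loss @ posterior))

# Hypothetical asymmetric costs for the mushroom setting: calling a poisonous
# mushroom edible (action 0 when truth is class 1) costs far more than the reverse.
loss = np.array([[0.0, 100.0],
                 [1.0, 0.0]])

# Even at only 5% posterior probability of "poisonous", rejecting (action 1) is cheaper.
assert bayes_decision(np.array([0.95, 0.05]), loss) == 1
```

This is why a classifier whose probabilities "overshoot" (are overconfident) degrades the decision rule: the threshold at which the cheap action flips depends directly on the calibration of the posterior.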

### Table 6.2. WERs obtained using various linear features for the RM data set with different training and testing conditions. The number at the top of each column indicates the number of Gaussians per mixture used for the HMM state observation probabilities in training and testing. A1 through A8 are the new linear features generated from our algorithm with 1, 2, 4, and 8 Gaussians per mixture, respectively, used as observation probabilities.

2005

### Table 2: Average spectral distortion for different CSNR and different coders: (a) Gaussian Channel, (b) Slow-fading Rayleigh Channel.

"... In PAGE 3: ... Average spectral distortion is used as the performance measure. Table 2 shows results for the average spectral distortion (SD). In this table, the row marked CELP shows performance results when scalar quantization is applied to the LSP parameters.... In PAGE 3: ... Rows marked COMQ-21, COMQ-12, COMQ-6 and COMQ-0 show performance results for our technique in which the COMQ quantization codebooks are trained at a CSNR of 21, 12, 6 and 0 dB, respectively. From Table 2 it can be observed that, in general, the performance of the CELP experiment is worse than that of the other experiments, and the performance difference grows as channel noise increases. The exception is when the training of the quantization codebooks is done under a very noisy channel condition.... In PAGE 3: ... For example, at a CSNR of 6 dB, experiment COMQ-6 gives the best performance results compared to the other experiments, for both channel models. From Table 2 it is clear that the COMQ-X coders outperform the other considered coders, especially for a noisy channel. For example, for a Gaussian Channel at a CSNR of 6 dB, COMQ-12 gives a 0.... ..."
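As a reference point, one common definition of the spectral distortion (SD) averaged in Table 2 is the RMS difference of the log power spectra in dB; the exact normalization and frequency weighting used by the paper are not given in the excerpt.

```python
import numpy as np

def spectral_distortion(S, S_hat):
    """RMS log-spectral distortion in dB between two power spectra
    sampled on the same frequency grid (a common SD definition)."""
    d = 10.0 * np.log10(S) - 10.0 * np.log10(S_hat)
    return float(np.sqrt(np.mean(d ** 2)))
```

Under this definition, a quantized spectrum that is uniformly 10x the original has an SD of exactly 10 dB; LSP quantizers are typically judged against a target of about 1 dB average SD.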

### Table 2: Simulated preposterior risk for gmv = 1. All values are 10000 × (Loss). The first block is for the Gaussian-Gaussian model; the second for the Gaussian mixture prior assuming the mixture; the third for the Gaussian mixture prior, but with analysis based on a single Gaussian prior.

"... In PAGE 12: ... 9.2 Comparisons among loss function-based estimates Table 2 reports results for ^Pk, ~Pk(·) and ^~Pk(·) under four loss functions and for the "log-uniform" variance pattern. For the "two-clusters" pattern, differences between estimators are modified relative to those for the log-uniform pattern, but preference relations are unchanged.... In PAGE 13: ... Similar relations among the estimators hold for the two-component Gaussian mixture prior and for a "frequentist scenario" with a fixed set of parameters and repeated sampling only from the Gaussian sampling distribution conditional on these parameters. Results in Table 2 are based on gmv = 1. Relations among the estimators for other values of gmv are similar, but a look at extreme gmv is instructive.... ..."

Cited by 1

### Table 1. Notation for EM for mixture of dynamic textures

"... In PAGE 3: ...epends on hidden variables (i.e. there is missing data). For the dynamic texture mixture, the observed information is a set of video sequences {yi}, and the missing data consists of 1) the assignments of sequences to mixture components (the assignment of sequence yi to the jth mixture component is encoded by the state of the indicator variable z(j)i), and 2) the hidden state sequence x(j)i for yi under component j (see Table 1 for notation). The EM solution is found using an iterative procedure that alternates between estimating the missing information with the current parameters, and computing new parameters given the estimate of the missing information.... ..."
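The first kind of missing information, the component assignments z(j)i, is estimated in the E-step as posterior responsibilities. A minimal sketch, assuming the per-component log-likelihoods of each sequence have already been computed (for dynamic textures this itself requires Kalman smoothing over the hidden state sequence, which is omitted here):

```python
import numpy as np

def soft_assignments(loglik, log_pi):
    """E-step for the indicator variables: z[i, j] = P(component j | sequence i),
    given per-component log-likelihoods loglik[i, j] and log mixture weights log_pi."""
    log_z = loglik + log_pi
    log_z -= log_z.max(axis=1, keepdims=True)   # stabilize before exponentiating
    z = np.exp(log_z)
    return z / z.sum(axis=1, keepdims=True)
```

The M-step then re-estimates each component's parameters from the sequences weighted by these responsibilities, and the two steps alternate as the excerpt describes.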

### Table 4: Performance of the algorithm in the presence of various sources of noise in the mixtures: normalized mean-squared (NSE) and cross-talk (CTE) errors for image separation, applying our multiscale adaptive approach along with Natural Gradient based separation.

2003

"... In PAGE 20: ... Therefore, at sufficiently high signal-to-noise energy ratios (SNR), the large coefficients of the signals are only slightly distorted by the noise coefficients. As a result, the presence of noise has a minor effect on the estimation of the unmixing matrix (see the CTE entries in Table 4). Note that the NSE entries reflect the noise energy passed to the reconstructed sources from the mixtures.... ..."
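One common way to compute the two reported error measures is sketched below: NSE compares each reconstructed source to the true one after optimal rescaling (separation recovers sources only up to scale), and CTE measures the off-diagonal leakage of the global mixing-unmixing matrix G = WA. These are standard definitions and may differ from the paper's exact normalization.

```python
import numpy as np

def nse(s_true, s_est):
    """Normalized squared error between a true source and its reconstruction,
    after least-squares rescaling of the estimate."""
    a = (s_est @ s_true) / (s_est @ s_est)
    return float(np.sum((a * s_est - s_true) ** 2) / np.sum(s_true ** 2))

def cte(G):
    """Cross-talk error per output of a global matrix G = W A:
    off-diagonal (leaked) energy relative to the diagonal (wanted) energy."""
    diag = np.abs(np.diag(G))
    return (np.abs(G).sum(axis=1) - diag) / diag
```

This split matches the excerpt's observation: additive noise in the mixtures barely perturbs the estimate of W (so CTE stays small), but the noise itself still passes through W into the reconstructed sources, which is what NSE picks up.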

Cited by 4


### Table 1. Comparison of Expectation-Maximization Algorithm Estimates With Exact Maximum Likelihood

2003

"... In PAGE 10: ...total number of nonmissing observations and p is the number of parameters estimated in the regression model. The parameter estimates and standard errors are reported in Table 1. The estimates of q1 and q2 are very comparable in the two approaches; that of a is somewhat smaller in the EM approach, as are all three standard errors.... In PAGE 10: ...3. For this the maximum likelihood estimates from Table 1 were used. At the end of this procedure, the mean ȳ·t is added to the prediction ŷ*s0,t to obtain a prediction ŷs0,t; the actual predicted PM2.... ..."

Cited by 4