### Table 1. Non-Gaussian Noise Distributions

2000

"... In PAGE 3: ... There are several well-known non-Gaussian noise distributions [4, 6] satisfying Theorem 3. These distributions are listed in Table 1, where again Γ(·) is the Gamma function, a is a scale parameter related to the common variance σ² of the quadrature components, and the shape parameter rules the rate of decay of the noise pdf. The generalized Cauchy reduces to the Gaussian distribution as the shape parameter → ∞.... ..."

Cited by 3

### Table 4: Experiments for non-Gaussian variations and nonlinear delay.

2007

"... In PAGE 5: ... We compare the solution quality of n2SSTA with the golden Monte Carlo simulation of 100,000 runs. Similar to the experiment setting in [12], Table 4 compares n2SSTA and Monte Carlo simulation in terms of the ratio between sigma and mean, the 95% yield timing, and the runtime in seconds. In the first (or second) set of experiments, all variation sources follow a uniform (or a triangle) distribution.... In PAGE 6: ... This clearly shows that n2SSTA is not only more general but also more accurate than linSSTA. Note that n2SSTA has a larger error for Gaussian variation sources in Table 5 than for uniform or triangle variation sources in Table 4; this is because n2SSTA needs bigger bounds (10) for Gaussian variations than for uniform or triangle variations. Interestingly, we find that both approaches predict the 95% yield point well.... ..."

Cited by 1
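The sigma/mean ratio and 95% yield timing that the snippet above compares can be estimated with a plain Monte Carlo run. The sketch below is illustrative only: it assumes a toy additive delay model with a hypothetical nominal delay and variation ranges, not the benchmarks or bounds from the paper.

```python
import random

random.seed(0)

def mc_delay_stats(sample_variation, n_runs=100_000, n_sources=8):
    """Monte Carlo sketch: path delay = nominal + sum of variation sources.
    Returns (sigma/mean ratio, 95% yield timing)."""
    nominal = 10.0  # hypothetical nominal path delay
    delays = []
    for _ in range(n_runs):
        delays.append(nominal + sum(sample_variation() for _ in range(n_sources)))
    mean = sum(delays) / n_runs
    sigma = (sum((d - mean) ** 2 for d in delays) / n_runs) ** 0.5
    delays.sort()
    yield95 = delays[int(0.95 * n_runs)]  # 95% yield timing
    return sigma / mean, yield95

# First experiment set: uniform variation sources
u_ratio, u_yield = mc_delay_stats(lambda: random.uniform(-0.5, 0.5))
# Second experiment set: triangular variation sources
t_ratio, t_yield = mc_delay_stats(lambda: random.triangular(-0.5, 0.5, 0.0))
print(f"uniform:    sigma/mean={u_ratio:.4f}, 95% yield={u_yield:.3f}")
print(f"triangular: sigma/mean={t_ratio:.4f}, 95% yield={t_yield:.3f}")
```

A triangular source of the same range has lower variance than a uniform one, so its sigma/mean ratio and 95% yield point come out tighter, which is the kind of distribution sensitivity the experiments probe.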

### Table 5.4: Parameters of double-half normal distributions of non-Gaussian noise.

### Table 4: ARMA(2,3) Covariance Matrix using the Non-Gaussian Parameter Estimation Process

1996

"... In PAGE 9: ... This confirms the anticipated results exhibited by equations 30 and 31, which match the simulated parameter values under the two different assumptions. Table 4 provides the covariance matrix, which indicates the interrelationships of the parameters under the power distribution assumption.... ..."

Cited by 1

### Table 13: Amplitude of Non-Gaussianity

2003

"... In PAGE 64: ... (2003) for details of the weighting method.) Given the uncertainties in the source cut-off and the luminosity function, the values for bsrc in Table 13 are consistent with the values of cps in Hinshaw et al. (2006).... In PAGE 64: ... (2006). Table 13 lists the measured amplitude of the non-Gaussian signals in the 3-year maps.... ..."

Cited by 9

### Table 1: Examples of possible BCH codes and the corresponding FNMR and FMR results for non-Gaussian and Gaussian assumptions.

"... In PAGE 9: ... With 255-bit codewords, only a discrete set of secret code lengths s and correctable error lengths e is possible. Several examples and their corresponding bit error rate (BER), FNMR, and FMR are given in Table 1. The FNMR under the assumption of a non-Gaussian distribution is significantly better than under the assumption of a Gaussian distribution,... ..."
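The trade-off the snippet describes — picking the correctable error count e from the discrete set allowed by 255-bit BCH codewords, which then determines the FNMR at a given bit error rate — can be sketched under a simple independent-bit-flip (binomial) channel model. The (e, BER) pairs below are hypothetical placeholders, not the paper's measured values.

```python
from math import comb

def fnmr_binomial(n, e, ber):
    """P(more than e bit errors in an n-bit codeword), assuming independent
    bit flips with probability ber (a simple binomial channel model)."""
    p_correctable = sum(comb(n, i) * ber**i * (1 - ber)**(n - i)
                        for i in range(e + 1))
    return 1.0 - p_correctable

n = 255  # codeword length used in the paper
# Hypothetical (e, BER) pairs for illustration only.
for e, ber in [(8, 0.02), (16, 0.05), (32, 0.10)]:
    print(f"e={e:2d}, BER={ber:.2f}: FNMR={fnmr_binomial(n, e, ber):.3e}")
```

Raising e always lowers the FNMR at a fixed BER, but with BCH it also shortens the usable secret length s, which is why only a discrete set of operating points is available.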

### Table 1: Results for synthetic non-Gaussian data and the handwritten digits dataset. Each non-Gaussian dataset contains 4000 points in 8 dimensions sampled from 20 true clusters, each having a uniform distribution. The eccentricity and c-separation are both 4. We run each algorithm except BKM on ten such datasets, and BKM on one. The digits dataset consists of 10 classes and 9298 examples.

2007

"... In PAGE 7: ... on one of them. The results are shown in the left part of Table 1. G-means and X-means overfit the non-Gaussian datasets, while PG-means and BKM both perform excellently in the number of clusters learned and in learning the true labels according to the VI metric.... In PAGE 7: ... We use random linear projection to project the dataset to 16 dimensions and run PG-means, G-means, X-means, and BKM on it. The results are shown in the right side of Table 1. PG-means gives 14 centers, which is closest to the true value.... ..."

Cited by 1
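The VI (variation of information) metric the snippet uses to score label agreement is computed from the two labelings' entropies and their mutual information, VI(X, Y) = H(X) + H(Y) − 2 I(X; Y). A minimal sketch on made-up toy labelings (not the paper's data):

```python
from math import log
from collections import Counter

def variation_of_information(labels_a, labels_b):
    """VI(X, Y) = H(X) + H(Y) - 2 I(X; Y) between two clusterings."""
    n = len(labels_a)
    pa = Counter(labels_a)                 # cluster sizes in clustering A
    pb = Counter(labels_b)                 # cluster sizes in clustering B
    joint = Counter(zip(labels_a, labels_b))  # joint cluster-pair counts
    h_a = -sum((c / n) * log(c / n) for c in pa.values())
    h_b = -sum((c / n) * log(c / n) for c in pb.values())
    mi = sum((c / n) * log((c / n) / ((pa[a] / n) * (pb[b] / n)))
             for (a, b), c in joint.items())
    return h_a + h_b - 2 * mi

# Identical partitions give VI = 0; any disagreement raises it.
truth   = [0, 0, 1, 1, 2, 2]
perfect = [5, 5, 7, 7, 9, 9]   # same partition, relabeled
merged  = [0, 0, 1, 1, 1, 1]   # clusters 1 and 2 merged
print(variation_of_information(truth, perfect))  # ≈ 0.0
print(variation_of_information(truth, merged))
```

Because VI is a metric and invariant to label permutations, it compares a learned clustering to the true labels without needing any label matching step.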