### Table 4: Impact of Initialization Bias on Variance

1997

Cited by 1

### Table 3. Bias and variance of GLS estimator for

### Table 3: Impact of Initialization Bias on Variance

1997

Cited by 1

### Table 2: Impact of Initialization Bias on Variance

1997

Cited by 1

### Table 2: Average bias of the variance components.

2004

"... In PAGE 17: ...f3, f4 and f5. For each type of response, Table 2 compares averages of estimates with the empirical variances, obtained from the 31 i.... ..."

Cited by 4

### TABLE I Bias and Variance of the Three Local Estimators of Chirp Rate

2000

Cited by 1

### Table 3. Bias and Variance Decomposition of the Error for Archived Data Dataset

"... In PAGE 6: ...e., that obtained from the estimate by subtracting a part of the Bayes error rate (see Equation 8)) could greatly differ from that estimated, which is high (see Table 3) or be comparable in size with the variance. Second, in the experiments performed on synthesized data sets, where the Bayes error rate is known and the bias/variance decomposition is exactly computable, we observed similar behaviors in the changes of bias and variance with respect to the nearest neighbor classifier.... In PAGE 6: ... 4.1 Archived Data Sets The results of the experiments conducted on archived data are shown in Table 3. IB1 is an implementation of the nearest neighbor classifier [Aha, 1992] and is used as a baseline.... In PAGE 7: ... Only on CLOUDS98 did IB1ecoc yield higher errors than IB1opc, because the class encoding space defined by only four classes is too small to generate sufficiently distinctive codewords. From the results, shown in Table 3, we conclude that ECOCs drastically reduce the bias component of the error at the cost of increasing the variance. Bias is always reduced from a minimum of 18% (CLOUDS99) to a maximum of 44% (CLOUDS204).... In PAGE 8: ...e., compare CLOUDS204-100 in Table 3 (100 training instances) with CLOUDS204 (500)), this did not decrease IB1ecoc's variance. However, increasing codeword length does appear to reduce IB1ecoc's variance, as exemplified in Figure 1, which plots the bias and total error of IB1ecoc on CLOUDS204 (using only 100 instances) with different codeword lengths.... ..."

### Table 4. Bias and Variance Decomposition of the Error for Synthesized Data Dataset

"... In PAGE 6: ... This follows from Equation 10 and because all the addenda are positive. Moreover, the nearest neighbor classifier is known to have high bias [Breiman, 1996b], as the decomposition of the error on artificial data sets shows (see Table 4). Therefore, it is unlikely that the true bias on real data sets (i.... In PAGE 9: ...re seven boolean features and ten classes. The Bayes optimal error rate is 0.274. Table 4 shows the decomposition of the error for the same set of algorithms that were applied to the archival data sets. The behavior of IB1ecoc on the synthetic data sets is analogous to what we observed on the archival data; IB1ecoc recorded lower error rates than IB1 by reducing bias and increasing variance.... ..."

### Table 1: Squared bias, variance, and mean squared error in the two asymmetric situations. The falling arrows after the code of the estimator indicate that bias or variance, respectively, goes down with the more distant contamination.

"... In PAGE 7: ... Thus, for each estimator and situation, we obtain two numbers, the bias b (called "Average" in the book) and the variance. Table 1 gives squared bias b2, variance var, and their sum, the mean squared error (MSE). It also indicates when bias and variance, respectively, go down with increasing distance of the contamination; obviously, the former are essentially the redescenders (including rejection rules), while only rarely does the variance go down as well.... In PAGE 7: ... Tukey 1970/71, 1977), I might mention that I presently prefer and use the Vietnamese way of tallying (4 sides of a square and one diagonal); there is also a Japanese way of tallying with 5 lines, which is closely related to the Kanji character for "stop" (a top line and the character). Table 1 and Figure 3 show that there are large differences in variance and large differences in bias (both predictable by the influence function!), but much smaller differences in MSE. It may be noticed that for 2N(2,1), there is only one estimator with small (but... In PAGE 10: ...320 4.324 Table 1: Continued.... ..."
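The decomposition this snippet reports, MSE = squared bias + variance, can be checked with a minimal simulation. The true parameter value and the estimator's sampling distribution below are made up for illustration; they are not the data behind Table 1.

```python
import numpy as np

# Hypothetical simulation of the identity MSE = b^2 + var.
# The "estimates" stand in for an estimator's values over many
# repeated samples; the normal model and numbers are invented.
rng = np.random.default_rng(0)
true_value = 0.0
estimates = rng.normal(loc=0.1, scale=0.5, size=10_000)

bias = estimates.mean() - true_value          # b
squared_bias = bias ** 2                      # b^2
variance = estimates.var()                    # var (population form, ddof=0)
mse = np.mean((estimates - true_value) ** 2)  # MSE

# The three quantities satisfy MSE = b^2 + var up to rounding.
print(abs(mse - (squared_bias + variance)) < 1e-9)  # True
```

Note that the identity holds exactly only with the population variance (`ddof=0`); with the sample variance (`ddof=1`) there is a small O(1/n) discrepancy.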

### Table 2. Example of estimation of bias/variance measures for three cases from ten n-fold cross-validation trials Case 1 Case 2 Case 3

2000

"... In PAGE 31: ... In this situation biasKD > 0.5 and error < biasKD. See, for example, Case 2 of Table 2. As a consequence, varianceKD will be negative.... ..."
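A small numeric illustration of the situation the snippet describes, under the common reading that varianceKD is defined as the error minus biasKD; the specific numbers below are hypothetical, chosen only to match the stated conditions of Case 2.

```python
# Hypothetical Case-2-style numbers. Under the decomposition
# variance_KD = error - bias_KD, a bias above 0.5 combined with
# an error below the bias forces the variance term to be negative.
bias_kd = 0.6   # bias_KD > 0.5 (assumed value)
error = 0.55    # error < bias_KD (assumed value)
variance_kd = error - bias_kd
print(variance_kd < 0)  # True
```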

Cited by 55