### Table 4.1: Comparison of magnitude allocation M and MSE D = ?=N for a pair of IID Gaussian random variables. L2vq is the loss relative to optimal fixed-rate two-dimensional VQ and is decomposed as the sum of the square loss Lsq, oblongitis loss Lob, and point density loss Lpt.

### Table 4. Comparison between SVM, JTSVM, rTSVM and DA (all with quadratic hinge loss (l2)). For each method, the top row shows mean error rates with model selection; the bottom row shows best mean error rates. u/t denotes error rates on unlabeled and test sets. Also recorded is the performance of DA with squared loss (sqr). Datasets: usps2, coil6, pc-mac, eset2

2006

"... In PAGE 7: ... This experimental setup neutralizes any undue advantage a method might receive due to different sensitivities to parameters, class imbalance issues and shortcomings of the model selection protocol. Comparing DA, JTSVM and rTSVM: Table 4 presents a comparison between DA, JTSVM and rTSVM. The baseline results for SVM using only labeled examples are also provided.... In PAGE 7: ... Being a gradient descent technique, rTSVM requires loss functions to be differentiable; the implementation in (Chapelle & Zien, 2005) uses the l2 loss function over labeled examples and an exponential loss function over unlabeled examples. The results in Table 4 for DA and JTSVM were also obtained using the l2 loss. Thus, these methods attempt to minimize very similar objective functions over the same range of parameters.... In PAGE 8: ... Here, too, annealing gives significantly better results. Performance with Squared Loss: In Table 4 we see that results obtained with the squared loss are also highly competitive with other methods on real world semi-supervised tasks. This is not surprising given the success of the regularized least squares algorithm for classification problems.... ..."
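For reference, the two loss functions named in this snippet can be sketched as follows (a minimal illustration with function names of our own choosing, not code from the paper):

```python
# Quadratic (l2) hinge loss and squared loss for a label y in {-1, +1}
# and a real-valued classifier output f.

def l2_hinge(y, f):
    """Quadratic hinge loss: max(0, 1 - y*f)**2. Unlike the l1 hinge,
    it is differentiable everywhere, which gradient methods require."""
    return max(0.0, 1.0 - y * f) ** 2

def squared(y, f):
    """Squared loss: (y - f)**2."""
    return (y - f) ** 2

# A point classified correctly and beyond the margin incurs no l2 hinge
# loss, but still incurs squared loss unless f equals y exactly.
print(l2_hinge(+1, 2.0))  # 0.0
print(squared(+1, 2.0))   # 1.0
```

This differentiability is why the snippet notes that the gradient-descent-based rTSVM needs the l2 rather than the plain hinge loss.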

Cited by 8

### Table 1: Comparison of magnitude allocation M and MSE D = ?=N for a pair of IID Gaussian random variables. L2vq is the loss relative to optimal fixed-rate two-dimensional VQ and is decomposed as the sum of the square loss Lsq, oblongitis loss Lob, and point density loss Lpt. The total loss can be decomposed into the product of point density and cell shape losses. The former isolates the loss caused by the suboptimal point density of Cartesian quantization, while the latter isolates the effect of the suboptimal inertial profile. Since, when optimized, constant width PQ and Cartesian quantization have the same point density and the same MSE, it follows that they incur the same point density loss and the same total loss. Since total loss is the product of point density and cell shape losses, we see that constant width PQ and Cartesian quantization also have the same cell shape loss, even though their inertial profiles are quite different: constant width PQ has a spherically symmetric inertial profile, whereas Cartesian quantization does not. For the important special case of IID Gaussian random variables, M and ? are given in

1998

"... In PAGE 8: ... Since total loss is the product of point density and cell shape losses, we see that constant width PQ and Cartesian quantization also have the same cell shape loss, even though their inertial profiles are quite different: constant width PQ has a spherically symmetric inertial profile, whereas Cartesian quantization does not. For the important special case of IID Gaussian random variables, M and ? are given in Table 1 for various values of the power-law exponent. We do not have a method for optimizing power law PQ over the choice of exponent, but have found by trial-and-error that the best choice for a Gaussian... In PAGE 9: ... Note also that M is largest at an exponent of 0.8 and decreases slowly as the exponent approaches 0. Table 1 also gives the loss of power law PQ relative to optimal two-dimensional VQ. This loss, expressed in dB, is then decomposed into the sum of square loss Lsq, oblongitis loss Lob and point density loss Lpt [18].... ..."
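The two decompositions in these snippets, a product of loss factors in linear scale and a sum of losses in dB, are the same statement, since the logarithm turns products into sums. A minimal numerical check (the loss factors below are made-up illustrative values, not entries from Table 1):

```python
import math

def to_db(ratio):
    """Convert a linear MSE-ratio loss factor to decibels."""
    return 10.0 * math.log10(ratio)

# Hypothetical loss factors in linear scale (illustrative values only):
# square loss, oblongitis loss, and point density loss.
l_sq, l_ob, l_pt = 1.05, 1.10, 1.20

# Total loss is the product of the factors in linear scale ...
total_linear = l_sq * l_ob * l_pt

# ... and the sum of the individual losses when expressed in dB.
total_db = to_db(l_sq) + to_db(l_ob) + to_db(l_pt)

assert abs(to_db(total_linear) - total_db) < 1e-9
```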

Cited by 2

### Table 2: Squared Loss Comparison

2006

Cited by 4

### Table 2. Loss rate (%)

"... In PAGE 6: ... Table 2 presents the median loss ratio as calculated by the receivers. One can notice that loss ratio increases along the ALM paths, as the number of ALM overlay links increases.... ..."
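The growth of loss ratio with path length can be illustrated under a simple independence assumption (a sketch only; the paper's measured loss ratios need not follow this model exactly):

```python
def end_to_end_loss(per_link_losses):
    """Loss ratio over a chain of overlay links, assuming each link
    drops packets independently with its own loss probability."""
    survive = 1.0
    for p in per_link_losses:
        survive *= (1.0 - p)  # probability of surviving this link too
    return 1.0 - survive

# Loss accumulates as the number of overlay links traversed grows:
print(end_to_end_loss([0.01]))              # ~0.01
print(end_to_end_loss([0.01, 0.01, 0.01]))  # ~0.0297
```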

Cited by 1

### Table 2. Summary of Squaring Loss Formulas

"... In PAGE 13: ... They should: (1) provide full-wavelength L2 carrier phase measurements without half wavelength ambiguities; and (2) minimize the squaring loss as much as possible. Table 2 summarizes the squaring loss formulas developed for the various approaches discussed in this paper, and indicates whether full or half wavelength carrier phase measurement is available. The squaring technique has the largest squaring loss, and the MAP approach has the minimum squaring loss among all approaches.... ..."

### Table 1: Sample complexity for learning with squared loss.

1998

"... In PAGE 2: ... An agnostic learning algorithm can also be used to learn the best approximation to the target function when the target function is not in the class. Table 1 shows some of the known results for learning with squared loss. (The technical conditions such as pseudo-dimension and covering number are described in Section 2.... ..."

Cited by 31
