### Table 1. Comparisons of accuracies of linear and kernel methods on FERET face database.

2006

"... In PAGE 6: ...2D-KPCA-1; (d) B2D-PCA-2 vs. B2D-KPCA-2. It can be seen from Fig. 1 that, except for KPCA, the three other kernel methods greatly outperform the corresponding linear methods. Table 1 compares the accuracies of the linear and kernel methods on the FERET database, including results of three recent methods for single-image face recognition on the same database. Table 1 also shows that, except for KPCA, the three kernel methods proposed in this paper perform much better than the corresponding linear methods.... In PAGE 6: ... Why, then, do the kernel methods perform better on 2D patterns than on 1D patterns? We suspect one reason may be that the 2D representations in some sense enlarge the sample size by treating each row or column of an image as an individual sample, and hence the image covariance matrix in the kernel-induced feature space is evaluated more accurately than in the 1D representation, where each class has only a single sample.... ..."

Cited by 1
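The sampling argument in the excerpt above — that a 2D representation enlarges the effective sample size by treating each image row as a sample — can be illustrated with a minimal sketch. The data, image size, and the row-based 2D-PCA variant here are assumptions for illustration, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.standard_normal((5, 32, 32))  # 5 hypothetical 32x32 face images

# 1D representation: each image is one vectorized sample, so only
# 5 samples are available to estimate a 1024x1024 covariance matrix.
X_1d = images.reshape(5, -1)
cov_1d = np.cov(X_1d, rowvar=False)   # shape (1024, 1024), rank at most 4

# 2D representation (2D-PCA style): each image row is a sample, giving
# 5 * 32 = 160 samples for a 32x32 image covariance matrix.
rows = images.reshape(-1, 32)
cov_2d = np.cov(rows, rowvar=False)   # shape (32, 32), full rank here

print(cov_1d.shape, np.linalg.matrix_rank(cov_1d))
print(cov_2d.shape, np.linalg.matrix_rank(cov_2d))
```

With one sample per class, the 1D covariance is severely rank-deficient, while the row-based estimate is well conditioned — which is the intuition the excerpt offers for why the kernel methods benefit from 2D patterns.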

### Table 1. Flow of the kernel MaxEnt procedure. There are two possible outputs: the input-space kernel matrix Ky and the kernel-space data set y.

"... In PAGE 3: ... In terms of the eigenvectors of the kernel feature space correlation matrix, we project (xi) onto a subspace spanned by different eigenvectors, which is possibly not the most variance-preserving (remember that the variance of the kernel feature space data set is given by the sum of the largest eigenvalues). The kernel MaxEnt procedure, as described above, is summarized in Table 1. It is important to realize that kernel MaxEnt outputs two quantities, which may be used for further data analysis.... ..."
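The paper's Table 1 gives the full kernel MaxEnt procedure; the eigenprojection step the excerpt describes rests on standard kernel-PCA machinery, which can be sketched as follows. The data, Gaussian kernel choice, bandwidth, and subspace dimension are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))   # 50 hypothetical samples
sigma = 1.0                        # hypothetical kernel width

# Gaussian kernel matrix: K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * sigma**2))

# Centre in feature space: K <- H K H with H = I - (1/n) 11^T
n = K.shape[0]
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H

# Eigendecompose and project phi(x_i) onto the top-d eigenvectors;
# the rows of Y form the kernel-space data set y of the excerpt.
vals, vecs = np.linalg.eigh(Kc)
vals, vecs = vals[::-1], vecs[:, ::-1]     # sort descending
d = 2
Y = vecs[:, :d] * np.sqrt(np.maximum(vals[:d], 0))

print(Y.shape)
```

Choosing the top-d eigenvectors makes the projection variance-preserving in the sense the excerpt mentions: the retained variance is the sum of the d largest eigenvalues.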

### Table 2.1 Overview of commonly used kernel functions and the dimension of the corresponding feature space F with respect to the input space dimension n.

### Table 1. Classification performance in different feature spaces

"... In PAGE 5: ... The performance is evaluated using the log-likelihood of the image sequence in KFD space. Table 1 shows the performance of the proposed method. Although image sequences of untrained views are used for testing, a high recognition rate is obtained.... In PAGE 5: ... In the case of LD space, the performance is evaluated while changing the dimension of the LD space. Table 1 shows the performance of LD space. When LD analysis is used to construct the discriminant space, the best recognition rate is 68%.... In PAGE 5: ... Since LD analysis cannot represent the non-linear variation induced by view changes well, its performance is low. The performance of the original intensity feature space is also shown in Table 1. The performance of the original intensity feature is very low due to the influence of view changes.... ..."

### Table 1. Types of kernel functions

2000

"... In PAGE 3: ... If the two classes are non-linearly separable, the input vectors should be nonlinearly mapped to a high-dimensional feature space by an inner-product kernel function K(x, x_i). Table 1 shows three typical kernel functions [8]. An optimal hyperplane is constructed for separating the data in the high-dimensional feature space.... ..."

Cited by 23
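The table itself is not reproduced in the excerpt, but the "three typical kernel functions" in SVM tables of this kind are usually the polynomial, Gaussian (RBF), and sigmoid kernels. A minimal sketch, with all parameter values hypothetical:

```python
import numpy as np

def polynomial_kernel(x, xi, degree=2):
    """K(x, x_i) = (x . x_i + 1)^d"""
    return (np.dot(x, xi) + 1.0) ** degree

def gaussian_kernel(x, xi, sigma=1.0):
    """K(x, x_i) = exp(-||x - x_i||^2 / (2 sigma^2))"""
    return np.exp(-np.sum((x - xi) ** 2) / (2 * sigma**2))

def sigmoid_kernel(x, xi, beta=1.0, theta=-1.0):
    """K(x, x_i) = tanh(beta * x . x_i + theta)"""
    return np.tanh(beta * np.dot(x, xi) + theta)

x = np.array([1.0, 0.0])
xi = np.array([0.0, 1.0])
print(polynomial_kernel(x, xi))  # (0 + 1)^2 = 1.0
print(gaussian_kernel(x, xi))    # exp(-2 / 2) = exp(-1)
```

Each of these implicitly defines the high-dimensional feature space in which the optimal separating hyperplane of the excerpt is constructed; note the sigmoid kernel is only positive semidefinite for certain parameter choices.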

### Table 1. Types of kernel functions

2000

"... In PAGE 3: ... If the two classes are non-linearly separable, the input vectors should be nonlinearly mapped to a high-dimensional feature space by an inner-product kernel function K(x, x_i). Table 1 shows three typical kernel functions [10]. An optimal hyperplane is constructed for separating the data in the high-dimensional feature space.... ..."

Cited by 31

### TABLE 12 Dependencies of Kernel Components on Nonkernel Components Induced by Kernel-on-Nonkernel-Dependency-Inducing Variables


### Table 2. Success cross-results between kernel-cca & generalised vector space. (Linear kernel for image colour)

2003

"... In PAGE 4: ... This uses as a semantic feature vector the vector of inner products between either a text query and each training label, or a test image and each training image. As shown in Tables 2 and 3, we compare the performance of the kernel-cca algorithm and the generalised vector space model, where in Table 2 we use a linear kernel as above for the image colour, while in Table 3 we use a Gaussian kernel with width set to max distance/20. In both cases the kernel CCA method sharply outperforms GVSM.... ..."

Cited by 11
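The bandwidth heuristic the excerpt mentions for Table 3 — a Gaussian kernel with width set to the maximum pairwise distance divided by 20 — can be sketched as follows. The colour-feature data here is hypothetical, and the exact parameterisation in the paper may differ:

```python
import numpy as np

rng = np.random.default_rng(2)
F = rng.random((10, 3))   # 10 hypothetical colour-feature vectors

# All pairwise squared distances between feature vectors.
sq = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)

# Heuristic from the excerpt: width = max pairwise distance / 20.
sigma = np.sqrt(sq.max()) / 20.0

# Gaussian kernel matrix, used in Table 3 in place of the
# linear colour kernel of Table 2.
K = np.exp(-sq / (2 * sigma**2))

print(K.shape)
```

Tying the width to the data's own distance scale keeps the kernel from saturating (all entries near 1) or collapsing to the identity, without hand-tuning per data set.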