### Table 1: Examples of Carleman operators and their associated reproducing kernels. Note that functions …
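For context, a minimal sketch of the standard relationship this caption refers to, not taken from the paper itself: a Carleman operator $T$ on $L^2(\mu)$ acts by taking inner products against a family of functions $h(x)$, and its range carries a reproducing kernel built from that same family (the symbols $T$, $h$, $K$ here are generic, not the paper's notation):

```latex
(Tf)(x) = \langle f,\, h(x) \rangle_{L^2(\mu)},
\qquad
K(x, y) = \langle h(y),\, h(x) \rangle_{L^2(\mu)} .
```

Under the induced norm, the image of $T$ is the reproducing kernel Hilbert space with kernel $K$, which is why each choice of Carleman operator determines an associated reproducing kernel.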

2003

Cited by 4

### Table 1: Complexity results for regular and strongly regular evolution frames.

"... In PAGE 39: ...f the belief set. Since the evolution frame is contracting and bounded, and since the membership tests and the associated functions are computable in time polynomial in the size of the frame, deciding TEMPEVO in this frame is EXPSPACE-hard. The complexity results obtained so far are summarized in Table 1. Further results can be derived by imposing additional meaningful constraints on the problem instances.... ..."

### Table 1: True generalization error for Gaussian, Wavelet, and Sin/Sinc kernels with Regularization Networks and Support Vector Regression for the best hyperparameters.

"... In PAGE 22: ... This is repeated for a hundred different datasets, and the mean and standard deviation of the generalization error are thus obtained. Table 1 depicts the true generalization error evaluated on 200 datapoints for the two learning machines and the different kernels using the best hyperparameter settings. Analysis of this table leads to the following observation: the different kernels and learning machines give comparable results (all averages are within one standard deviation from each other).... In PAGE 23: ... Table 2 summarizes all these trials and describes the performance improvement achieved by different kernels compared to the Gaussian kernel and sin basis functions. From this table, one can note that: exploiting prior knowledge on the function to be approximated immediately leads to a lower generalization error (compare Table 1 and Table 2); and, as one might expect, using strong prior knowledge on the hypothesis space and the related kernel gives considerably higher performance than the Gaussian kernel.... ..."
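The evaluation protocol in this excerpt (repeat over many datasets, then report mean and standard deviation of the test error) can be sketched as follows. This is a hypothetical reconstruction, not the paper's code: kernel ridge regression stands in for the regularization network, the sinc target and dataset sizes are illustrative, and `gaussian_kernel` and `kernel_ridge_error` are names invented here.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=0.5):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_ridge_error(kernel, X, y, X_test, y_test, lam=1e-2):
    """Fit kernel ridge regression on (X, y); return mean squared test error."""
    K = kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    y_pred = kernel(X_test, X) @ alpha
    return float(np.mean((y_pred - y_test) ** 2))

rng = np.random.default_rng(0)
errors = []
for _ in range(100):  # a hundred different datasets, as in the excerpt
    X = rng.uniform(-1.0, 1.0, (50, 1))
    y = np.sinc(4 * X[:, 0]) + 0.1 * rng.standard_normal(50)
    X_test = rng.uniform(-1.0, 1.0, (200, 1))  # 200 test datapoints, as in the excerpt
    y_test = np.sinc(4 * X_test[:, 0])
    errors.append(kernel_ridge_error(gaussian_kernel, X, y, X_test, y_test))

print(f"mean = {np.mean(errors):.4f}, std = {np.std(errors):.4f}")
```

Swapping `gaussian_kernel` for a wavelet or sin/sinc kernel in the same loop is what would make the "all averages within one standard deviation" comparison possible.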

### Table 3: The effects of learning frames on the learning environment

### Table C-1: 1 Revolution, 66 points (columns: Hidden, Pop., Gener., Reproduced, Learn/Gen)

### Table 1. Comparison of several implementations of Q-learning on a task of obstacle avoidance. This table is reproduced from [18].

### Table 1. Classification accuracy of the structure, sequence, and joint regularization kernels (seq: sequence kernel; best jr: joint regularization kernel with the best parameterization; Kbest jr: repetitions of the same experiment with best jr).

"... In PAGE 7: ... We report results as averages over all classes and all five repetitions in Table 1. Furthermore, as a control experiment, we ran the same classification experiment on all 206 proteins using the sequence kernel and the structure kernel matrix, respectively (Fig.... ..."

### Table 1. Accuracy in the Bongard domain (reproduced from [13])

2004

"... In PAGE 12: ...We ran LogAn-H on several sample sizes. Table 1 summarizes the accuracy of learned expressions as a function of the size of the training set (200 to 3000) when tested on classifying an independent set of 3000 examples. The last column in the table gives the majority class percentage (marked bl for baseline).... ..."
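The majority-class baseline mentioned in the excerpt (the "bl" column) is simply the accuracy of always predicting the most frequent label in the test set. A minimal sketch, with a hypothetical label distribution rather than the paper's Bongard data:

```python
from collections import Counter

def majority_baseline(labels):
    """Accuracy of a classifier that always predicts the most common class."""
    most_common_count = Counter(labels).most_common(1)[0][1]
    return most_common_count / len(labels)

# Hypothetical 3000-example test set: 1800 positive, 1200 negative.
labels = ["pos"] * 1800 + ["neg"] * 1200
print(majority_baseline(labels))  # 0.6
```

Any learned classifier should beat this number for its accuracy figures to be meaningful, which is why the table reports it alongside the learned expressions.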

Cited by 3