### Table 2. Comparison of the classification performance on the original and the Kalman filtered datasets.

2006

"... In PAGE 5: ... 3 RESULTS AND DISCUSSION We applied Kalman filtering to the previously described datasets and, for a comparative study, the SVM, ANN, 1NN and RF supervised learning methods were evaluated on the full gene set. Table 2 summarizes the Accuracy and ROC scores we obtained. Evidently, the Kalman filter clearly improves the classification results of the ANN, 1NN and RF.... In PAGE 5: ... The principal component analysis (PCA) based filtering consists of removing the non-significant variance components computed using the eigen-decomposition of the covariance matrix of the training set. The PCA results with SVM are shown in Table 2. As opposed to PCA, the Kalman filter retains the dataset in the original gene space and is also a supervised procedure from a classification point of view.... ..."
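The PCA-based filtering described in the excerpt — eigen-decomposing the training-set covariance and discarding the non-significant variance components — can be sketched as follows. This is a generic illustration, not the paper's code; the `var_kept` threshold and function name are assumptions for the example.

```python
import numpy as np

def pca_variance_filter(X_train, X_test, var_kept=0.95):
    """Remove non-significant variance components via the
    eigen-decomposition of the training-set covariance matrix."""
    mu = X_train.mean(axis=0)
    cov = np.cov(X_train - mu, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)          # eigh returns ascending order
    order = np.argsort(vals)[::-1]            # sort eigenpairs descending
    vals, vecs = vals[order], vecs[:, order]
    # smallest k whose components explain at least var_kept of the variance
    k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), var_kept)) + 1
    k = min(k, len(vals))
    W = vecs[:, :k]                           # retained principal directions
    # project onto the significant subspace, then map back to feature space
    filt = lambda X: (X - mu) @ W @ W.T + mu
    return filt(X_train), filt(X_test), k
```

Note that, as the excerpt points out, this filter is unsupervised (it never sees class labels) and works in the projected component space, whereas the Kalman filter approach keeps the data in the original gene space.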

### Table 1: Generalization accuracies obtained with the variational Kalman filter (vkf) and sequential variational inference (svi).

1997

"... In PAGE 6: ... We also use the Pima diabetes data set from [16]. Table 1 compares the generalization accuracies (in fractions) obtained with the variational Kalman filter with those obtained with sequential variational inference. The probability of the null hypothesis, P_null, that both classifiers are equal suggests that only the differences for the Balance scale and the Pima Indian data sets are significant, with either method being better in one case.... ..."

Cited by 2
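The excerpt does not say which test yields P_null; one standard way to compute such a probability for two classifiers evaluated on the same test set is an exact McNemar test on the disagreement counts. The sketch below is an illustration of that technique, not the paper's procedure, and the function name is my own.

```python
from math import comb

def mcnemar_exact(n01, n10):
    """Exact two-sided McNemar test.
    n01: cases classifier A got right and B got wrong,
    n10: cases B got right and A got wrong.
    Returns the probability, under the null hypothesis that both
    classifiers are equally accurate, of a split at least this extreme."""
    n = n01 + n10
    k = min(n01, n10)
    # under the null, each disagreement is a fair coin flip (p = 0.5);
    # sum the binomial tail and double it for a two-sided p-value
    p = sum(comb(n, i) for i in range(k + 1)) * 2 / 2**n
    return min(1.0, p)
```

For example, a 0-vs-10 split of the disagreements gives a small p-value (the two classifiers likely differ), while a 5-vs-5 split gives p = 1.0.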

### Table 1: Reconstruction results for the fixed linear and recursive Kalman filter. The table also shows how the Kalman filter results vary with lag times (see text).

2003

"... In PAGE 5: ... In the interest of simplicity, we consider a single optimal time lag for all the cells, though evidence suggests that individual time lags may provide better results [15]. Using time lags of 0, 70, 140, 210 ms we train the Kalman filter and perform reconstruction (see Table 1). We report the accuracy of the reconstructions with a variety of error measures used in the literature, including the correlation coefficient (r) and the mean squared error (MSE) between the reconstructed and true trajectories.... In PAGE 5: ... From Table 1 we see that the optimal lag is around two time steps (or 140 ms); this lag will be used in the remainder of the experiments and is similar to our previous findings [15], which suggested that the optimal lag was between 50-100 ms. Decoding: At the beginning of the test trial we let the predicted initial condition equal the real initial condition.... In PAGE 5: ... Then the update equations in Section 2 are applied. Some examples of the reconstructed trajectory are shown in Figure 2, while Figure 3 shows the reconstruction of each component of the state variable (position, velocity and acceleration in x and y). From Figure 3 and Table 1 we note that the reconstruction in y is more accurate than in the x direction (the same is true for the fixed linear filter described below); this requires further investigation. Note also that the ground truth velocity and acceleration curves are computed from the position data with simple differencing.... In PAGE 6: ... Compared with Figure 3, we see that the results are visually similar. Table 1, however, shows that the Kalman filter gives a more accurate reconstruction than the linear filter (higher correlation coefficient and lower mean-squared error). While fixed linear filtering is extremely simple, it lacks many of the desirable properties of the Kalman filter.... In PAGE 6: ... In that case we showed that acceleration was redundant and could be removed from the state equation. The data used here is more natural, varied, and rapid, and we find that modeling acceleration improves the prediction of the system state and the accuracy of the reconstruction; Table 1 shows the decrease in accuracy with only position and velocity in the system state (with 140 ms lag). 4 Conclusions We have described a discrete linear Kalman filter that is appropriate for the neural control of 2D cursor motion.... ..."

Cited by 17
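The update equations the excerpt refers to follow the standard discrete linear Kalman filter. Below is a minimal predict/update sketch; in the decoding setting described above, the state would hold position, velocity and acceleration in x and y, and the observation z would be a vector of neural firing rates. The matrices A, H, Q, R here are generic placeholders, not the paper's fitted values.

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of a discrete linear Kalman filter.
    x: state estimate, P: state covariance, z: observation,
    A: state transition, H: observation matrix,
    Q/R: process and measurement noise covariances."""
    # predict: propagate the state and its uncertainty forward
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # update: correct the prediction with the observation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

For the constant-acceleration model discussed in the excerpt, A would encode the kinematics (position integrates velocity, velocity integrates acceleration) and H would be fit from training data mapping kinematic state to firing rates.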

### Table 1 Kalman filtering algorithm

2005

"... In PAGE 13: ... 6 (a), (b), (c) and (d), respectively. Next, we applied the proposed Kalman filter given in Table 1 to the exponential samples and obtained the predicted and smoothed states. In Fig.... ..."
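The smoothed states the excerpt mentions are typically produced by a backward pass over the filtered estimates; the Rauch-Tung-Striebel smoother is the standard choice for a linear model. The sketch below illustrates that generic technique, not the specific algorithm in the paper's Table 1.

```python
import numpy as np

def rts_smoother(xs, Ps, A, Q):
    """Rauch-Tung-Striebel backward pass.
    xs, Ps: lists of filtered state means and covariances,
    A: state transition matrix, Q: process noise covariance.
    Returns the smoothed means and covariances."""
    n = len(xs)
    xs_s, Ps_s = [None] * n, [None] * n
    xs_s[-1], Ps_s[-1] = xs[-1], Ps[-1]      # last smoothed = last filtered
    for t in range(n - 2, -1, -1):
        P_pred = A @ Ps[t] @ A.T + Q
        G = Ps[t] @ A.T @ np.linalg.inv(P_pred)   # smoother gain
        xs_s[t] = xs[t] + G @ (xs_s[t + 1] - A @ xs[t])
        Ps_s[t] = Ps[t] + G @ (Ps_s[t + 1] - P_pred) @ G.T
    return xs_s, Ps_s
```

Unlike the forward filter, the smoother conditions each state on the entire observation sequence, which is why smoothed estimates are generally less noisy than the filtered (predicted/updated) ones.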

### Table 4: Kalman Filter Results

2002

"... In PAGE 6: ...Table 4: Kalman Filter Results We have used 3 special FUs in the Kalman update function and 2 special FUs in the predict state function; the performance is better in the case of the latter because there were more operations that could be mapped to these special FUs than in the former case. As can be observed from Table 4, the number of cycles has come down to less than half in the predict state function, which implies a fairly large performance gain. 6.... ..."

Cited by 6

### Table 3: Kalman Filter AFUs

2002

"... In PAGE 6: ... In all, we introduced 5 MISOs with load/store to handle the various operations. The description of each AFU is shown in Table 3. The semantics are specified in the form of destination as a function of sources, where si represents the ith source of the AFU and di represents the ith destination of the AFU.... ..."

Cited by 6