### Table 2. Performance of Bayesian classifiers based on various similarity networks (SN) compared with 1NN classifier, sum classifier and SVM classifier

"... In PAGE 7: ...works and competing techniques is depicted in Table 2. This table clearly indicates that our integrated Bayesian approach outperforms all the competing techniques by achieving correct classification accuracies of 96.... ..."

### Table 8: The predictive accuracy for subcellular locations of single Bayesian classifier (BC) and hierarchical ensemble of Bayesian classifiers (HensBC) for Data_Apoptosis.

2006

"... In PAGE 4: ...9%. Table 8 presents the results with Apoptosis proteins. Even with the single Markov-chain-based Bayesian classifier we reached an overall accuracy of 85.... ..."
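The snippet describes scoring each protein sequence under class-conditional Markov chains and choosing the most probable subcellular location. A minimal sketch of that idea in plain Python — the toy sequences, class names, and the helper names `fit_markov` and `classify` are hypothetical, not taken from the paper:

```python
import math
from collections import Counter, defaultdict

def fit_markov(seqs, alpha=1.0, alphabet="ACDEFGHIKLMNPQRSTVWY"):
    """Estimate Laplace-smoothed first-order transition log-probabilities."""
    trans = defaultdict(Counter)
    for s in seqs:
        for a, b in zip(s, s[1:]):
            trans[a][b] += 1
    logp = {}
    for a in alphabet:
        total = sum(trans[a].values()) + alpha * len(alphabet)
        for b in alphabet:
            logp[(a, b)] = math.log((trans[a][b] + alpha) / total)
    return logp

def classify(models, priors, seq):
    """Pick the class whose Markov chain assigns the sequence highest probability."""
    def score(c):
        return math.log(priors[c]) + sum(models[c][(a, b)]
                                         for a, b in zip(seq, seq[1:]))
    return max(models, key=score)

# Hypothetical toy training sequences for two locations.
train = {
    "membrane": ["LLVLLAVLL", "VLLLAVLLV"],
    "nuclear":  ["KRKRKKRKR", "RKKRKRKKR"],
}
priors = {c: 0.5 for c in train}
models = {c: fit_markov(seqs) for c, seqs in train.items()}
print(classify(models, priors, "LLAVLLVLL"))  # → membrane
```

A real system would use far longer sequences and possibly higher-order chains; this only illustrates the Bayes-rule comparison of class-conditional chain likelihoods.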

### Table 4. A comparison of two approaches to extending the Bayesian classifier.

"... In PAGE 22: ... This causes the Cartesian product of two discretized attributes to have 25 values, instead of 100, and leads to substantially more reliable probability estimates, given that the training set sizes are in the hundreds. The domains and training set sizes appear in the first two columns of Table 4. The remaining columns display the accuracy of the Bayesian classifier and extensions, averaged over 24 paired trials, and found by using an independent test set consisting of all examples not in the training set. In Table 4, Accuracy Once shows results for the backward stepwise joining algorithm of Pazzani (1996), forming at most one Cartesian product as determined by the highest accuracy using leave-one-out cross-validation on the training set; Entropy Once is the same... ..."
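The joining step the snippet describes replaces two discretized attributes with a single attribute ranging over the Cartesian product of their values, so a naive Bayes model can capture their interaction. A minimal sketch, assuming 5-bin discretization; the helper name `join_attributes` and the toy columns are hypothetical:

```python
from itertools import product

def join_attributes(col_a, col_b):
    """Replace two discretized attribute columns with one joint attribute.

    Each example's pair of values becomes a single joint value, letting a
    Bayesian classifier model the dependence between the two attributes
    (at the cost of more parameters to estimate from the same data).
    """
    return [(a, b) for a, b in zip(col_a, col_b)]

# Two attributes discretized into 5 bins each (0..4): the joined
# attribute can take at most 5 * 5 = 25 distinct values rather than
# the 100 that 10-bin discretization would give, which keeps the
# probability estimates reliable when training sets are in the hundreds.
col_a = [0, 1, 2, 3, 4, 0, 1]
col_b = [4, 3, 2, 1, 0, 4, 3]
joined = join_attributes(col_a, col_b)
print(len(set(product(range(5), range(5)))))  # → 25 possible joint values
```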

### Table 3: Accuracy results using the Naive Bayesian classifier with feature selection.

1998

"... In PAGE 7: ... Since we seek both to measure the performance of Naive Bayes on an absolute scale, as well as the relative effects of feature selection, we run Naive Bayes several times on each data set, using a different number of features in each case. We employ 10-fold cross-validation [23] for evaluation and report both the average classification accuracy and standard deviation over these 10 runs for each entry in Table 3. We also provide the overall average over all five data sets for each feature selection regime.... ..."
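The evaluation protocol described above — 10-fold cross-validation reporting mean accuracy and standard deviation — can be sketched in plain Python. This is a toy categorical naive Bayes run on synthetic binary data, not the paper's implementation; all names and data here are hypothetical:

```python
import math
import random
from collections import Counter
from statistics import mean, stdev

def nb_train(X, y, alpha=1.0):
    """Fit a categorical naive Bayes model with Laplace smoothing."""
    classes = Counter(y)
    # counts[c][j][v] = how often attribute j takes value v in class c
    counts = {c: [Counter() for _ in range(len(X[0]))] for c in classes}
    for xi, yi in zip(X, y):
        for j, v in enumerate(xi):
            counts[yi][j][v] += 1
    return classes, counts, alpha

def nb_predict(model, xi, n_values=2):
    classes, counts, alpha = model
    total = sum(classes.values())
    best, best_score = None, float("-inf")
    for c, nc in classes.items():
        score = math.log(nc / total)  # log prior
        for j, v in enumerate(xi):    # plus smoothed log likelihoods
            score += math.log((counts[c][j][v] + alpha) / (nc + alpha * n_values))
        if score > best_score:
            best, best_score = c, score
    return best

def ten_fold_accuracy(X, y, seed=0):
    """Return the per-fold accuracies of a shuffled 10-fold split."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::10] for i in range(10)]
    accs = []
    for fold in folds:
        held_out = set(fold)
        train_idx = [i for i in idx if i not in held_out]
        model = nb_train([X[i] for i in train_idx], [y[i] for i in train_idx])
        correct = sum(nb_predict(model, X[i]) == y[i] for i in fold)
        accs.append(correct / len(fold))
    return accs

# Toy data: the class equals the first attribute; the second is noise.
rng = random.Random(1)
X = [[rng.randint(0, 1), rng.randint(0, 1)] for _ in range(200)]
y = [xi[0] for xi in X]
accs = ten_fold_accuracy(X, y)
print(f"mean accuracy {mean(accs):.2f} +/- {stdev(accs):.2f}")
```

Reporting the mean and standard deviation over the 10 folds, as the paper does, conveys both the absolute performance and its variability across splits.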

Cited by 39

### Table 2. Performance of Bayesian classifiers based on various similarity networks (SN) compared to 1NN classifier, sum classifier and SVM classifier. The values represent the percentage of correct classifications. Similarity networks with Bayesian integration clearly outperform the competing techniques.

### Table 14-1: CyberGrasp sensors.

"... In PAGE 3: ... DATA ACQUISITION The development of haptic devices is in its infancy. We have focused our research and experiments on the CyberGrasp exoskeletal interface and accompanying CyberGlove, which consists of 33 sensors (Table 14-1). We use the CyberGrasp SDK to write handlers to record sensor data for our experiments whenever a sampling interrupt occurs.... In PAGE 3: ... We term each of these 10 letters a sign. The 22 sensor values (excluding sensors 23 to 33 in Table 14-1) are recorded in a log file for each sign made by a subject, termed a session. Each session log file contains thousands of rows of sensor values sampled at some frequency, which depends on the sampling technique used.... In PAGE 14: ... Table 14-2 (Overall classification error) illustrates a comparison among the techniques.... In PAGE 14: ... it recognizes some signs (i.e., C, G, and H) quite well (see Table 14-3).... In PAGE 15: ... since the Bayesian classifier decides based on the probability distribution of the input samples, it tends to perform quite well overall despite intuitive variations in performance of signs by different subjects (Table 14-3: Best recognition technique for each sign; Table 14-4: Nearest neighbors for each sign in multidimensional space).... In PAGE 15: ... As illustrated in Table 14-8, we see that the best classifier for a sign is not necessarily the one that confuses the sign with the fewest other signs. The decision to choose a classifier from given classifiers becomes very much application-dependent.... In PAGE 16: ... with an appropriate I/O design we can achieve an acceptable performance (Table 14-5: C4.5 precision and recall; Table 14-6: Bayesian precision and recall).... In PAGE 17: ... Table 14-7: Neural network precision (standard deviation) and recall; Table 14-8: Number of other signs with which each sign is confused for different classifiers.... ..."
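The Bayesian classifier the chapter evaluates decides by the probability distribution of the input sensor samples. A hedged sketch of one common variant, a Gaussian naive Bayes over per-sign sensor statistics — the sensor vectors, sign labels, and helper names below are invented for illustration, not the chapter's actual data:

```python
import math
from statistics import mean, pstdev

def fit_gaussian_nb(samples):
    """samples: {sign: [sensor_vector, ...]} -> sign -> (prior, per-sensor (mean, std))."""
    model = {}
    total = sum(len(vecs) for vecs in samples.values())
    for sign, vecs in samples.items():
        stats = []
        for j in range(len(vecs[0])):
            col = [v[j] for v in vecs]
            stats.append((mean(col), pstdev(col) or 1e-6))  # guard zero variance
        model[sign] = (len(vecs) / total, stats)
    return model

def classify(model, x):
    """Pick the sign maximizing log prior plus summed log Gaussian densities."""
    def log_pdf(v, mu, sd):
        return -0.5 * ((v - mu) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))
    return max(model, key=lambda s: math.log(model[s][0]) +
               sum(log_pdf(v, mu, sd)
                   for v, (mu, sd) in zip(x, model[s][1])))

# Hypothetical 3-sensor glove readings for two signs.
train = {
    "C": [[0.9, 0.1, 0.5], [0.8, 0.2, 0.4], [0.85, 0.15, 0.45]],
    "G": [[0.1, 0.9, 0.5], [0.2, 0.8, 0.6], [0.15, 0.85, 0.55]],
}
model = fit_gaussian_nb(train)
print(classify(model, [0.82, 0.18, 0.5]))  # → C
```

Because the decision aggregates evidence from every sensor, such a classifier can stay robust overall even when individual subjects vary in how they perform a sign, which matches the behavior the chapter reports.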

### Table 5 Validation Statistics by Sensor

2001

"... In PAGE 11: ... In 91% of the cases, the wake behavior was safely bound or not operationally significant. The same buffer categories are then broken out by sensor in Table 5. A general observation of interest is that the pulsed lidar had by far the highest percentage of valid wake files.... In PAGE 11: ... Figures 3 and 4 show the frequency of various magnitudes of negative buffer times for each sensor. As shown in Table 5 and Figure 4, all hard exceedances were measured with the CW lidar. In these cases, the predicted demise was less than Taumin, and 75% of the observed demise times exceeded Taumin by less than 10 sec.... ..."