### Table 6: Complexity of the Bayesian network classifiers and C4.5

Naive Bayes: 16 nodes and 15 arcs

2001

"... In PAGE 11: ... Looking at the classification performance, we also investigated the complexity of the generated classification models because, from a marketing viewpoint, easy-to-understand, parsimonious models are to be preferred. Table 6 presents the complexity of the generated Bayesian network and C4.5(rules) classifiers. ..."

### Table 1 Test set Se and Sp of the neural network model in classifying cardiac beats for various training algorithms and number of units in the hidden layer

"... In PAGE 9: ... Table 1 displays the experimental results obtained from the use of various training algorithms and different numbers of units in the hidden layer of the neural network. The Bayesian regularisation method, described in the previous section, was found to be the most effective. ..."
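Sensitivity (Se) and specificity (Sp), the two quantities tabulated here, are simple ratios of test-set confusion counts. A minimal sketch (the function name and the counts below are illustrative, not taken from the paper):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Return (Se, Sp) from confusion-matrix counts.

    Se = TP / (TP + FN): fraction of true positive beats detected.
    Sp = TN / (TN + FP): fraction of true negative beats correctly rejected.
    """
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    return se, sp

# Hypothetical test set: 90 of 100 abnormal beats detected,
# 95 of 100 normal beats correctly rejected.
se, sp = sensitivity_specificity(tp=90, fn=10, tn=95, fp=5)
print(se, sp)  # 0.9 0.95
```

Reporting both numbers matters because a classifier can trade one for the other by shifting its decision threshold.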

### Table 5. Size of the Sample of Compounds Classified to Be Active by Each Method as a Percentage of the Whole Test Set for Each Target

2005

"... In PAGE 4: ... Although the study of structure-activity relationships is not within the scope of this paper, we speculate that ligands of DH and AE contain a subset of compounds with particular features, such as steroids, which render them more easily distinguishable from the bulk of nonactives. Because, in this study, the sample size is not kept fixed but allowed to vary according to the classification, high enrichment factors can be achieved with a high number of false negatives if the sample size is small (Table 5). For example, TV, yielding high enrichment factors, predicts in many cases a smaller number of compounds to be active than do other methods (Table 5). For this reason, recall values were also calculated, which give the percentage of all actives which have been retrieved (Figure 1b). ..."
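The snippet's point, that enrichment can look impressive while recall stays poor when the selected sample is small, follows directly from the two definitions. A sketch with hypothetical counts (none of these numbers come from the study):

```python
def enrichment_and_recall(n_active_sel, n_sel, n_active_total, n_total):
    """Enrichment factor and recall for a virtual-screening selection.

    EF = (hit rate in the selected sample) / (hit rate in the whole set)
    recall = fraction of all actives that were retrieved
    """
    ef = (n_active_sel / n_sel) / (n_active_total / n_total)
    recall = n_active_sel / n_active_total
    return ef, recall

# A tiny selection: 5 actives among 10 picks, from a set of
# 10,000 compounds containing 100 actives in total.
ef, rec = enrichment_and_recall(5, 10, 100, 10000)
# 50x enrichment, yet only 5% of the actives were recovered.
```

This is exactly the behaviour attributed to TV above: a small, confident sample inflates EF while leaving most actives behind, which is why recall is reported alongside it.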

### Table 2: Bayesian model of gesture data

2005

"... In PAGE 10: ... It is possible that these gestures are more relevant to sentence segmentation, reducing the performance of the gesture model on monologues. Table 2 shows a Bayesian gesture model that was learned from a subset of our corpus. Without the help of a language model, this model classifies sentence boundaries... ..."

Cited by 1

### Table 2: EM algorithm for the hit-miss mixture model.

2005

"... In PAGE 65: ...3 indicate that case-based precision estimates may allow for more robust decision rules when the loss function is asymmetrical. Table 2 shows that, over a rather large domain in the UCI mushrooms data set (five classifiers trained on different portions of the database and each applied to 100 cases), the Bayesian-bootstrap-based decision rule has better specificity and lower average loss for variable misclassification costs. A plausible explanation for this is the tendency of the naive Bayes classifier to probability overshoot, i.e. ..."
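The table's EM algorithm is specific to the hit-miss mixture model, which the snippet does not spell out. As a stand-in, the same E-step/M-step structure for a generic 1-D two-component Gaussian mixture looks like this (a sketch of the general recipe, not the paper's formulation):

```python
import math
import random

def em_two_gaussians(xs, iters=50):
    """EM for a 1-D mixture of two Gaussians (generic sketch).

    Returns mixing weights pi, means mu, and variances var.
    """
    mu = [min(xs), max(xs)]          # spread-out initial means
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in xs:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in (0, 1)]
            s = w[0] + w[1]
            resp.append([w[0] / s, w[1] / s])
        # M-step: re-estimate weights, means, and variances.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return pi, mu, var

random.seed(0)
data = ([random.gauss(0, 1) for _ in range(200)]
        + [random.gauss(5, 1) for _ in range(200)])
pi, mu, var = em_two_gaussians(data)   # mu converges near 0 and 5
```

The hit-miss model would replace the Gaussian densities in the E-step with its own component likelihoods; the alternation itself is unchanged.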

### Table 1: Dependency probability representations in a Bayesian network

2003

"... In PAGE 2: ... 3 Conditional Probability Representation. Before presenting the augmentation algorithms allowing mixed-mode data, it is necessary to investigate how to model the dependency between two variables with arbitrary types. Table 1 summarizes the probability representation models used in this work. The methods in [1, 5] are referred to in this work. ..."

### Table 3. Bayesian combination methods

"... In PAGE 4: ... The reject results of a classifier were used in finding the optimal product set. The five classifiers, shown in Table 2, were evaluated by the Bayesian combination methods abbreviated as in Table 3 and by the BKS method. From Figure 2, the second-order dependency provides higher performance than the first-order dependency; however, the third-order dependency does not provide higher performance than the second-order dependency in all groups. ..."
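Under the first-order dependency assumption mentioned in the snippet, Bayesian combination reduces to multiplying per-classifier likelihoods into the class prior and renormalizing; the higher-order variants instead model joint dependencies between classifier outputs. A sketch of the first-order case only (function name and numbers are illustrative, not from the paper):

```python
def naive_bayes_combine(prior, likelihoods):
    """First-order (independence) Bayesian combination of classifiers.

    prior: dict class -> P(class)
    likelihoods: one dict per classifier, class -> P(classifier output | class)
    Returns the normalized posterior over classes.
    """
    classes = list(prior)
    post = {c: prior[c] for c in classes}
    for lik in likelihoods:          # independence: likelihoods multiply
        for c in classes:
            post[c] *= lik[c]
    z = sum(post.values())
    return {c: post[c] / z for c in classes}

# Two classifiers both lean toward class 'A'.
post = naive_bayes_combine({'A': 0.5, 'B': 0.5},
                           [{'A': 0.8, 'B': 0.4},
                            {'A': 0.6, 'B': 0.3}])
```

Second- and third-order methods replace the per-classifier factors with joint tables over pairs or triples of classifier outputs, which is why their storage and estimation cost grows while (per the snippet) the accuracy gain levels off after second order.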

### Table 1: Matrix of finite element analyses for off-center cracks (five runs per model)

"... In PAGE 7: ... pipe in Fig. 2 with Rm = 50.8 mm (2 in.) and t = 5.08 mm (0.2 in.). A matrix of such analyses is defined in Table 1 for various combinations of the crack size, crack orientation, and material strain hardening exponent: θ/π, ψ, and n. It involves 21 different finite element meshes with θ/π = 1/16, 1/8, and 1/4 and ψ = 0, 15, 30, 45, 60, 75, and 90°. In this paper, a crack will be denoted as small, intermediate, and large when θ/π = 1/16, 1/8, and 1/4, respectively. For each mesh, five analyses were performed using five different hardening exponents (see Table 1). For the material properties, the following values were used: E = 207 GPa, ν = 0.3, σ0 = 344.8 MPa, and α = 0 for n = 1 and α = 1 for n > 1. These values, in addition to the ones given in Table 1, provide complete characterization of the pipe material properties according to Eqs. (1) and (3). ... In PAGE 9: ... 4. Results and discussions. According to Table 1, 21 finite element meshes similar to the ones in Fig. 4 or Fig. ... ..."
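The quoted material constants (E, ν, σ0, α, and the hardening exponent n) match the parameters of a Ramberg-Osgood power-law hardening model, which is presumably what Eqs. (1) and (3) specify; assuming the standard form (an assumption, since the equations themselves are not in the snippet):

```python
def ramberg_osgood_strain(sigma, E=207e3, sigma0=344.8, alpha=1.0, n=5.0):
    """Total strain under the standard Ramberg-Osgood law (assumed form):

        eps = sigma/E + alpha * (sigma0/E) * (sigma/sigma0)**n

    Stresses and E in MPa. With alpha = 0 (the snippet's n = 1 case)
    the law reduces to linear elasticity, eps = sigma/E.
    The default n = 5.0 is a placeholder, not one of the paper's values.
    """
    return sigma / E + alpha * (sigma0 / E) * (sigma / sigma0) ** n

# Linear-elastic check: alpha = 0 recovers Hooke's law.
eps_elastic = ramberg_osgood_strain(344.8, alpha=0.0)
# At sigma = sigma0 with alpha = 1, plastic strain equals elastic strain.
eps_yield = ramberg_osgood_strain(344.8)
```

Each of the five runs per mesh in Table 1 would then correspond to one choice of n in this law, with E, ν, σ0, and α held fixed as quoted.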

### Table 2. Clustering results after fitting various numbers of components (k) and for each dietary quality group comparison

"... In PAGE 8: ... Hence, LOW is clearly different from HIGH and MED, which just differ in degree. Model-Based Clustering and ANOVA Results. A summary of the mixture model fitting results is given in Table 2, where the values for the various model selection criteria (AIC, BIC, and LRT) are presented for each model ranging from one to five components. For all three comparisons, there was a dramatic decrease in both AIC and BIC, along with an increase in the log-likelihood, when moving from a mixture model with one component to a model with two components. ..."
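The AIC/BIC comparison described here penalizes the fitted log-likelihood by model size: AIC = 2p − 2 ln L and BIC = p ln n − 2 ln L for p free parameters and n observations, so the "dramatic decrease" from one to two components means the likelihood gain outweighs the penalty. A sketch with hypothetical fit values (not the study's numbers):

```python
import math

def aic(loglik, n_params):
    """Akaike information criterion: lower is better."""
    return 2 * n_params - 2 * loglik

def bic(loglik, n_params, n_obs):
    """Bayesian information criterion: lower is better,
    with a stronger penalty than AIC once n_obs > e**2 ~ 7.4."""
    return n_params * math.log(n_obs) - 2 * loglik

# Hypothetical mixture fits: k components -> (log-likelihood, free parameters).
fits = {1: (-540.0, 2), 2: (-450.0, 5), 3: (-448.0, 8)}
n_obs = 200

best_bic = min(fits, key=lambda k: bic(*fits[k], n_obs=n_obs))
# The big likelihood jump from k=1 to k=2 beats the penalty;
# the tiny further gain at k=3 does not, so k=2 is selected.
```

The same structure explains why AIC and BIC can disagree near the boundary: BIC's ln n penalty favours fewer components as the sample grows.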