### Table 1. Parameters for butterfly, mesh and hypercube topology.

2005

"... In PAGE 4: ... Previous work [10] estimated the required number of nodes and ports for a petabyte-scale storage system using butterfly, mesh and hypercube topology. We list the parameters set up in our simulator in Table 1. The butterfly network is a hierarchical structure with one level of routers and three levels of switches, with 128 switches per level.... ..."

Cited by 2
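The snippet describes a butterfly fabric with one level of routers and three levels of 128 switches each. A minimal sketch of tallying switches and ports under those stated counts; the per-switch radix of 8 is a purely hypothetical illustration, not a value from the source:

```python
# Count switches and switch ports in a butterfly-like fabric.
# Structure (3 switch levels x 128 switches/level) is from the snippet;
# the radix (8 ports per switch) is a hypothetical placeholder.
LEVELS = 3
SWITCHES_PER_LEVEL = 128
RADIX = 8  # hypothetical ports per switch

total_switches = LEVELS * SWITCHES_PER_LEVEL   # 384
total_ports = total_switches * RADIX           # 3072
print(total_switches, total_ports)
```

The router level and endpoint ports would be added on top of this in a real sizing exercise, as the cited work [10] presumably does.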

### Table 1: Feedback vertex set state tables computed for Example 21.

1996

"... In PAGE 16: ... This procedure essentially entails extending the parks with the current operator and reducing them by the rules given in Lemma 18, and combining park sets if the two F_m(S)'s are equal in cases 1 and 2. Example 21: Table 1 shows values of F_m(S) for the application of the feedback vertex set algorithm to the 2-parse given in Example 9 on page 7. As can be seen by examining the graph in Example 9, a minimum feedback vertex set has cardinality 2, which corresponds to the minimum value in the last column.... ..."

Cited by 3
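The snippet reports a minimum feedback vertex set of cardinality 2 for its example graph. As the source does not give that graph, here is a generic brute-force sketch of the underlying notion (smallest vertex set whose removal leaves the undirected graph acyclic), run on a hypothetical graph of two disjoint triangles, which also needs exactly two removals. This is not the paper's dynamic-programming algorithm over 2-parses, just an illustration of the quantity being computed:

```python
from itertools import combinations

def acyclic_after_removal(n, edges, removed):
    """True if deleting `removed` vertices leaves a forest (union-find cycle check)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        if u in removed or v in removed:
            continue
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # this edge closes a cycle
        parent[ru] = rv
    return True

def min_feedback_vertex_set(n, edges):
    """Smallest vertex set whose removal makes the graph acyclic (brute force)."""
    for k in range(n + 1):
        for cand in combinations(range(n), k):
            if acyclic_after_removal(n, edges, set(cand)):
                return set(cand)

# Hypothetical example: two disjoint triangles require one removal each.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
fvs = min_feedback_vertex_set(6, edges)
print(len(fvs))  # → 2
```

Brute force is exponential; the appeal of the cited state-table approach is precisely that it avoids this enumeration on graphs of bounded treewidth.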

### Table 4.1 Lower Bounds on Minimum Test Set Size

### Table 1: Venus head mesh PSNR in dB, comparing the EQ mesh coder and the zerotree (ZT) mesh coder using the lifted and unlifted versions of the butterfly wavelet (BW) transform.

"... In PAGE 8: ... Figure 10 shows the R-D curves for the EQ mesh coder and the zerotree mesh coder and compares the distortion with the normal remeshing error, which is the error between the original irregular mesh and the original normal mesh. PSNR values as a function of the bits-per-vertex are given in Table 1. We obtained similar results for the horse and rabbit normal mesh datasets.... ..."


### Table 4. Improvement in classification accuracy using majority-voting ensembles. Optimal unweighted majority-voting ensemble classifiers were formed by selecting from all 8 classifiers for each feature set listed, and the average classification accuracy for 10-fold cross-validation was calculated. A paired t-test was performed for each ensemble classifier against the previous neural network classifier for each feature subset (SLF15 and SLF16 were compared against the previous classifiers for SLF8 and SLF13, respectively). Each ensemble classifier was also compared against the optimal classifier for each feature set listed in Table 2 (SLF15 and SLF16 were compared with the individual optimal classifiers for SLF8 and SLF13, respectively).

"... In PAGE 10: ... Therefore, we constructed an unweighted majority-voting ensemble of all possible combinations of the 8 classifiers for each feature set. Table 4 shows the optimal majority-voting classifiers found for each feature set. The accuracies on both the SLF8 and SLF13 feature sets were improved by 1% by combining three classifiers for each: exponential-rbf-kernel SVM, AdaBoost, and Bagging for SLF8; rbf-kernel SVM, AdaBoost, and Mixtures-of-Experts for SLF13.... In PAGE 12: ...11 SVM, exponential-rbf-kernel SVM, polynomial-kernel SVM, and AdaBoost for SLF16, and rbf-kernel SVM, exponential-rbf-kernel SVM, and polynomial-kernel SVM for SLF15 (Table 4). We achieved a 92.... In PAGE 12: ... The benefits of including the new texture features can be represented by a 2% improvement on classifying 2D protein fluorescence microscope images both with and without DNA features. Table 4 also showed that the accuracy upper bounds for SLF16 and SLF15 are higher than those of the SLF13 and SLF8 feature sets, respectively. To gain insight into the basis for the improvement, we compared the distributions in the two feature spaces of those images that were misclassified by the neural network classifier using SLF13 but were correctly classified by the ensemble classifier using SLF16.... In PAGE 13: ... Furthermore, the relatively independent errors (Table 3) among the classifiers of a majority-voting ensemble contribute to a more robust prediction. For instance, linear-kernel SVM, one of the five classifiers in the best performing ensemble classifier of SLF16 (Table 4), predicted the image of the transferrin receptor pattern in Figure 6 as tubulin, while all other classifiers in the ensemble made the accurate prediction. This error would not be avoided if the linear-kernel SVM were selected as the only classifier.... In PAGE 14: ... Firstly, different image set sizes were tested for the six feature sets. Figure 8A shows the average performance of the majority-voting classifier for each feature set (Table 4) over 1000 random trials of image sets drawn from each class in the test set. The dominant predicted class in an image set was taken as the output, while a random choice was made if several classes tied.... ..."
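The unweighted majority vote with random tie-breaking described in the snippet can be sketched in a few lines; `majority_vote` is a hypothetical helper, and the class labels are borrowed from the snippet's transferrin/tubulin example:

```python
from collections import Counter
import random

def majority_vote(predictions, rng=random.Random(0)):
    """Unweighted majority vote over one prediction per classifier.
    Ties are broken by random choice, as the snippet describes for
    the image-set experiments."""
    counts = Counter(predictions)
    top = max(counts.values())
    winners = [label for label, c in counts.items() if c == top]
    return winners[0] if len(winners) == 1 else rng.choice(winners)

# One classifier errs (as the linear-kernel SVM did in the snippet's
# example), but the ensemble still outputs the correct label.
print(majority_vote(["tubulin", "transferrin", "transferrin"]))  # → transferrin
```

This illustrates the robustness argument in the snippet: a single classifier's error is outvoted as long as the remaining classifiers' errors are sufficiently independent.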

### Table 1: Test meshes.

1999

"... In PAGE 17: ... The test meshes have been chosen to be a representative sample of medium to large scale real-life problems and include both 2D and 3D examples of nodal graphs (where the mesh nodes are partitioned) and dual graphs (where the mesh elements are partitioned). Table 1 gives a list of the meshes and their sizes; since none of the graphs are weighted, the number of vertices in V is the same as the total vertex weight |V|, and similarly for the edges E. Note that t60k-full is a combination of the t60k nodal graph and t60k dual graph, with the addition of edges between vertices from t60k-dual which represent mesh elements and the vertices from t60k-nodal which represent their nodes.... In PAGE 21: ... It is of interest to ask what impact the initial distribution has on the outcome of the final partition. In Table 9 we compare four different initial distribution schemes for the two example meshes chosen from the test set in Table 1. The cyclic distribution assigns vertex v to processor p if v modulo P = p, i.... ..."

Cited by 40
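The cyclic initial distribution quoted in the snippet (vertex v goes to processor v mod P) is simple to sketch; `cyclic_distribution` is a hypothetical helper name, not from the source:

```python
def cyclic_distribution(num_vertices, num_procs):
    """Cyclic initial distribution: vertex v is assigned to processor
    v mod P, as described in the snippet."""
    return [v % num_procs for v in range(num_vertices)]

# 8 vertices dealt round-robin across 3 processors.
print(cyclic_distribution(8, 3))  # → [0, 1, 2, 0, 1, 2, 0, 1]
```

Such a distribution scatters neighbouring vertices across processors, which is presumably why the paper compares it against other initial-distribution schemes before partitioning.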

### Table 7.1: Parameters for butterfly, mesh and hypercube topology. (Columns: parameter, butterfly, mesh, hypercube.)

2005

### Table 1: Test meshes.

1999

"... In PAGE 14: ... The test meshes have been chosen to be a representative sample of medium to large scale real-life problems and include both 2D and 3D examples of nodal graphs (where the mesh nodes are partitioned) and dual graphs (where the mesh elements are partitioned). Table 1 gives a list of the meshes and their sizes; since none of the graphs are weighted, the number of vertices in V is the same as the total vertex weight |V|, and similarly for the edges E. Note that t60k-f is a combination of the t60k nodal graph and t60k dual graph, with the addition of edges between vertices from t60k-d which represent mesh elements and the vertices from t60k-n which represent their nodes.... In PAGE 16: ... It is of interest to ask, therefore, whether this is possible and indeed what impact the initial distribution has on the outcome of the final partition. In Table 4 we compare four different initial distribution schemes for the two example meshes chosen from the test set in Table 1. The cyclic distribution assigns vertex v to processor p if v modulo P = p, i.... ..."

Cited by 1