### Table 6: Timing for Matrices with Well-Separated Eigenvalues

1992

"... In PAGE 25: ... 7 Numerical Tests on CM 5 Our parallel implementation on CM 5 is a great success in sense that the best speedup is achieved on all kinds of matrices wehave tested. Table6 lists times consumed on matrices whose eigenvalues are well-separated, where p denotes the actual number of processors participating computations. To be more speci#0Cc, the diagonal entries of the matrices are 1; 2; #01#01#01;n and... In PAGE 26: ...27 0.24 To see howmuch speedup wehave gotten, we plot two #0Cgures showing speedups hiding in Table6 and Table 7. Glued Wilkinson matrices are always an interesting test matrices.... ..."

### Table 2 lists momenta for other 2-body channels which are likely to appear in the + ? data sample; the ωω peak should be well separated from the others. Note

"... In PAGE 4: ... Table2 : Momenta for pp annihilations into two mesons. that only !! events will contribute to the peak in the momentum spectrum.... ..."

### Table 2. Performance of the model in identifying an optimal discriminant feature space. The within-class and the between-class scatters for both classes (Click: Class 1 and No: Class 2) in the standard and the proposed discriminant-based feature spaces are shown. The values are computed by considering each event to comprise 3 actions. Low within-event and high between-event scatter values indicate that our approach identifies a feature space wherein the classes are compact and well separated.

"... In PAGE 10: ... Optimality of the feature space is defined in terms of the compactness (low variance within an event) and the separability (high variance between events) of the classes. Low within-event and high between-event scatters shown in Table2 , after transforming the fea- tures to a discriminant-based feature space, support our claim that this method identifies an optimal discriminant feature set. The proposed approach is also not sensitive to the action extraction method used.... ..."

### Table 1. Average excess from optimum of greedy spanner (in percent) for Euclidean graphs.

2004

Cited by 2

### Table I. The Performance Comparison of Structures for Heterogeneous Networks (columns: Sparse Spanner, Bounded Degree, Communication Cost)

2004

Cited by 18

### Table VII). For this simulation, we have extracted 3 targets from our large database of real recorded target trajectories. The targets were chosen so that they spent approximately one-half of the simulation in close proximity. The AP algorithm correctly chooses to use IP during the half of the simulation where the targets are well separated and CP during the other half, which results in the stated reduction in computation.

2005

Cited by 7

### Table 4.1 is a summary of the synthetic data sets: Gauss 5c contains 5 classes; each class has 5,000 2-dimensional points generated from a Gaussian distribution. We chose a different mean for each class so that the data are well separated. Fig. (4.6) shows the distribution of Gauss 5c. Similarly, we generated Gauss 10c and Gauss 50c with varying dimensions and numbers of classes.
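Data sets of this kind are easy to reproduce. The sketch below generates well-separated Gaussian clusters in the spirit of Gauss 5c; the excerpt does not give the actual means or variances, so the unit variance, the line of equally spaced means, and the `separation` parameter are illustrative assumptions.

```python
import numpy as np

def make_gauss_data(n_classes=5, points_per_class=5000, dim=2,
                    separation=10.0, seed=0):
    """Synthetic Gaussian clusters in the style of the Gauss 5c set.

    Each class draws `points_per_class` points from a unit-variance
    Gaussian; class means are spaced `separation` apart along a line,
    so the clusters are well separated. (The original data set's exact
    means and variances are not given in the excerpt; these choices
    are illustrative.)
    """
    rng = np.random.default_rng(seed)
    data, labels = [], []
    for c in range(n_classes):
        mean = np.full(dim, c * separation)  # class mean on a line
        data.append(rng.normal(loc=mean, scale=1.0,
                               size=(points_per_class, dim)))
        labels.append(np.full(points_per_class, c))
    return np.vstack(data), np.concatenate(labels)
```

Varying `n_classes` and `dim` gives analogues of Gauss 10c and Gauss 50c.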

in Fast Nonparametric Machine Learning Algorithms for High-dimensional Massive Data and Applications

Cited by 1

### Table 2 presents the standard deviation relative to the value of every affine invariant under 0.05 standard-deviation noise and under 5% missing data, respectively. Due to the way we compute the affine invariants, by applying the inverse of the intrinsic reference system, the robustness of the invariants is tied to the robustness of the reference system. The last two columns of Table 2 show how well separated the scatter invariants are under noise and missing data.

"... In PAGE 5: ... Table2 . Percentage of error in invariants under 0:05 noise and under 5% missing data, respectively.... ..."