### Table 19. Test error (in %), high-dimensional data sets.

2006

"... In PAGE 95: ... Table 19 Cont. NMC KNNC LDC QDC natural textures Original 54.... ..."

### Table 6: Behavior of V-detector at high dimensionality (columns: dimensionality; detection rate, SD; false alarm rate, SD; number of detectors, SD)

2006

"... In PAGE 7: ... We generated the detector set using those points as self samples and then tried to classify 1000 test points that were randomly drawn from the entire hypercube. Table 6 shows the results from n = 3 through n = 12.... ..."

Cited by 1

### Table 3: F1 and number of Support Vectors for top two Medline queries. 5 Conclusions: The paper has presented a novel kernel for text analysis, and tested it on a categorization task, which relies on evaluating an inner product in a very high dimensional feature space. For a given sequence length k (k = 5 was used in the experiments reported) the features are indexed by all strings of length k. Direct computation of

2002

Cited by 199
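The snippet above describes evaluating an inner product over features indexed by all strings of length k without building the exponential feature space explicitly. As an illustrative sketch only (the paper's kernel also admits non-contiguous subsequences with a decay weight; what follows is the simpler contiguous "k-spectrum" variant, and the function name is my own):

```python
from collections import Counter

def k_spectrum_kernel(s, t, k=5):
    """Inner product in the space of all length-k substrings.

    Each string maps to a sparse count vector over its contiguous
    k-grams; the kernel is the dot product of those vectors,
    computed without materialising the |alphabet|^k feature space.
    """
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    # Iterate over the smaller support for efficiency.
    if len(ct) < len(cs):
        cs, ct = ct, cs
    return sum(c * ct[g] for g, c in cs.items())
```

For k = 5 over a byte alphabet the implicit feature space has 256^5 dimensions, yet the hashed-count evaluation above runs in time roughly linear in the lengths of the two strings.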

### Table 2: The matrices in these instances were generated with dependent matrices, as explained above. In this example we again note the same trends as in the first example: the gap between the static and the adaptable solutions increases with the size of the uncertainty set, and the value of 2,4-adaptability is better for low-dimensional uncertainty sets than for high-dimensional ones.

2007

### Table 3 summarizes the experiments performed. Spherical K-means is used for the last two datasets because they are so high-dimensional and non-Gaussian that regular K-means performs miserably on them [18]. Ensemble-A indicates the original ranges of a2 chosen. We found that, given

2002

"... In PAGE 10: ... [Table rows for Algo., Similarity, Natural clusters, Ensemble-A, Ensemble-B: 8D5K, K-Means, Euclidean, 5; PENDIG, K-Means, Euclidean, 10; NEWS20, Spherical K-Means, Cosine, 20; YAHOO, Spherical K-Means, Cosine, 20; the remaining numeric range entries did not survive extraction.] Table 3. Details of the datasets and cluster ensembles with varying [garbled symbol].... ..."

Cited by 8
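The excerpt notes that spherical K-means with cosine similarity was used for the very high-dimensional NEWS20 and YAHOO text datasets. A minimal sketch of that algorithm, assuming dense NumPy inputs and with all names my own, might look like:

```python
import numpy as np

def spherical_kmeans(X, k, iters=50, seed=0):
    """Spherical k-means: cluster unit vectors by cosine similarity.

    Points and centroids live on the unit hypersphere; assignment
    maximises the dot product (equals cosine for unit vectors), and
    each new centroid is the normalised mean of its members.
    """
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    C = X[rng.choice(len(X), size=k, replace=False)]   # init from data
    for _ in range(iters):
        labels = (X @ C.T).argmax(axis=1)              # nearest by cosine
        for j in range(k):
            members = X[labels == j]
            if len(members):
                m = members.sum(axis=0)
                C[j] = m / np.linalg.norm(m)           # re-project to sphere
    return labels, C
```

The re-projection step is what distinguishes this from ordinary K-means: centroids stay on the sphere, so distances in the radial direction (document length, in text applications) never influence the clustering.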

### Table 1. Average MAEs for both neighborhood dimensions (high-dimensional vs. low-dimensional)

"... In PAGE 9: ... Figure 3 includes the Mean Absolute Errors for high (ib) and low (svd-ib) dimensions, as observed for each of the 5 data splits of the data set. These error values are then averaged and Table 1 records the final results for both implementations. From both the preceding figure and table, we can conclude that applying Item-based Filtering on the low-rank neighborhood provides a clear improvement over the higher-dimension neighborhood.... ..."

Cited by 1

### Table 4.5: High-dimensional stiff ODE system II: classical approach.

2005

### TABLE 6 Simulated Flit Traversal Energy (pJ) of High-dimensional Tori

### Table 2. Various methods and algorithms mentioned in section 3.1 and their ability to confront effectively the issues mentioned in the same section (Incremental Updates, Performance in Text Classification Tasks, High Dimensionality, Low Computational Cost, Concept Drift, Dynamic Feature Space).

"... In PAGE 7: ... complexity for training the filtering models, updating them and providing recommendations. In Table 2, we summarize the basic characteristics of the aforementioned systems in terms of the issues discussed in this section. Table 2.... ..."

### Table 1. The Isomap algorithm takes as input the distances dX(i,j) between all pairs i,j from N data points in the high-dimensional input space X, measured either in the standard Euclidean metric (as in Fig. 1A) or in some domain-specific metric (as in Fig. 1B). The algorithm outputs coordinate vectors yi in a d-dimensional Euclidean space Y that (according to Eq. 1) best represent the intrinsic geometry of the data. The only free parameter (e or K) appears in Step 1.

"... In PAGE 2: ... These approximations are computed efficiently by finding shortest paths in a graph with edges connecting neighboring data points. The complete isometric feature mapping, or Isomap, algorithm has three steps, which are detailed in Table 1. The first step determines which points are neighbors on the manifold M, based on the distances dX(i,j) between pairs of points i,j in the input space X.... ..."
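The three Isomap steps described above (neighbor graph, shortest-path geodesic estimates, low-dimensional embedding) can be sketched as follows. This shows the K-neighbor variant of Step 1; Floyd-Warshall is just one simple choice for Step 2 (the original work also mentions Dijkstra); Step 3 is classical MDS. All names and defaults here are illustrative:

```python
import numpy as np

def isomap(X, n_neighbors=5, d=2):
    """Minimal Isomap sketch: K-NN graph -> geodesics -> classical MDS."""
    N = len(X)
    # Step 1: pairwise Euclidean distances and symmetric K-NN graph.
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    G = np.full((N, N), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(N):
        for j in np.argsort(D[i])[1:n_neighbors + 1]:
            G[i, j] = G[j, i] = D[i, j]
    # Step 2: Floyd-Warshall all-pairs shortest paths (O(N^3); fine
    # for a sketch, replaced by Dijkstra on larger inputs).
    for k in range(N):
        G = np.minimum(G, G[:, [k]] + G[[k], :])
    # Step 3: classical MDS on the geodesic distance matrix.
    J = np.eye(N) - np.ones((N, N)) / N
    B = -0.5 * J @ (G ** 2) @ J            # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:d]          # top-d eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

The sketch assumes the neighborhood graph is connected; if it is not, some geodesic estimates remain infinite and the embedding step fails, which is why the choice of the free parameter K (or e) matters.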