### Tables 3 and 4 show similar trends to those for high-dimensional tori. Linear load, larger buffer size, and technology progress all make networks more likely to benefit from topology improvements, but technology progress also makes networks more sensitive to workload.

2005

"... In PAGE 4: ... In order for a hierar- chical torus to save energy, the following inequality must hold ER9 a29 N 2v a4 v 2 a31 a26 ER5 a6 N 2 a21 Na3 vER5 a18 ER9a5 a16 v2ER9 which is equivalent to v a16 ER9 ER5 (4) N a16 v2ER9 vER5 a18 ER9 (5) Inequality (4) determines the minimal express interval for a hier- archical torus to achieve better energy efficiency than a 2-D torus, and inequality (5) determines the minimal network size for a certain express interval. Table3 lists the minimal express interval and corre- sponding minimal network size in the form of a3 va25 Nmina5 . Table 3: Minimal express interval and corresponding minimal network size for hierarchical tori, in the form of a3 va25 Nmina5 .... In PAGE 4: ... Table 3 lists the minimal express interval and corre- sponding minimal network size in the form of a3 va25 Nmina5 . Table3 : Minimal express interval and corresponding minimal network size for hierarchical tori, in the form of a3 va25 Nmina5 . Linear load Constant load buffer size 4-flit 16-flit 4-flit 16-flit 0.... In PAGE 5: ... For hierarchical tori and express cubes, ER9 ER5 is also the minimal express interval. From Table3 and 5, the minimal express interval switches between 2 and 3 at 35nm technology for different load mod- els, so which topology is better depends on which load model is closer to reality. 4.... ..."

Cited by 12
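The two inequalities quoted in the entry above can be checked numerically. A minimal sketch, assuming (my reading of the extraction) that E_R9 and E_R5 denote the per-traversal energies of a 9-port express router and a 5-port 2-D torus router, and that v is the express interval; the energy values used below are made up for illustration:

```python
import math

# Sketch (not from the paper): evaluate inequalities (4) and (5).
# Assumption: E_R9 / E_R5 are per-traversal router energies; v is the
# express interval; both inequalities are strict.

def min_express_interval(E_R9, E_R5):
    """Smallest integer v satisfying v > E_R9 / E_R5 (inequality (4))."""
    return math.floor(E_R9 / E_R5) + 1

def min_network_size(v, E_R9, E_R5):
    """Smallest integer N satisfying N > v^2 E_R9 / (v E_R5 - E_R9)
    (inequality (5)); only defined when inequality (4) holds."""
    if v * E_R5 <= E_R9:
        raise ValueError("express interval too small: inequality (4) violated")
    return math.floor(v * v * E_R9 / (v * E_R5 - E_R9)) + 1

# Hypothetical energy ratio E_R9 / E_R5 = 1.5:
v = min_express_interval(1.5, 1.0)   # smallest v with v > 1.5, i.e. v = 2
N = min_network_size(v, 1.5, 1.0)    # smallest N with N > 6 / 0.5 = 12, i.e. 13
```

This mirrors how a (v, Nmin) pair in Table 3 would be derived for one technology point.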

### Table 1: Categories of models of visual cortical maps, and their abbreviations as used in this article. Two versions of the self-organizing map model were investigated: SOM-h (high-dimensional weight vectors) and SOM-l (low-dimensional feature vectors).

1995

"... In PAGE 8: ... In- creasingly detailed comparisons between model and experimental data will be included along with each point. To ease comparisons, we group models into categories based on similarities in goals or implementation, Table1 . Structural and spectral models attempt to characterize map patterns using schematic drawings or concise equations.... In PAGE 26: ... The results of our comparison between model predictions and experimental data obtained from the upper layers of macaque striate cortex are summarized in table 2. References to articles on each model are given in Table1 . Many of the models are also brie y described in the appendix.... ..."

Cited by 64

### Table 1. Average MAEs for both neighborhood dimensions (high-dimensional vs. low-dimensional)

"... In PAGE 9: ... Figure 3 includes the Mean Absolute Errors for high (ib) and low (svd-ib) di- mensions, as observed for each of the 5 data splits of the data set. These error values are then averaged and Table1 records the flnal results for both implemen- tations. From both the preceding flgure and table, we can conclude that applying Item- based Filtering on the low-rank neighborhood, provides a clear improvement over the higher dimension neighborhood.... ..."

Cited by 1
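The evaluation described in the entry above (per-split MAE, then an average across splits) can be sketched as follows; the rating values and split contents are hypothetical:

```python
# Sketch (not from the paper): MAE per data split, averaged across splits.
# Each split pairs actual ratings with predicted ratings (hypothetical data).

def mean_absolute_error(actual, predicted):
    """MAE = mean of |actual - predicted| over all rated items."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

splits = [
    ([4, 3, 5], [3.5, 3.0, 4.0]),   # split 1: (actual, predicted)
    ([2, 4],    [2.5, 3.5]),        # split 2
]

split_maes = [mean_absolute_error(a, p) for a, p in splits]
average_mae = sum(split_maes) / len(split_maes)
```

The table entry for one implementation is then just `average_mae` computed over its five splits.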

### Table 2: A sample data set illustrates clusters embedded in subspaces of a high dimensional space.

2003

"... In PAGE 2: ... Hence, a good subspace clustering algorithm should be able to find clusters and the maximum associated set of dimensions. Consider, for example, a data set with 5 data points of 6 dimensional(given in Table2 ). In this data set, it is obvious that C = {x1, x2, x3} is a cluster and the maximum set of dimensions should be P = {1, 2, 3, 4}.... In PAGE 3: ...here sj is a vector defined as sj = (Aj1, Aj2, ..., Ajnj)T. Since there are possibly multiple states(or values) for a vari- able, a symbol table of a data set is usually not unique. For example, for the data set in Table2 , Table 3 is one of its symbol tables. BC BS A A A A B B B B C C C C D D D D BD BT Table 3: One of the symbol tables of the data set in Table 2.... In PAGE 3: ... For a given symbol table of the data set, the frequency table of each cluster is unique according to that symbol table. For example, for the data set in Table2 , let (C, P) be a subspace cluster, where C = {x1, x2, x3} and P = {1, 2, 3, 4}, if we use the symbol table presented in Table 3, then the corre- sponding frequency table for the subspace cluster (C, P) is given in Table 4. From the definition of frequency fjr in Equation (6), we have the following equalities: nj CG r=1 fjr(C) = |C|, j = 1, 2, .... ..."

Cited by 4
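The frequency-table construction described in the entry above, and the consistency check that the frequencies in each dimension sum to |C|, can be sketched as follows; the data points and symbols below are hypothetical, not the paper's Table 2:

```python
from collections import Counter

# Sketch (not from the paper): frequency f_jr of the r-th symbol of
# dimension j within cluster C, restricted to the dimension set P.
# Hypothetical 6-dimensional data; x1..x3 agree on dimensions 0..3.

points = {
    "x1": ("A", "B", "C", "D", "E", "F"),
    "x2": ("A", "B", "C", "D", "G", "H"),
    "x3": ("A", "B", "C", "D", "I", "J"),
    "x4": ("K", "L", "M", "N", "O", "P"),
    "x5": ("Q", "R", "S", "T", "U", "V"),
}
C = ["x1", "x2", "x3"]   # cluster
P = [0, 1, 2, 3]         # associated set of dimensions (0-based here)

# Frequency table: one symbol-count per dimension j in P.
freq = {j: Counter(points[x][j] for x in C) for j in P}

# The equality quoted above: for every j, sum_r f_jr(C) equals |C|.
assert all(sum(freq[j].values()) == len(C) for j in P)
```

Each `freq[j]` is one column of the frequency table for the subspace cluster (C, P).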

### Table 19. Test error (in %), high-dimensional data sets.

2006

"... In PAGE 95: ... Table19 Cont. NMC KNNC LDC QDC natural textures Original 54.... ..."

### Table 3 summarizes the experiments performed. Spherical K-means is used for the last two datasets because they are so high-dimensional and non-Gaussian that regular K-means performs miserably on them [18]. Ensemble-A indicates the original ranges of k chosen. We found that, given

2002

"... In PAGE 10: ... Algo. Similarity Natural-a243 Ensemble-A Ensemble-B 8D5K K-Means Euclidean 5 a243a245a244a247a246 a248a66a249a150a248a98a249a92a250a19a251a13a252 a243a126a244a242a246 a253a126a249a92a250a254a249a32a255a57a252 PENDIG K-Means Euclidean 10 a243a245a244a247a246 a248a66a249a65a253a126a249a1a0a65a251a13a252 a243a126a244a247a246 a255a98a249a92a250a254a249a92a250a64a248a64a252 NEWS20 Spherical K-Means Cosine 20 a243a245a244a3a2a5a4a7a6a9a8a165a246a169a250a19a251a98a249a92a250a9a251a98a249a1a10a65a251a13a252 a243a126a244a242a246a169a250a18a253a245a249a32a248a98a249a150a248a11a10a57a252 YAHOO Spherical K-Means Cosine 20 a243a245a244a3a2a5a4a7a6a9a8a165a246a169a250a19a251a98a249a92a250a9a251a98a249a1a10a65a251a13a252 a243a126a244a242a246a169a250a18a253a245a249a32a248a98a249a150a248a11a10a57a252 Table3 . Details of the datasets and cluster ensembles with varying a243 .... ..."

Cited by 8
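The entry above notes that spherical K-means is used for the high-dimensional text datasets. A minimal sketch of one spherical K-means iteration (unit-normalized vectors, cosine-similarity assignment, centroids re-projected onto the unit sphere); the 2-D points and initial centroids are hypothetical:

```python
import math

# Sketch (not from the paper): one assignment + update step of
# spherical K-means on hypothetical data.

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine(u, v):
    return sum(a * b for a, b in zip(u, v))  # both vectors are unit-length

def spherical_kmeans_step(points, centroids):
    points = [normalize(p) for p in points]
    centroids = [normalize(c) for c in centroids]
    # Assignment: nearest centroid = largest cosine similarity (dot product).
    labels = [max(range(len(centroids)),
                  key=lambda c: cosine(p, centroids[c])) for p in points]
    # Update: mean of assigned points, re-projected onto the unit sphere.
    new_centroids = []
    for c in range(len(centroids)):
        members = [p for p, lab in zip(points, labels) if lab == c]
        if members:
            mean = [sum(col) / len(members) for col in zip(*members)]
            new_centroids.append(normalize(mean))
        else:
            new_centroids.append(centroids[c])  # keep empty cluster's centroid
    return labels, new_centroids

points = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
centroids = [[1.0, 0.0], [0.0, 1.0]]
labels, cents = spherical_kmeans_step(points, centroids)
```

Normalizing both points and centroids is what makes the dot product equal the cosine similarity, which is the reason this variant behaves better than Euclidean K-means on directional, high-dimensional data.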

### Table 2: Minimal network size for high-dimensional tori, in the form of (M, Nmin).

2005

Cited by 12

### Table 4.5: High-dimensional stiff ODE system II: classical approach.

2005