### Table 2: A sample data set illustrates clusters embedded in subspaces of a high dimensional space.

2003

"... In PAGE 2: ... Hence, a good subspace clustering algorithm should be able to find clusters and the maximum associated set of dimensions. Consider, for example, a data set with 5 data points in 6 dimensions (given in Table 2). In this data set, it is obvious that C = {x1, x2, x3} is a cluster and the maximum set of dimensions should be P = {1, 2, 3, 4}. ... In PAGE 3: ... where sj is a vector defined as sj = (Aj1, Aj2, ..., Ajnj)^T. Since there are possibly multiple states (or values) for a variable, a symbol table of a data set is usually not unique. For example, for the data set in Table 2, Table 3 is one of its symbol tables. [Table 3: One of the symbol tables of the data set in Table 2.] ... In PAGE 3: ... For a given symbol table of the data set, the frequency table of each cluster is unique with respect to that symbol table. For example, for the data set in Table 2, let (C, P) be a subspace cluster, where C = {x1, x2, x3} and P = {1, 2, 3, 4}; if we use the symbol table presented in Table 3, then the corresponding frequency table for the subspace cluster (C, P) is given in Table 4. From the definition of frequency fjr in Equation (6), we have the following equalities: $\sum_{r=1}^{n_j} f_{jr}(C) = |C|$, j = 1, 2, ... ..."

Cited by 4
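The frequency-table construction quoted in the excerpt above can be sketched in a few lines. This is a hedged illustration, not the cited paper's implementation: the cluster C = {x1, x2, x3} and dimension set P = {1, 2, 3, 4} come from the excerpt, while the concrete categorical values in `data` are invented.

```python
from collections import Counter

# Hypothetical categorical data set: 5 points, 6 dimensions
# (the symbol values are invented; only the shape matches the excerpt).
data = {
    "x1": ["A", "B", "C", "D", "p", "q"],
    "x2": ["A", "B", "C", "D", "r", "s"],
    "x3": ["A", "B", "C", "D", "t", "u"],
    "x4": ["E", "F", "G", "H", "v", "w"],
    "x5": ["I", "J", "K", "L", "y", "z"],
}

def frequency_table(data, C, P):
    """For each dimension j in P, count how often each symbol
    occurs among the points of C (the frequencies f_jr)."""
    return {j: Counter(data[x][j - 1] for x in C) for j in P}

C, P = {"x1", "x2", "x3"}, {1, 2, 3, 4}
freq = frequency_table(data, C, P)

# Sanity check of the quoted identity: sum_r f_jr(C) = |C| for every j in P.
assert all(sum(freq[j].values()) == len(C) for j in P)
```

Because every point of C carries exactly one symbol per dimension, the per-dimension counts must always total |C|, which is the identity the excerpt states.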

### Table 1. Average MAEs for both neighborhood dimensions high-dimensional low-dimensional

"... In PAGE 9: ... Figure 3 includes the Mean Absolute Errors for high (ib) and low (svd-ib) dimensions, as observed for each of the 5 data splits of the data set. These error values are then averaged, and Table 1 records the final results for both implementations. From both the preceding figure and table, we can conclude that applying Item-based Filtering on the low-rank neighborhood provides a clear improvement over the higher-dimension neighborhood. ... ..."

Cited by 1
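The ib / svd-ib comparison described above can be sketched as follows: build item neighborhoods once in the full user-rating space and once in a truncated-SVD latent space. This is a minimal illustration under invented data; the rating matrix, rank `k`, and neighborhood size are assumptions, not the cited paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.integers(1, 6, size=(100, 40)).astype(float)  # users x items ratings

# Low-rank item representation via truncated SVD of rank k.
k = 8
U, s, Vt = np.linalg.svd(R, full_matrices=False)
items_high = R.T                          # items in the full 100-dim user space
items_low = (np.diag(s[:k]) @ Vt[:k]).T   # items in the k-dim latent space

def cosine_neighbors(items, i, n=5):
    """Indices of the n items most similar to item i (cosine similarity)."""
    norms = np.linalg.norm(items, axis=1)
    sims = items @ items[i] / (norms * norms[i] + 1e-12)
    sims[i] = -np.inf                      # exclude the item itself
    return np.argsort(sims)[-n:]

# The neighborhoods, and hence the item-based predictions built on them,
# generally differ between the high- and low-dimensional spaces.
print(cosine_neighbors(items_high, 0))
print(cosine_neighbors(items_low, 0))
```

The point of the low-rank variant is that SVD suppresses rating noise before similarities are computed, which is the mechanism behind the MAE improvement the excerpt reports.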

### Table 19. Test error (in %), high-dimensional data sets.

2006

"... In PAGE 95: ... Table 19 Cont. NMC KNNC LDC QDC natural textures Original 54. ... ..."

### Table 2: Minimal network size for high-dimensional tori, in the form (M, Nmin).

2005

Cited by 12

### Table 2: Rate of correct dimensionality estimation for high dimensional data

### Table 4 Rate of correct dimensionality estimation for high dimensional data

### Table 1. Comparison of training and prediction times on a high dimensional problem

2005

Cited by 2

### Table 1: Comparison of HDS with other related methods on some key features that make it ideally suitable for large, high-dimensional biological datasets. TC stands for time complexity.

2006

"... In PAGE 5: ... HDS performs shaving by ordering points by density, whereas Gene Shaving orders and shaves genes with least correlation with the principal component. Table 1 compares key features of some of these approaches with our framework. Note that only Auto-HDS is capable of automatic cluster selection. ... In PAGE 5: ... The ability to perform robust clustering and model selection is a key aspect of our framework. The set of all the selected clusters need not have the same density in Auto-HDS (last row in Table 1), while OPTICS and DBSCAN are shown as being suitable for indexed low-d spatial data (their current popular usage). ... ..."
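"Shaving by ordering points by density," as the excerpt puts it, can be sketched generically: estimate each point's density and repeatedly drop the least dense fraction. This is a hedged illustration only; the k-nearest-neighbor density proxy, the shave fraction, and the round count are all invented assumptions, not the published HDS algorithm.

```python
import numpy as np

def knn_density(X, k=3):
    """Density proxy: inverse distance to the k-th nearest neighbor."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    kth = np.sort(d, axis=1)[:, k]  # column 0 is each point's distance to itself
    return 1.0 / (kth + 1e-12)

def shave(X, fraction=0.2, rounds=2):
    """Repeatedly discard the lowest-density fraction of the points."""
    idx = np.arange(len(X))
    for _ in range(rounds):
        dens = knn_density(X[idx])
        keep = np.argsort(dens)[int(fraction * len(idx)):]
        idx = idx[np.sort(keep)]
    return idx

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)),    # dense cluster (indices 0..29)
               rng.uniform(-3, 3, (10, 2))])   # sparse background (30..39)
print(shave(X))  # surviving point indices
```

The surviving points are predominantly the dense-cluster points, which is the behavior that makes density shaving a cluster-selection device.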

### Table 1: A comparison of SVM and RLSC (Regularized Least Squares Classification) accuracy on a multiclass classification task (the 20newsgroups dataset with 20 classes and high dimensionality, around 50,000), performed using the standard one-vs-all scheme based on the use of binary classifiers. The top row indicates the number of documents/class used for training. Entries in the table are the fraction of misclassified documents. From [47].

2003

Cited by 48
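The one-vs-all scheme named in the caption above reduces a multiclass problem to one binary scorer per class, predicting the class with the highest score. The sketch below uses a tiny regularized least-squares scorer in that RLSC spirit; the synthetic data, regularization value, and class count are all invented for illustration and have nothing to do with the cited 20newsgroups experiments.

```python
import numpy as np

rng = np.random.default_rng(2)
n_classes, n_features = 3, 10
X = rng.normal(size=(90, n_features))
y = np.repeat(np.arange(n_classes), 30)
X += np.eye(n_classes)[y] @ rng.normal(size=(n_classes, n_features))  # shift per class

# One regularized least-squares scorer per class, trained on +1/-1 targets.
lam = 1.0
W = np.stack([
    np.linalg.solve(X.T @ X + lam * np.eye(n_features),
                    X.T @ np.where(y == c, 1.0, -1.0))
    for c in range(n_classes)
])

pred = np.argmax(X @ W.T, axis=1)  # one-vs-all: take the highest binary score
print("training error:", np.mean(pred != y))
```

With 20 classes, as in the cited task, the same scheme simply trains 20 such binary classifiers, which is why per-class training-set size dominates the table's results.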