### Table 1. Average MAEs for both neighborhood dimensions (high-dimensional and low-dimensional)

"... In PAGE 9: ... Figure 3 includes the Mean Absolute Errors for high (ib) and low (svd-ib) dimensions, as observed for each of the 5 data splits of the data set. These error values are then averaged and Table 1 records the final results for both implementations. From both the preceding figure and table, we can conclude that applying Item-based Filtering on the low-rank neighborhood provides a clear improvement over the higher-dimension neighborhood.... ..."

Cited by 1
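
The svd-ib idea described in this excerpt, building item neighborhoods in a low-rank latent space rather than in the raw rating space, can be sketched roughly as follows. The rating matrix, the rank k = 2, and all variable names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def item_neighbors(item_vectors, item, k=2):
    """Return the k nearest items to `item` by cosine similarity."""
    v = item_vectors[item]
    norms = np.linalg.norm(item_vectors, axis=1) * np.linalg.norm(v)
    sims = item_vectors @ v / np.maximum(norms, 1e-12)
    sims[item] = -np.inf  # exclude the item itself
    return np.argsort(sims)[::-1][:k]

# Toy user-item rating matrix (rows: users, columns: items).
R = np.array([[5, 4, 1, 0],
              [4, 5, 0, 1],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

# High-dimensional neighborhood (ib): items described by raw rating columns.
ib_neighbors = item_neighbors(R.T, item=0)

# Low-dimensional neighborhood (svd-ib): items described by their
# coordinates in a rank-2 truncated-SVD latent space.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
item_latent = (np.diag(s[:2]) @ Vt[:2]).T  # one 2-d vector per item
svd_ib_neighbors = item_neighbors(item_latent, item=0)
```

In both spaces the neighborhood of item 0 is then computed with the same cosine rule, which is what makes the high/low-dimensional comparison in the excerpt meaningful.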

### Table 1: Categories of models of visual cortical maps, and their abbreviations as used in this article. Two versions of the self-organizing map model were investigated: SOM-h (high-dimensional weight vectors) and SOM-l (low-dimensional feature vectors).

1995

"... In PAGE 8: ... Increasingly detailed comparisons between model and experimental data will be included along with each point. To ease comparisons, we group models into categories based on similarities in goals or implementation (Table 1). Structural and spectral models attempt to characterize map patterns using schematic drawings or concise equations.... In PAGE 26: ... The results of our comparison between model predictions and experimental data obtained from the upper layers of macaque striate cortex are summarized in Table 2. References to articles on each model are given in Table 1. Many of the models are also briefly described in the appendix.... ..."

Cited by 64

### Table 2: The matrices in these instances were generated with dependent matrices, as explained above. In this example we again note the same trends as in the first example: the gap between the static and the adaptable solutions increases with the size of the uncertainty set, and the value of 2,4-adaptability is better for low-dimensional uncertainty sets than for high-dimensional ones.

2007

### Table 2: A sample data set illustrating clusters embedded in subspaces of a high-dimensional space.

2003

"... In PAGE 2: ... Hence, a good subspace clustering algorithm should be able to find clusters and the maximum associated set of dimensions. Consider, for example, a data set with 5 data points in 6 dimensions (given in Table 2). In this data set, it is obvious that C = {x1, x2, x3} is a cluster and the maximum set of dimensions should be P = {1, 2, 3, 4}.... In PAGE 3: ... where sj is a vector defined as sj = (Aj1, Aj2, ..., Ajnj)T. Since there are possibly multiple states (or values) for a variable, a symbol table of a data set is usually not unique. For example, for the data set in Table 2, Table 3 is one of its symbol tables.... In PAGE 3: ... For a given symbol table of the data set, the frequency table of each cluster is unique according to that symbol table. For example, for the data set in Table 2, let (C, P) be a subspace cluster, where C = {x1, x2, x3} and P = {1, 2, 3, 4}; if we use the symbol table presented in Table 3, then the corresponding frequency table for the subspace cluster (C, P) is given in Table 4. From the definition of frequency fjr in Equation (6), we have the following equalities: Σ_{r=1}^{nj} fjr(C) = |C|, j = 1, 2, .... ..."

Cited by 4
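
The frequency-table computation sketched in these snippets, where fjr counts how often symbol r appears in dimension j among the points of cluster C and therefore sums to |C| in every dimension, might look like this in outline. The data values and 0-based indexing are illustrative, not the paper's Table 2:

```python
from collections import Counter

def frequency_table(data, C, P):
    """For each dimension j in P, count occurrences of each symbol
    among the points of cluster C (f_jr in the paper's notation)."""
    return {j: Counter(data[i][j] for i in C) for j in P}

# 5 points in 6 categorical dimensions (illustrative values only):
# x1, x2, x3 agree on dimensions 0-3, so (C, P) below is a subspace cluster.
data = [
    ("a", "b", "c", "d", "p", "q"),  # x1
    ("a", "b", "c", "d", "r", "s"),  # x2
    ("a", "b", "c", "d", "t", "u"),  # x3
    ("e", "f", "g", "h", "p", "s"),  # x4
    ("i", "j", "k", "l", "r", "q"),  # x5
]
C = {0, 1, 2}     # indices of x1, x2, x3
P = {0, 1, 2, 3}  # dimensions on which they agree

freq = frequency_table(data, C, P)
```

The identity from Equation (6) then holds by construction: for every dimension j in P, the counts in freq[j] sum to |C|.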

### Table 2. The proportion of square modular matrices of low-dimensional kernel.

1999

Cited by 19
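
As a rough illustration of the kind of quantity tabulated here (without claiming this is the paper's setup), one can enumerate small square matrices over Z_p and measure the fraction with a nontrivial kernel, i.e. kernel dimension at least one. All names below are ours; the brute-force approach is only feasible for tiny n and p:

```python
from itertools import product

def det_mod(M, p):
    """Determinant mod p by cofactor expansion (fine for tiny matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0] % p
    d = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        d += (-1) ** j * M[0][j] * det_mod(minor, p)
    return d % p

def singular_fraction(n, p):
    """Fraction of n-by-n matrices over Z_p with nontrivial kernel,
    found by exhaustive enumeration of all p**(n*n) matrices."""
    total = singular = 0
    for entries in product(range(p), repeat=n * n):
        M = [entries[i * n:(i + 1) * n] for i in range(n)]
        if det_mod(M, p) == 0:
            singular += 1
        total += 1
    return singular / total
```

For example, of the 16 binary 2x2 matrices, 6 are invertible over Z_2, so `singular_fraction(2, 2)` is 10/16; this matches the classical count |GL_n(Z_p)| = (p^n - 1)(p^n - p)...(p^n - p^(n-1)).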

### Table 1: Algorithm for planning in low-dimensional belief space.

in Abstract

"... In PAGE 4: ... Our conversion algorithm is a variant of the Augmented MDP, or Coastal Navigation algorithm [9], using belief features instead of entropy. Table 1 outlines the steps of this... ..."
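
The belief-compression step mentioned here, replacing a full belief vector with a few summary features as the Augmented MDP does with entropy, can be sketched as follows. The feature pair (most-likely state, binned entropy) follows the Coastal Navigation idea described in the excerpt, but the discretization details are our own illustrative assumptions:

```python
import numpy as np

def belief_features(b, entropy_bins=4):
    """Compress a belief vector b over discrete states into a
    low-dimensional feature pair: (most-likely state, binned entropy)."""
    b = np.asarray(b, dtype=float)
    b = b / b.sum()                # ensure normalization
    ml_state = int(np.argmax(b))
    nz = b[b > 0]
    h = -np.sum(nz * np.log(nz))   # Shannon entropy, in nats
    h_max = np.log(len(b))         # maximum entropy = uniform belief
    bin_idx = min(int(entropy_bins * h / h_max), entropy_bins - 1) if h_max > 0 else 0
    return ml_state, bin_idx

# A peaked belief and a near-uniform belief map to different MDP states:
peaked = belief_features([0.9, 0.05, 0.03, 0.02])
spread = belief_features([0.3, 0.25, 0.25, 0.2])
```

Planning then runs over the small discrete feature space instead of the continuous belief simplex, which is the dimensionality reduction the table caption refers to.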

### Table 6: Behavior of V-detector at high dimensionality. Columns: dimensionality; detection rate (SD); false alarm rate (SD); number of detectors (SD).

2006

"... In PAGE 7: ... We generated the detector set using those points as self samples and then tried to classify 1000 test points that were randomly drawn from the entire hypercube. Table 6 shows the results from n = 3 through n = 12.... ..."

Cited by 1
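
The experiment described in the excerpt, training on self samples inside the unit hypercube and then labeling randomly drawn test points, can be imitated with a minimal variable-radius negative-selection sketch. This is only a schematic of the V-detector idea under our own assumptions (self radius, detector budget), not the authors' algorithm:

```python
import math
import random

def generate_detectors(self_samples, n, self_radius=0.1,
                       max_detectors=50, tries=2000, seed=0):
    """Sample candidate centers in [0,1]^n; keep those outside the self
    region, each with radius = distance to nearest self sample minus
    the self radius (the 'variable radius' in V-detector)."""
    rng = random.Random(seed)
    detectors = []
    for _ in range(tries):
        c = [rng.random() for _ in range(n)]
        d = min(math.dist(c, s) for s in self_samples)
        if d > self_radius:                 # candidate lies in non-self space
            detectors.append((c, d - self_radius))
        if len(detectors) >= max_detectors:
            break
    return detectors

def is_nonself(x, detectors):
    """A point is classified non-self if any detector covers it."""
    return any(math.dist(x, c) <= r for c, r in detectors)

n = 3
# Self samples clustered near the center of the unit cube (illustrative).
self_samples = [[0.5 + 0.05 * random.Random(i).uniform(-1, 1)
                 for _ in range(n)] for i in range(20)]
detectors = generate_detectors(self_samples, n)
```

By construction every detector's radius stops short of the nearest self sample, so self points are never covered, mirroring the self-tolerance property the table's false-alarm column measures.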

### Table 19. Test error (in %), high-dimensional data sets.

2006

"... In PAGE 95: ... Table 19 Cont. NMC KNNC LDC QDC natural textures Original 54.... ..."

### Table 1: Low Dimensional Data Sets. Columns: data (published source); previously published error; proposed algorithm's error over 100/10 runs (best, average, s.d.).

1997

"... In PAGE 4: ... For each Test Data point (Test Data was not used during learning), the outputs of the 10 regression functions were averaged to produce the final approximation output, for which error results are reported. To test reproducibility, 100 independent approximations (independent with respect to random sequences of input variables and bootstrap samples as defined in Section 2) were generated using the Learning Data: Table 1 reports the best Test Set error, along with the average and standard deviation (s.d.... In PAGE 4: ... For each of the 100 learning sets a regression function was constructed (using 10-fold cross-validation as described above) and evaluated on the corresponding test set. Experimental results for the 100 experiments are given in Table 1. The previously published error for the [Breiman, 1996] data refers to the average bagged error reported.... In PAGE 4: ... As reported in Section 2, the ordering of the independent variables is random, and the construction of the 2-dimensional functions gl( ) is done using a (random) bootstrap sample of the training data. The best and average relative mean squared test set error, and its standard deviation, are reported in Table 1. From Table 1, it is evident that although there is some variation in error performance from run to run, the stochastic effect of the algorithm is mostly negligible. The previously published error given in Table 1 under the [Rasmussen, 1996] data sets is the best reported error (indicated by brackets) of the 5 algorithms evaluated, and was obtained from the graphs presented in the paper.... In PAGE 4: ... For both the [Breiman, 1996] and [Rasmussen, 1996] data sets, the average learning time ranged from about 1 to 10 minutes per approximation (all learning times reported in this paper are for the proposed algorithm running on a Pentium Pro 150 using LINUX).... In PAGE 4: ... In order to estimate the reproducibility of the proposed algorithm on this data, we constructed 10 independent approximations. The best and average relative error on the validation set (in accordance with [Jordan and Jacobs, 1994]), and its standard deviation over these 10 independent experiments, is reported in Table 1. The previously published error, shown in Table 1, is the best relative error (on the validation data) of the 7 algorithms studied in [Jordan and Jacobs, 1994].... In PAGE 5: ....1042 s.d. 0.0003; the size of each cascade was about 1 M-byte. Figure 1 ("No Noise Data: Increase in Approximation Size with Dimension") plots approximation size in bytes against data dimension. From Table 1, it is evident that the proposed algorithm demonstrated as good or better error results on all but 1 (the servo data) of the data sets. However, this result should be interpreted with caution.... In PAGE 5: ... The regression function continues to grow in parameter size until the mean squared error can no longer be reduced: this is beneficial if one wants the "best" approximation, but detrimental if one wants a representation of fixed size. Most algorithms referred to in Table 1 are parametric and therefore of fixed size. Two exceptions are MARS and CART.... ..."

Cited by 5
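
The procedure quoted above, building several regression functions on bootstrap samples and averaging their outputs on held-out test points, is essentially bagging. A compact sketch with a simple linear base learner and a synthetic noiseless target (our assumptions, not the paper's data sets or its 2-dimensional functions gl):

```python
import random

def fit_line(points):
    """Least-squares line y = a*x + b through (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def bagged_predictor(data, n_models=10, seed=0):
    """Fit one base model per bootstrap resample of the training data,
    then predict by averaging the models' outputs."""
    rng = random.Random(seed)
    models = [fit_line([rng.choice(data) for _ in data])
              for _ in range(n_models)]
    return lambda x: sum(m(x) for m in models) / n_models

train = [(x / 10, 2 * (x / 10) + 1) for x in range(20)]  # noiseless y = 2x + 1
predict = bagged_predictor(train)
```

With noiseless data every bootstrap fit recovers the same line, so the averaged output is exact; with noisy data the averaging reduces the run-to-run variance, which is the "mostly negligible stochastic effect" the excerpt reports.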

### Table 4: Communication cost for low-dimensional hypercube matrices with domain partitioning for p = 100

1994

"... In PAGE 22: ... c = 4 / (2nz(A)/p) = 2p/nz(A). (21) This implies that problems with more than 200,000 nonzeros can be solved efficiently on a 100-processor BSP computer with l ≤ 1000. 7 Results for structure dependent distributions Table 4 shows the normalised communication cost for hypercube matrices of distance one and dimension d = 2, 3, 4, distributed by domain partitioning of the corresponding hypercube graph. The radix r is the number of points in each dimension, and P_k, 0 ≤ k < d, is the number of subdomains into which dimension k is split.... In PAGE 22: ... The distribution of the grid points and hence of the vector components uniquely determines the distribution of the matrix. The results of Table 4 show that the lowest communication cost for separate dimension splitting is achieved if the resulting blocks are cubic. This is an immediate consequence of the surface-to-volume effect, where the communication across the block boundaries grows as the number of points near the surface, and the computation as the number of points within the volume of the block.... In PAGE 22: ... By symmetry, the same argument holds for sending. Therefore, the normalised communication cost for cubic partitioning is b = 2d p^(1/d) / ((4d + 1) r) ≈ p^(1/d) / (2r). (22) This formula explains the results for d = 2 and P_0 = P_1 = 10 in Table 4. It implies for instance that two-dimensional grid problems with more than 45 grid points per direction can be solved efficiently on 100-processor BSP computers with g ≤ 10.... ..."

Cited by 81
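
The cubic-partitioning cost formula quoted in the excerpt, b = 2d p^(1/d) / ((4d + 1) r), can be checked numerically. The helper below reproduces the quoted claim that for d = 2 and p = 100, efficiency with g ≤ 10 needs roughly r ≥ 45; taking g·b ≤ 1 as the efficiency criterion is our assumption, as is every name in the sketch:

```python
def comm_cost(d, p, r):
    """Normalised communication cost of cubic domain partitioning of a
    d-dimensional grid over p processors with radix r (Eq. 22)."""
    return 2 * d * p ** (1 / d) / ((4 * d + 1) * r)

def min_radix(d, p, g):
    """Smallest radix r per dimension satisfying g * b <= 1
    (assumed efficiency criterion)."""
    r = 1
    while g * comm_cost(d, p, r) > 1:
        r += 1
    return r
```

For d = 2, p = 100, g = 10 this gives 400/(9r) ≤ 1, i.e. r ≥ 45, and the factor 2d/(4d + 1) ≈ 1/2 explains the approximation b ≈ p^(1/d)/(2r).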