### Table 1. Average MAEs for both neighborhood dimensions (high-dimensional and low-dimensional)

"... In PAGE 9: ... Figure 3 includes the Mean Absolute Errors for high (ib) and low (svd-ib) dimensions, as observed for each of the 5 data splits of the data set. These error values are then averaged, and Table 1 records the final results for both implementations. From both the preceding figure and table, we can conclude that applying Item-based Filtering on the low-rank neighborhood provides a clear improvement over the higher-dimension neighborhood.... ..."
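As a sketch of the svd-ib idea described in the excerpt (not the paper's actual implementation — the ratings matrix, rank, and function name below are made up for illustration), item neighborhoods can be computed in a low-rank space obtained by truncated SVD instead of the raw high-dimensional rating space:

```python
import numpy as np

def low_rank_item_similarity(R, k):
    """Project the items of a user-item ratings matrix R onto the top-k
    singular directions, then measure item-item cosine similarity there.
    This is the low-dimensional (svd-ib style) neighborhood; using the raw
    columns of R directly would be the high-dimensional (ib) one."""
    # Truncated SVD: R ~ U_k diag(s_k) Vt_k
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    # One k-dimensional profile per item (items are columns of R)
    items_low = (np.diag(s[:k]) @ Vt[:k, :]).T
    # Cosine similarity between items in the reduced space
    norms = np.linalg.norm(items_low, axis=1, keepdims=True)
    unit = items_low / np.where(norms == 0, 1, norms)
    return unit @ unit.T

# Toy 4-user x 4-item ratings matrix with two obvious item groups
R = np.array([[5., 4., 0., 1.],
              [4., 5., 1., 0.],
              [0., 1., 5., 4.],
              [1., 0., 4., 5.]])
S = low_rank_item_similarity(R, k=2)
# Items 0 and 1 (co-rated highly by the same users) come out far more
# similar to each other than to items 2 and 3.
```

In the reduced space, similarity is driven by the dominant rating patterns rather than by every individual rating, which is one intuition for why the low-rank neighborhood can outperform the raw one.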

Cited by 1

### Table 2: High Dimensional Data Sets (columns: Dim., No Noise Data, Output Noise Data, Input/Output Noise Data)

1997

"... In PAGE 6: ... Each learning set is further divided into 30,000 training examples and 10,000 validation examples: these are used to construct one cascade which forms the approximation for that data set. The 3 rightmost columns of Table 2 contain the test set average relative mean squared error and corresponding standard deviations over 10 independent runs. Relative mean squared error is defined, in the usual sense, as the mean squared error on the test data, divided by the variance of the test data.... In PAGE 6: ... From Table 2 it is evident that the relative error is small when no noise is present. With the addition of noise the relative error approaches the theoretical limit (due to the 3 to 1 signal to noise ratio) of 0.... ..."
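The relative mean squared error defined in the excerpt is straightforward to compute; a minimal sketch (the function name `relative_mse` is ours, not the paper's):

```python
import numpy as np

def relative_mse(y_true, y_pred):
    """Relative mean squared error as defined in the excerpt:
    MSE on the test data divided by the variance of the test data.
    Near 0 means near-perfect prediction; near 1 means the model does
    no better than predicting the test-set mean."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    return mse / np.var(y_true)

# Predicting the test mean gives a relative error of exactly 1.
y = np.array([1.0, 2.0, 3.0, 4.0])
print(relative_mse(y, np.full_like(y, y.mean())))  # → 1.0
```

This normalization is what makes the error comparable across data sets with different output scales.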

Cited by 5

### Table 19. Test error (in %), high-dimensional data sets.

2006

"... In PAGE 95: ... Table 19 Cont. NMC KNNC LDC QDC natural textures Original 54.... ..."

### Table 2: A sample data set illustrating clusters embedded in subspaces of a high-dimensional space.

2003

"... In PAGE 2: ... Hence, a good subspace clustering algorithm should be able to find clusters and the maximum associated set of dimensions. Consider, for example, a data set with 5 data points in 6 dimensions (given in Table 2). In this data set, it is obvious that C = {x1, x2, x3} is a cluster and the maximum set of dimensions should be P = {1, 2, 3, 4}.... In PAGE 3: ... where sj is a vector defined as sj = (Aj1, Aj2, ..., Ajnj)^T. Since there are possibly multiple states (or values) for a variable, a symbol table of a data set is usually not unique. For example, for the data set in Table 2, Table 3 is one of its symbol tables. [Table 3: One of the symbol tables of the data set in Table 2.]... In PAGE 3: ... For a given symbol table of the data set, the frequency table of each cluster is unique according to that symbol table. For example, for the data set in Table 2, let (C, P) be a subspace cluster, where C = {x1, x2, x3} and P = {1, 2, 3, 4}; if we use the symbol table presented in Table 3, then the corresponding frequency table for the subspace cluster (C, P) is given in Table 4. From the definition of frequency fjr in Equation (6), we have the following equalities: Σ_{r=1}^{n_j} f_jr(C) = |C|, j = 1, 2, .... ..."
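The frequency identity quoted at the end of the excerpt — for each dimension j of the subspace, the symbol frequencies over the cluster sum to the cluster size |C| — can be illustrated with a small sketch (the categorical data set below is hypothetical, standing in for the paper's Table 2, whose actual values are not recoverable from the excerpt):

```python
from collections import Counter

# Hypothetical 5-point, 6-dimensional categorical data set in the spirit
# of the excerpt: x1..x3 agree on the first four dimensions.
X = [
    ['A', 'B', 'C', 'D', 'p', 'q'],  # x1
    ['A', 'B', 'C', 'D', 'r', 's'],  # x2
    ['A', 'B', 'C', 'D', 't', 'u'],  # x3
    ['E', 'F', 'G', 'H', 'p', 's'],  # x4
    ['I', 'J', 'K', 'L', 't', 'q'],  # x5
]

def frequency_table(X, C, P):
    """Frequency f_jr: for each dimension j in the subspace P, count how
    often each symbol r occurs among the cluster's points C."""
    return {j: Counter(X[i][j] for i in C) for j in P}

C = {0, 1, 2}     # cluster {x1, x2, x3}
P = [0, 1, 2, 3]  # dimensions {1, 2, 3, 4} (0-indexed here)
freq = frequency_table(X, C, P)
# Identity from the excerpt's Equation (6): per dimension, the
# frequencies sum to |C|.
assert all(sum(freq[j].values()) == len(C) for j in P)
```

Because the cluster's points agree on every dimension of P, each of these Counters concentrates all of |C| on a single symbol, which is exactly what makes (C, P) a good subspace cluster.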

Cited by 4

### Table 1: Comparing different robot learning paradigms based on how they address the credit assignment problem. Robot learning is one of the most interesting and difficult machine learning problems. While much progress has been made by many researchers using different paradigms, much more remains to be done in scaling up the algorithms to work with high-dimensional sensors such as vision, to handle partially observable states, to deal with continuous actions, and to deal with learning from a limited number of examples. A good way to conclude this look...

1996

"... In PAGE 11: ... However, designing a good simulator for the general problem of mobile robots operating in unstructured environments, such as a crowded office or lab, using high-dimensional sensors such as vision, is an enormously difficult task. 5 Discussion We now summarize the four learning paradigms in Table 1, according to how they address the credit assignment problem. In the inductive learning paradigm, the temporal credit assignment problem is solved by the teacher.... ..."

Cited by 11

### Table 2: Minimal network size for high-dimensional tori, in the form of (M, N_min).

2005

Cited by 12

### Tables 3 and 4 show similar trends to those for high-dimensional tori. Linear load, larger buffer size, and technology progress all make networks more likely to benefit from topology improvements, but technology progress also makes networks more sensitive to workload.

2005

"... In PAGE 4: ... In order for a hierarchical torus to save energy, the following inequality must hold: E_R9 (N/(2v) + v/2) < E_R5 N/2, i.e. N (v E_R5 − E_R9) > v^2 E_R9, which is equivalent to v > E_R9/E_R5 (4) and N > v^2 E_R9 / (v E_R5 − E_R9) (5). Inequality (4) determines the minimal express interval for a hierarchical torus to achieve better energy efficiency than a 2-D torus, and inequality (5) determines the minimal network size for a certain express interval. Table 3 lists the minimal express interval and corresponding minimal network size in the form of (v, N_min).... In PAGE 4: ... Table 3: Minimal express interval and corresponding minimal network size for hierarchical tori, in the form of (v, N_min). Linear load Constant load buffer size 4-flit 16-flit 4-flit 16-flit 0.... In PAGE 5: ... For hierarchical tori and express cubes, E_R9/E_R5 is also the minimal express interval. From Tables 3 and 5, the minimal express interval switches between 2 and 3 at 35nm technology for different load models, so which topology is better depends on which load model is closer to reality. 4.... ..."
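The algebra in the excerpt can be checked numerically. A hedged Python sketch (the symbols E_R9, E_R5, v, N follow the excerpt; the starting inequality is a reconstruction from the garbled text, not a quote) verifies that the energy-saving condition and the pair of inequalities (4) and (5) agree over a grid of values:

```python
def hierarchical_saves_energy(ER9, ER5, v, N):
    # Reconstructed starting condition: E_R9 (N/(2v) + v/2) < E_R5 N/2
    return ER9 * (N / (2 * v) + v / 2) < ER5 * N / 2

def conditions_4_and_5(ER9, ER5, v, N):
    # (4) v > E_R9/E_R5  and  (5) N > v^2 E_R9 / (v E_R5 - E_R9).
    # Short-circuiting on (4) keeps the denominator in (5) positive.
    return v > ER9 / ER5 and N > v**2 * ER9 / (v * ER5 - ER9)

# Brute-force check that the two formulations coincide.
agree = all(
    hierarchical_saves_energy(ER9, 1.0, v, N) == conditions_4_and_5(ER9, 1.0, v, N)
    for ER9 in (1.25, 2.0, 2.5)
    for v in range(1, 9)
    for N in range(2, 65)
)
print(agree)  # → True
```

The equivalence follows by multiplying the starting inequality by 2v and rearranging: E_R9 (N + v^2) < E_R5 N v gives N (v E_R5 − E_R9) > v^2 E_R9, which splits into (4) and (5) exactly when v E_R5 − E_R9 > 0.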

Cited by 12

### Table 1: Categories of models of visual cortical maps, and their abbreviations as used in this article. Two versions of the self-organizing map model were investigated: SOM-h (high-dimensional weight vectors) and SOM-l (low-dimensional feature vectors).

1995

"... In PAGE 8: ... Increasingly detailed comparisons between model and experimental data will be included along with each point. To ease comparisons, we group models into categories based on similarities in goals or implementation (Table 1). Structural and spectral models attempt to characterize map patterns using schematic drawings or concise equations.... In PAGE 26: ... The results of our comparison between model predictions and experimental data obtained from the upper layers of macaque striate cortex are summarized in Table 2. References to articles on each model are given in Table 1. Many of the models are also briefly described in the appendix.... ..."

Cited by 64