### Table 1: Algorithm for planning in low-dimensional belief space.


"... In PAGE 4: ... Our conversion algorithm is a variant of the Augmented MDP, or Coastal Navigation algorithm [9], using belief features instead of entropy. Table 1 outlines the steps of this... ..."

### Table 3.1 Low-dimensional datasets used in experimental evaluation


### Table 1. Upper bounds on the coding gain of low-dimensional lattices

1999

"... In PAGE 8: ... 2.2 can be converted into an upper bound on the highest possible coding gain that may be achieved, at specific symbol error probabilities, using any n-dimensional lattice. The resulting bounds for n = 1, 2, …, 32 are summarized in Table 1, and compared with the nominal coding gains of the best known lattices in the corresponding dimensions. Coding gain is usually defined in terms of the signal-to-noise ratios required by the coded and uncoded systems to achieve a given probability of error. ... In PAGE 8: ... For the sake of brevity, we only consider the case where n is even. The development for n odd is similar [27], and the results are summarized in Table 1 for all odd n ≤ 31. For even n, let k = n/2 and define the function g_k(x) := e^{-x} (1 + x/1! + x²/2! + … + x^{k-1}/(k-1)!)  (14). The lower bound (9) of Theorem 2. ... In PAGE 10: ... Theorem 2.4. Let Λ be an n-dimensional lattice, and let n = 2k. Then the coding gain of Λ over Z^n is upper bounded by a quantity of the form c(k, P_e)² z(k, P_e) / (4 (k!)^{1/k})  (19). The coding gain γ_e(Λ) is defined in terms of lattice SNR, and the foregoing bound is parametrized by both the dimension and the probability of symbol error. The bound of (19) is tabulated for normalized error probabilities P_e = 10⁻⁵, 10⁻⁶, 10⁻⁷ and dimensions n ≤ 32 in Table 1. All the entries in Table 1 are given in dB. ... In PAGE 10: ... 2.4 is not asymptotic for P_e → 0; it is reasonably tight for symbol error rates of practical interest. As can be seen from Table 1, it is generally much tighter than the results obtained by computing the nominal (asymptotic for P_e → 0) coding gains of the densest known n-dimensional lattices, and/or the upper bounds thereupon. ... ..."
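Equation (14) in the excerpt defines g_k as a truncated exponential series, which is also the probability that a Poisson(x) variable takes a value below k. A minimal sketch of that definition (the function name is ours, not the paper's):

```python
import math

def g_k(x: float, k: int) -> float:
    """Truncated exponential series from eq. (14):
    g_k(x) = e^{-x} * (1 + x/1! + x^2/2! + ... + x^{k-1}/(k-1)!).
    Equivalently, P(N < k) for N ~ Poisson(x)."""
    partial_sum = sum(x**i / math.factorial(i) for i in range(k))
    return math.exp(-x) * partial_sum
```

Note that g_k(0) = 1 for any k ≥ 1, and g_k(x) → 1 as k grows for fixed x, consistent with the series converging to e^x.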

Cited by 5

### Table 1: Categories of models of visual cortical maps, and their abbreviations as used in this article. Two versions of the self-organizing map model were investigated: SOM-h (high-dimensional weight vectors) and SOM-l (low-dimensional feature vectors).

1995

"... In PAGE 8: ... Increasingly detailed comparisons between model and experimental data will be included along with each point. To ease comparisons, we group models into categories based on similarities in goals or implementation (Table 1). Structural and spectral models attempt to characterize map patterns using schematic drawings or concise equations. ... In PAGE 26: ... The results of our comparison between model predictions and experimental data obtained from the upper layers of macaque striate cortex are summarized in Table 2. References to articles on each model are given in Table 1. Many of the models are also briefly described in the appendix. ... ..."

Cited by 64

### Table 1. Stripped-down low-dimensional models. The models have been simplified by elimination of all terms that do not affect the threshold exponent.

1997

"... In PAGE 10: ... Thus: let us remove them. Table 1 summarizes some of the models we have discussed, but with nonlinear terms removed that do not affect the threshold exponent. We now present heuristic arguments that explain the threshold exponents we have observed. ... ..."

Cited by 13

### Table 2. The proportion of square modular matrices of low-dimensional kernel.

1999

Cited by 19

### Table 1: Recognition performance measured as d′ for the range and texture map data. With face images, this analysis has shown that different low-dimensional representations of faces are optimal for recognition versus categorizations (e.g., sex and race classifications [10]). This analysis is likely to prove fruitful for the present stimuli, since we are able to separate texture-based versus surface-based information.

"... In PAGE 5: ... This technique is commonly applied in the psychological literature to measure human recognition memory for faces, and serves as a summary measure for the more complete cosine distributions in each condition, which we will describe in a forthcoming paper using ROC analysis. These data appear in Table 1, for the surface maps (top) and for the texture maps (bottom). In each table cell, three values are given: (1) the hit rate: the proportion of times a learned face was correctly labeled learned; (2) the false alarm rate: the proportion of times an unlearned face was incorrectly called learned; and (3) the d′: the discrimination index. ... ..."
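The excerpt reports hit rates, false-alarm rates, and d′ per cell but does not spell out how d′ is computed. In standard signal-detection theory it is d′ = z(hit rate) − z(false-alarm rate), with z the inverse of the standard normal CDF. A minimal sketch under that standard definition (function name ours):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection discrimination index:
    d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse standard normal CDF (the probit)."""
    z = NormalDist().inv_cdf  # standard normal quantile function
    return z(hit_rate) - z(fa_rate)
```

For example, equal hit and false-alarm rates give d′ = 0 (no discrimination), while a hit rate of 0.84 against a false-alarm rate of 0.16 gives d′ of roughly 2.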