### Table 1: Breakdown of committed instruction types (64 players, small map).

2001

"... In PAGE 8: ... To gain insight on the low IPC we first report statistics on the mix of instructions executed by the server. As shown in Table 1, 15.7% of the instructions are branches, 63.... ..."

Cited by 9


### Table 1: Comparing different approximations of the number of solutions of a small map coloring problem.

2005

Cited by 5

### Table 16. Maximum Final Localization Error (E5): Small-Scale Mapping Simulation

2003

"... In PAGE 92: ... Occasional high errors may occur due to a single bad landmark; the multi-robot SLAM approach prevents single measurements from producing bad landmarks and leading to poor localization. This is demonstrated by Table 16; the first number indicates pose error (distance from actual location), and the second indicates heading error magnitude. ..."

Cited by 8

### Table VII. The Effect of Small Shifts on Different Representations of the Roadline Map

### Table 2: Server statistics: IPC, branch misprediction percentage, and cache miss rates for L1 and L2 caches, divided into instruction and data misses (64 players, small map).

2001

"... In PAGE 9: ... takes place where relatively large data structures (e.g., those representing the 3D world) are continuously traversed. Table 2 reports the resulting IPC, branch misprediction rate, and the instruction and data miss rates for the L1 and L2 caches. The branch misprediction rate is very high, about 14%.... ..."

Cited by 9


### TABLE IV Computational complexity of the methods. Here N denotes the number of data samples, M the number of map units in the small map, and d the dimensionality of the input vectors. It has been assumed that the number of map units in the final map is chosen to be proportional to the number of data samples.

2000

Cited by 145

### Table 10.2: Computational complexity of the methods. Here N denotes the number of data samples, M the number of map units in the small map, and d the dimensionality of the input vectors. It has been assumed that the number of map units in the final map is chosen to be proportional to the number of data samples.

### Table 2 Computational complexity of the methods. Here N denotes the number of data samples, M the number of map units in the small map, and d the dimensionality of the input vectors. It has been assumed that the number of map units in the final SOM is chosen to be proportional to the number of data samples.

"... In PAGE 12: ... 4.2 Comparison of the computational complexity. For very large maps the difference in the computation times is even more marked than in Table 1, but can only be deduced from the computational complexities given in Table 2 (for details see [10]); in our largest experiments so far the theoretical speed-up was about O(d), that is, about 50,000-fold. In practice the speed-up is even larger, since most of the methods reported in this section only reduce the (unknown) coefficients of the terms of Table 2. ..."
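The O(d) speed-up quoted above can be illustrated with a small arithmetic sketch. This is not code from the cited paper: the cost models O(N·M·d) versus O(N·M) and all concrete values of N, M, and d are assumptions chosen only so that the ratio matches the "about 50,000-fold" figure in the snippet.

```python
# Hedged sketch: how an O(d)-fold theoretical speed-up arises if a method's
# dominant cost drops from O(N*M*d) to O(N*M). All numbers are illustrative
# assumptions; d = 50_000 is chosen to match the quoted ~50,000-fold figure.
N = 1_000_000   # number of data samples (assumed)
M = 10_000      # number of map units in the small map (assumed)
d = 50_000      # dimensionality of the input vectors (assumed)

full_cost = N * M * d    # hypothetical baseline: distance cost scales with d
reduced_cost = N * M     # hypothetical reduced method: O(1) per distance

print(full_cost // reduced_cost)  # prints 50000, i.e. a d-fold speed-up
```

In practice, as the snippet notes, the observed speed-up can exceed this ratio, since the hidden constant factors of the remaining O(N·M) term may also shrink.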