### TABLE 2 - Static Row Compression, Sparse Random Connectivity

in Techniques For The Efficient Execution Of Sparse Matrix Neural Network Algorithms On SIMD Machines

### TABLE 6 - Static Column Compression, Sparse Random Connectivity

in Techniques For The Efficient Execution Of Sparse Matrix Neural Network Algorithms On SIMD Machines

### Table 13: The operators of the computer domain.

1994

"... In PAGE 51: ... Similarly, for it to be plugged in, the device must be located within reach of a power outlet. Table 13 shows the operators for this domain. For this domain, Alpine generates the graph shown in Figure 10.... ..."

Cited by 5

### Table 3.1: Size of the various ANNs. Sparse refers to the connections between the input group and the recurrent layer and between the recurrent layer and the output layer. In both sets, the recurrent layer is sparsely connected.

1998

### TABLE 7 - Static Column Compression, Sparse Random Connectivity, with Static Row Compression

in Techniques For The Efficient Execution Of Sparse Matrix Neural Network Algorithms On SIMD Machines
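The three table headings above name static row and column compression schemes for sparse random connectivity matrices, but the excerpts do not reproduce the storage layout itself. As a minimal sketch of the general idea behind row compression, here is a compressed-sparse-row (CSR) style encoding with a matrix-vector product over the compressed form; the CSR framing and variable names are assumptions for illustration, not the paper's own scheme or notation:

```python
def compress_rows(dense):
    """Statically compress a dense matrix row by row (CSR-style sketch).

    Returns the nonzero values, their column indices, and row pointers
    marking where each row's nonzeros begin in `values`.
    """
    values, cols, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                cols.append(j)
        row_ptr.append(len(values))
    return values, cols, row_ptr


def spmv(values, cols, row_ptr, x):
    """Sparse matrix-vector product using only the compressed storage."""
    y = []
    for r in range(len(row_ptr) - 1):
        s = 0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            s += values[k] * x[cols[k]]
        y.append(s)
    return y
```

Column compression is the transposed analogue: nonzeros are grouped by column with a column-pointer array, which changes the memory-access pattern during the product.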

### Table 3: Sparse Graphs (Pentium Pro, seconds)

1999

"... In PAGE 7: ... This requires slightly less overhead than computing the strongly connected components, and appears to work adequately in practice. [Figure 1: Constraints on Dual Change Values] A computational comparison of the single-tree, multiple-tree, and the variable-" methods is given in Table 3. The problem instances are sparse graphs derived from geometric... ..."

Cited by 43

### TABLE III SPARSE NETWORK SCENARIO, NO MOBILITY; E2E IS END-TO-END. CET IS THE CONNECTION EXPIRATION TIME.

2003

Cited by 4

### Table 2.2: Simulation of Sparsely Connected Single Layer Networks. Columns: # of PE's, Total Storage, Time per Update. Linear Array [Kun88]: mN, O(m²N), O(mN). MCC

### Table 2. The classification error for SVDD, autoencoder (ANN) and Parzen classifier, on merged Highleyman classes for different selective sampling methods. The results are averaged over 20 runs. Note the scale differences.

2003

"... In PAGE 5: ...1 on the training set. In Table 2 the averaged results over 20 runs are presented. To see how well a classifier fits the data both errors EI and EII should be considered.... In PAGE 6: ... Because the low confidence region (t(x) < 0.5) and the high confidence region (t(x) ≥ 0.5) for the target class are relatively close to each other compared to the low confidence region (o(x) < 0.5) and the high confidence region (o(x) ≥ 0.5) for the outlier class, almost no difference between performance of the lh and hh methods can be observed. Density based and distance based classifiers: For density estimation classifiers based on Parzen, Gaussian distribution, mixture of Gaussians, or for other types like the nearest neighbor classifier, all selective sampling methods based on distances to a description boundary do not perform well; see Table 2. They spoil the density estimation.... ..."

Cited by 2
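The excerpt above discusses selective sampling methods that choose samples by their distance to a description boundary. As a minimal sketch of one such strategy, the snippet below ranks pool samples by how close the classifier output t(x) lies to the 0.5 decision threshold and keeps the least confident ones; the function name and this exact selection rule are illustrative assumptions, not the paper's lh/hh definitions:

```python
def select_low_confidence(scores, k):
    """Pick the k samples whose classifier output is nearest the 0.5 threshold.

    scores: list of (sample_id, t_x) pairs, where t_x is the classifier
    output in [0, 1]. Returns the ids of the k least-confident samples.
    """
    ranked = sorted(scores, key=lambda pair: abs(pair[1] - 0.5))
    return [sample_id for sample_id, _ in ranked[:k]]
```

As the excerpt notes, boundary-distance selection like this can bias the training sample toward the decision surface, which is exactly what spoils density-estimation classifiers such as Parzen.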

### Table 6. Sparse graphs

1996

"... In PAGE 17: ... We increased the number of nodes and defined the number of edges to be 1.5 times the number of nodes and 2 times |V| (see Table 6) for the unweighted case. Computational studies on randomly generated unweighted graphs showed that our algorithm can solve almost all instances of graphs with at most 40 edges.... ..."

Cited by 34