### Table 1. Extrapolateability to New Regions of Nuclei

"... In PAGE 6: ... error for 217 newly measured masses21) is 0.737 MeV, corresponding to an increase of 15%. Some caution should again be exercised when extrapolating this model to new regions of nuclei. These theoretical errors are summarized in Table 1, where we also include, because of its frequent use in astrophysical calculations, the 1976 mass formula of von Groote, Hilf, and Takahashi.16) For this mass formula with empirical shell terms whose parameters are extracted from adjustments to experimental masses, the theoretical error for 217 newly measured masses21) increases by 104% relative to the theoretical error for 1323 nuclei whose masses were known experimentally in 1977. ... In PAGE 6: ... Finally, for one particular neural network of Gernoth, Clark, Prater, and Bohr that had not been pruned for improved extrapolateability,18) the increase in root-mean-square error for newly measured masses relative to that for masses included in the original training is 772%. These last two examples are not included in Table 1 because the regions of newly measured masses are different from those in the table and because we have available only the root-mean-square errors, which are contaminated by contributions from experimental errors. 4. ... ..."

### Table 1. Approximate mean total number of cells analyzed per mouse by brain region

"... In PAGE 9: ... Four adjacent sections were analyzed bilaterally for each brain region from each mouse. Approximate mean total numbers of cells analyzed per mouse for each brain region are given in Table 1. There were a total of 32 mice used for catFISH (4 mice × 2 genotypes × 4 conditions). ... ..."

### Table 1. WDBC/WPBC cell nuclei characteristics attributes

"... In PAGE 5: ... The topology of the prognosis neural network was 14-193-4-4. The input layer consists of 14 nodes, which correspond to the prognosis status, the TTR or the DFS time, the ten cell nuclei characteristics attributes of Table 1, the diameter of the excised tumour, and the number of positive axillary lymph nodes observed at the time of surgery. It must be noted that, due to the small number of WPBC instances, the standard error and the worst values from the ten real-valued features were removed in order to avoid the curse-of-dimensionality problem during the training phase of the artificial neural network. ... ..."

### Table 1. Rates of segmentation results for the nuclei and the cytoplasm of serous cells

"... In PAGE 9: ... Thanks to the visual inspection process, all incorrectly segmented cells have been listed. The results of the segmentation process for each individual expert are presented in Table 1. The percentage of correct segmentation for the cells ranges from 89.... ..."

### Table 1: High-level Summary of the Cascade-Correlation Algorithm.

1991

"... In PAGE 2: ... organized into distinct layers, but rather are cascaded; i.e., they are arranged in a sequence wherein each hidden unit receives activation from all input units and all hidden units earlier in the sequence. The cascade-correlation architecture supports a variety of learning algorithms, but the algorithm specifically described by Fahlman and Lebiere [4], outlined in Table 1, works as follows. An initial, minimal network under the cascade-correlation architecture (a network that has only the input and output units required by the task at hand) is trained completely using the quickprop algorithm [3]. ... ..."
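The cascading arrangement described in the excerpt can be sketched as a forward pass in which each hidden unit sees the input units plus the activations of every hidden unit added before it. This is an illustrative sketch only; the unit counts, weights, and `tanh` activation here are our assumptions, not details from the cited paper:

```python
import math

def cascade_forward(inputs, hidden_weights, output_weights):
    """Forward pass of a cascade-correlation network.

    There are no distinct hidden layers: each hidden unit receives the
    input units AND the activations of all earlier hidden units, so its
    weight vector is one element longer than the previous unit's.
    """
    activations = list(inputs)            # grows as hidden units fire
    for w in hidden_weights:              # one weight vector per hidden unit
        net = sum(wi * ai for wi, ai in zip(w, activations))
        activations.append(math.tanh(net))
    # Output units see the inputs and every hidden unit.
    return [sum(wi * ai for wi, ai in zip(w, activations))
            for w in output_weights]

# Toy example: 2 inputs, 2 cascaded hidden units, 1 output.
out = cascade_forward(
    inputs=[1.0, -0.5],
    hidden_weights=[[0.3, 0.7],              # sees the 2 inputs
                    [0.1, -0.2, 0.5]],       # sees 2 inputs + hidden unit 1
    output_weights=[[0.2, 0.1, 0.4, 0.6]])   # sees inputs + both hidden units
```

In the actual algorithm, each new hidden unit's incoming weights are trained (e.g. with quickprop) to maximize the correlation between its output and the residual network error before the unit is frozen and added to the cascade; the sketch above shows only the resulting connectivity.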

Cited by 9

### Table 4. Throughput of TCP over ATM using AAI Network

in Using Measurements to Validate Simulation Models of TCP/IP over High Speed ATM Wide Area Networks

1996

"... In PAGE 4: ... A TCP window size of 256 KB is used. The results are shown in Table 4. Once more, the results show that simulations can accurately predict the performance of complex high speed ATM wide area networks. ... ..."

Cited by 7

### Table 2: Neural cell definitions

1995

"... In PAGE 5: ... When simulated using the simple yet general-purpose neural network simulator described in the text, this network performs the error back-propagation algorithm. Each cell is labelled by its type (see Table 2) and its partial order (all cells ordered 0 are evaluated, then those labelled 1, and so on). The inputs are at the left of the diagram, and the outputs at the right. ... ..."
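The partial-order evaluation scheme in the excerpt (all cells of order 0 first, then order 1, and so on) can be sketched as follows. The cell names and update functions are hypothetical, invented purely for illustration:

```python
def evaluate(cells, state):
    """Evaluate network cells in ascending partial order.

    `cells` maps a cell name to (order, update_fn). Cells that share an
    order do not depend on one another, so their relative sequence is
    irrelevant; sorting by order alone is sufficient.
    """
    for name, (order, fn) in sorted(cells.items(), key=lambda kv: kv[1][0]):
        state[name] = fn(state)
    return state

# Hypothetical three-cell chain: input -> hidden -> output.
cells = {
    "in":  (0, lambda s: s["x"]),        # order 0: reads the external input
    "hid": (1, lambda s: 2 * s["in"]),   # order 1: uses the order-0 result
    "out": (2, lambda s: s["hid"] + 1),  # order 2: uses the order-1 result
}
result = evaluate(cells, {"x": 3})
# result["out"] is 7
```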

Cited by 15

### Table 1: The quantitative comparison of accurate segmentation ratio between different segmentation methods

2003

"... In PAGE 6: ... Even in a very noisy situation, such as a 7% noise level, there are only minor artifacts in our segmentation results in Figure 3(l). Table 1 shows the quantitative results of the segmentation ratio, which is defined as the ratio of the number of accurately labeled voxels over the total number of voxels. ... ..."
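The segmentation ratio defined in the excerpt (accurately labeled voxels over all voxels) amounts to a simple per-voxel agreement count. A minimal sketch, with hypothetical flattened label volumes:

```python
def segmentation_ratio(predicted, ground_truth):
    """Fraction of voxels whose predicted label matches the ground truth."""
    assert len(predicted) == len(ground_truth)
    correct = sum(p == g for p, g in zip(predicted, ground_truth))
    return correct / len(ground_truth)

# Hypothetical flattened label volumes (0 = background, 1/2 = tissue classes).
pred  = [0, 1, 1, 2, 2, 0, 1, 2]
truth = [0, 1, 2, 2, 2, 0, 1, 1]
ratio = segmentation_ratio(pred, truth)  # 6 of 8 voxels agree -> 0.75
```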

Cited by 1

### Table 1. Comparison of learning time between two neural networks

2005

"... In PAGE 4: ...GBNN. Fig. 4 shows the segmentation result of the three-layer RBFNN, which uses a competitive learning algorithm at the first layer and the gradient-descent learning algorithm at the second layer. Table 1 compares the learning time of the two neural networks, and Table 2 compares the segmentation accuracy of the two methods. ... In PAGE 5: ... From the results of Table 1, we can see that the learning speed of the FGBNN is faster than that of the three-layer RBFNN. From the results of Table 2, we can also conclude that the segmentation accuracy of the FGBNN is higher than that of the three-layer RBFNN. ... ..."

### Table 1 Neural network architectures

2003

"... In PAGE 6: ... better as the scale is increased, i.e. as the data becomes smoother. On the final smooth trend curve, resid(t) in Table 1, a crude linear extrapolation estimate, i.e. ... In PAGE 6: ... wavelet coefficients at higher frequency levels (i.e. lower scales) provided some benefit for estimating variation at less high frequency levels. Table 1 summarizes what we did, and the results obtained. DRNN is the dynamic recurrent neural network model used. The architecture is shown in Fig. 3. The memory order of this network is equivalent to applying a time-lagged vector of the same size as the memory order. Hence the window in Table 1 is the equivalent lagged vector length. In Table 1, NMSE is normalized mean squared error, DVS is direction variation symmetry (see above), and DS is directional symmetry, i.e. ... In PAGE 7: ...ion of these results can be found in Ref. [4]. From Table 1, we saw how these windows were of effective length 10, 15, 20, and 25 in terms of inputs to be considered. ... ..."
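The time-lagged input windows and the NMSE metric mentioned in the excerpt can be sketched as follows. The function names and toy series are ours; NMSE here follows the usual definition of mean squared error normalized by the variance of the actual series:

```python
def lagged_windows(series, window):
    """Time-lagged input vectors of length `window`, each paired with the
    next value of the series as the prediction target."""
    return [(series[i:i + window], series[i + window])
            for i in range(len(series) - window)]

def nmse(actual, predicted):
    """Normalized mean squared error: MSE divided by the variance of the
    actual series, so 1.0 means no better than predicting the mean."""
    mean = sum(actual) / len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    var = sum((a - mean) ** 2 for a in actual) / len(actual)
    return mse / var

# Window length 3 over a toy series: 3 (input vector, target) pairs.
windows = lagged_windows([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], window=3)
# windows[0] is ([1.0, 2.0, 3.0], 4.0)
error = nmse([1.0, 2.0, 3.0], [1.5, 2.0, 2.5])
```

The windows of effective length 10, 15, 20, and 25 cited in the excerpt would be produced the same way, just with a larger `window` argument.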