### Table 1: Weight discretization in multilayer neural networks: off-chip learning.

"... In PAGE 4: ... neural network paradigms. A compact overview of a large variety of results on the effects of limited precision in neural networks can be found in Tables 1 to 4. These tables list the number of bits that are required for satisfactory (learning) performance and briefly describe the core idea of the algorithms.... In PAGE 4: ... Only the forward propagation pass in the recall phase is performed on-chip, which makes these quantization effects amenable to mathematical analysis using a statistical model. Some of the results have been summarized in Table 1, which indicate that the accuracy needed in the on-chip forward pass is around 8 bits. In [Piché-95] a comparison between Heaviside and sigmoidal multilayer networks is given, showing that the weight precision required in a Heaviside network is much higher and even doubles when a layer is added to the network.... In PAGE 6: ... algorithms with the entropy (number of bits) upper bounds of the data set [Beiu-96.2]. Finally, we would like to point out that a comparative benchmarking study of quantization effects on different neural network models and the improvements that can be obtained by weight discretization algorithms has not yet been done. The accuracies listed in Tables 1 to 4 are therefore highly biased by... ..."
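The weight-discretization effects this excerpt surveys can be illustrated with a minimal sketch of uniform quantization: snap each weight to the nearest of `2**n_bits` evenly spaced levels and watch the rounding error shrink as the bit budget grows. This is a generic scheme for illustration, not the exact method of any surveyed algorithm.

```python
import numpy as np

def quantize_weights(w, n_bits):
    """Uniformly snap weights to 2**n_bits evenly spaced levels
    spanning the weight range (a generic discretization sketch)."""
    levels = 2 ** n_bits
    w_min, w_max = float(w.min()), float(w.max())
    step = (w_max - w_min) / (levels - 1)
    # Round each weight to the nearest representable level.
    return w_min + np.round((w - w_min) / step) * step

rng = np.random.default_rng(0)
w = rng.normal(size=1000)
for bits in (4, 8):
    err = float(np.abs(w - quantize_weights(w, bits)).max())
    print(f"{bits}-bit: max rounding error {err:.4f}")
```

Doubling the bit width from 4 to 8 shrinks the worst-case rounding error by roughly a factor of 16, which is consistent with the survey's observation that around 8 bits suffice for the forward pass.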

### Table 2. The learning process results for the recognition neural network

"... In PAGE 7: ... Using the three implemented algorithms (Rprop, Batch Backpropagation and On-Line Backpropagation), the topology 4800X10X3 proved to be a suitable topology for this recognition task. This topology means: 4800 neurons in the input layer (80X60 binary pixels), 10 neurons in the hidden layer and 3 neurons in the output layer (X, Y and P). Table 2 presents the results of applying the cross-validation technique (10 folds) to the selected neural network (4800X10X3) using the three implemented algorithms. In the table we can observe that the results of the algorithms were almost the same, with the advantage going to the Rprop algorithm, which was faster than the others.... ..."
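The 10-fold cross-validation protocol mentioned in the excerpt can be sketched generically: shuffle the samples, split them into 10 folds, train on 9 folds and score on the held-out one, then average. The `train`/`evaluate` callables below are placeholder stand-ins, not the paper's networks.

```python
import numpy as np

def kfold_indices(n_samples, k=10, seed=0):
    """Split shuffled sample indices into k roughly equal folds."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    return np.array_split(idx, k)

def cross_validate(train_fn, eval_fn, X, y, k=10):
    """Train on k-1 folds, score on the held-out fold, average the scores."""
    scores = []
    for test_idx in kfold_indices(len(X), k):
        train_idx = np.setdiff1d(np.arange(len(X)), test_idx)
        model = train_fn(X[train_idx], y[train_idx])
        scores.append(eval_fn(model, X[test_idx], y[test_idx]))
    return float(np.mean(scores))

# Tiny demo with a majority-class "model" (a placeholder, not the 4800X10X3 network).
X = np.zeros((50, 3))
y = np.array([0] * 30 + [1] * 20)
train = lambda Xt, yt: int(np.bincount(yt).argmax())
evaluate = lambda m, Xe, ye: float(np.mean(ye == m))
print(cross_validate(train, evaluate, X, y))
```

Averaging over held-out folds, rather than scoring on the training data, is what makes the comparison between Rprop and the two backpropagation variants fair.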

### Table 1. Comparison of learning time between two neural networks

2005

"... In PAGE 4: ...FGBNN. Fig. 4 shows the segmentation result of the three-layer RBFNN, which uses a competitive learning algorithm at the first layer and a gradient-descent learning algorithm at the second layer. Table 1 compares the learning time of the two neural networks, and Table 2 compares the segmentation accuracy of the two methods. Figure 2.... In PAGE 5: ... From the results in Table 1, we can see that the learning speed of the FGBNN is faster than that of the three-layer RBFNN. From the results in Table 2, we can also conclude that the segmentation accuracy of the FGBNN is higher than that of the three-layer RBFNN.... ..."

### Table 4: Weight discretization in other neural network models.

"... In PAGE 5: ...2 Quantization Effects in Other Neural Network Models. The effects of a coarse quantization of the weight values on recall and learning have also been investigated for other neural network models. The small number of weight discretization algorithms proposed can be partly explained by the fact that the required accuracy for successful learning in these models is lower than for gradient-descent learning in multilayer networks (Table 4). An interesting example of a hardware implementation is Bellcore's implementation of a Boltzmann machine and Mean-Field learning, which allows on-chip learning with only 5-bit weights [Alspector-92].... ..."

### Table 1 Taxonomy of neural networks for feature extraction (Mao and Jain, 1995)

in Abstract

"... In PAGE 3: ... An artificial neural network model for conditional segmentation. Neural networks as feature extraction and data projection tools may be classified according to both their mapping functions (linear vs. nonlinear) and their learning methodology (supervised vs. unsupervised) (Mao and Jain, 1995). For segmentation purposes, we typically consult unsupervised methods (see Table 1); for discriminant analysis, on the other hand, supervised networks are applied. In the sequel, the components of our integrated approach will be shown and, finally, the compositional methodology will be presented.... ..."

### Table 3. Comparative analysis of predictive ability for different neural networks

"... In PAGE 13: ... It should be noted that we also tried to leave larger fractions out, but even in the case of leave-two-out models, the predictive ability of the networks (expressed as q²) appeared to be reduced (data not shown). Different neural network architectures (Table 3) were automatically built as implemented in the NeuroSolution program and assessed using the LOO value. LOO works by leaving one data point out of the training set and giving the remaining instances (31 in the case of the CYP3A4 reaction set) to the learning algorithms for training.... In PAGE 13: ... A comparative LOO analysis was conducted on models trained using several different learning algorithms and the entire 24-descriptor set. The resulting values for the average training (r²) and cross-validation (q²) coefficients are reported in Table 3. Among the neural networks tested, modular neural networks with 2 hidden layers provided the best predictive ability.... ..."
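The leave-one-out (LOO) procedure described in the excerpt can be sketched directly: hold out each sample in turn, refit on the rest, accumulate the squared prediction errors (PRESS), and report q² = 1 − PRESS/SS_tot. The least-squares `fit`/`predict` pair and the synthetic data below are placeholder assumptions standing in for the paper's neural network models.

```python
import numpy as np

def loo_q2(fit, predict, X, y):
    """Leave-one-out cross-validation coefficient q^2 = 1 - PRESS / SS_tot."""
    n = len(y)
    press = 0.0
    for i in range(n):
        keep = np.arange(n) != i          # drop sample i from the training set
        model = fit(X[keep], y[keep])
        press += (y[i] - predict(model, X[i])) ** 2
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    return 1.0 - press / ss_tot

# Synthetic 31-instance set (matching the CYP3A4 set size) with a simple
# linear model as the hypothetical learner.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(31, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=31)
fit = lambda Xk, yk: np.linalg.lstsq(np.c_[Xk, np.ones(len(Xk))], yk, rcond=None)[0]
predict = lambda m, x: float(np.append(x, 1.0) @ m)
print(f"q2 = {loo_q2(fit, predict, X, y):.3f}")
```

Because each prediction comes from a model that never saw that sample, q² is typically lower than the training r², which is why the paper reports both.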

### TABLE 3. Performance of neural network implementations on workstations, parallel MIMD/SIMD computers and dedicated neural network hardware (Adaptive Solutions CNAPS).

1995

"... In PAGE 5: ... Cray T3E performance, communication overhead and scale-up measured with a TDNN network (4 layers, 4680 weights) used for promoter site detection, an off-line Backpropagation learning algorithm and 3157 training patterns. Column headers: System | Software | Performance [MCUPS] | Comments. TABLE 3. Performance of neural network implementations on workstations, parallel MIMD/... ..."
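MCUPS (millions of connection updates per second) is the standard throughput metric in this kind of benchmark: during training, every weight is updated once per pattern presentation. A minimal sketch using the excerpt's network figures; the epoch count and wall-clock time below are hypothetical placeholders, not measured values.

```python
def mcups(n_weights, n_patterns, n_epochs, seconds):
    """Millions of Connection Updates Per Second: each weight is updated
    once per pattern presentation during training."""
    updates = n_weights * n_patterns * n_epochs
    return updates / seconds / 1e6

# 4680 weights and 3157 patterns are from the excerpt; 100 epochs and
# 60 s of wall-clock time are made-up placeholders for illustration.
print(f"{mcups(4680, 3157, 100, 60.0):.1f} MCUPS")
```

Halving the training time doubles the reported MCUPS, which is why the same network and pattern set must be used when comparing workstations, MIMD/SIMD machines and dedicated hardware.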

Cited by 1

### Table 7: Recommendations for Neural Network Use with Education Policy Analysis Questions

in Enhancing our Understanding of the Complexities of Education: "Knowledge Extraction from Data" using

"... In PAGE 22: ... (See Table 6) Table 6: Over- and Under-representation of Asian/Pacific Island Students

| Group | CHI | FIL | JAP | KOR | SEA | PI | SA | WA | ME | OTH |
|-------|-----|-----|-----|-----|-----|----|----|----|----|-----|
| 1 | -1% | 3% | -2% | -4% | 6% | 4% | -5% | -1% | -1% | 1% |
| 2 | -1% | 1% | 4% | -9% | 5% | 1% | -1% | -2% | 4% | -1% |
| 3 | 0% | 0% | 1% | 1% | -3% | -3% | 3% | 1% | 1% | -1% |
| 4 | 9% | -5% | 1% | 2% | -7% | 0% | 0% | -2% | -2% | 4% |
| 5 | -2% | -3% | -1% | 6% | -1% | 0% | 2% | 0% | -1% | -1% |

Similar discrepancies appear among Hispanic subgroups. Table 7 suggests that the pattern of representation of the Hispanic aggregate group was substantially driven by the distribution of Mexican (MEX) students. Cuban students, to the contrary, were more likely to be found grouped with Asian/Pacific Island or White students than their Hispanic, Mexican counterparts.... In PAGE 22: ... Table 7: Over- and Under-representation of Hispanic Students. Group MEX CUB PR OTHH; 1: 4.3% -1.... In PAGE 23: ... Yet, similar problems are likely to occur even when conventional methods are used. Table 7 provides rough guidelines for applying neural networks to problems or questions related to education policy. Broadly speaking, the first two studies presented in this paper point to the particular value of hybrid neural/regression methods that apply neural or genetic algorithm estimation techniques to identify or construct a best-predicting non-linear regression equation.... ..."

### Table 1. Comparison of the HCMAC neural network with the MHCMAC neural network Models

"... In PAGE 15: ... D. Comparison of the HCMAC Neural Network with the MHCMAC Neural Network. Table 1 compares the HCMAC neural network with the MHCMAC neural network in terms of memory requirement, topology structure and input feature assignment approach. Table 1 shows that the memory requirement of the original HCMAC neural network grows with 2 raised to the power of the ceiling logarithm of the input dimensions, but the memory requirement of the MHCMAC neural network grows only linearly with the input feature dimensions. Moreover, the learning structure of the self-organizing HCMAC neural network is expanded based on a full binary tree topology, but the MHCMAC neural network is expanded based on an exact binary tree topology.... ..."
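The memory-growth contrast in the excerpt is easy to see numerically, reading "2 raised to the power of the ceiling logarithm of the input dimensions" as 2^⌈log₂ d⌉ (an interpretation consistent with the full-binary-tree structure, but an assumption on our part):

```python
import math

def hcmac_memory(d):
    """Memory units for the original HCMAC, taken as 2**ceil(log2(d))
    per our reading of the excerpt (unit cost normalized to 1)."""
    return 2 ** math.ceil(math.log2(d))

def mhcmac_memory(d):
    """Memory units for the MHCMAC: linear in the input dimension d."""
    return d

for d in (3, 10, 100, 1000):
    print(f"d={d:4d}  HCMAC={hcmac_memory(d):5d}  MHCMAC={mhcmac_memory(d):5d}")
```

The exponential term rounds d up to the next power of two, so the two curves track each other closely at powers of two but the HCMAC cost can be nearly double the linear cost just above one.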

### Table 5: Options for Neural Networks

1998

"... In PAGE 4: ... All but IBM have advanced learning options and employ cross-validation to govern when to stop. Table 5 summarizes these properties. Table 5: Options for Neural Networks... ..."

Cited by 4