### Table 2. The learning process results for the recognition neural network

"... In PAGE 7: ... By using the three implemented algorithms (Rprop, Batch Backpropagation and On-Line Backpropagation), the topology 4800X10X3 proved to be a suitable topology for this recognition task. This topology means: 4800 neurons in the input layer (80X60 binary pixels), 10 neurons in the hidden layer and 3 neurons in the output layer (X, Y and P). Table 2 presents the results of applying the cross-validation technique (10 folds) to the chosen neural network (4800X10X3) with the three implemented algorithms. In the table we can observe that the results of the algorithms were almost the same, with an advantage to the Rprop algorithm, which was faster than the others.... ..."
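The snippet above can be made concrete with a minimal sketch: splitting a data set into 10 cross-validation folds and running one forward pass through a 4800-10-3 sigmoid MLP. This is an illustrative reconstruction, not the paper's implementation; the weight initialization and sigmoid activations are assumptions.

```python
import numpy as np

def kfold_indices(n_samples, k=10, seed=0):
    """Split sample indices into k disjoint folds (as in 10-fold cross-validation)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, k)

def forward(x, W1, b1, W2, b2):
    """One forward pass through a 4800-10-3 sigmoid MLP."""
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))      # hidden layer: 10 neurons
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # output layer: X, Y and P

# Shapes matching the 4800X10X3 topology (80X60 binary pixels as input).
rng = np.random.default_rng(1)
W1, b1 = rng.normal(0, 0.01, (4800, 10)), np.zeros(10)
W2, b2 = rng.normal(0, 0.01, (10, 3)), np.zeros(3)
x = rng.integers(0, 2, (1, 4800)).astype(float)   # one binary 80x60 image, flattened
y = forward(x, W1, b1, W2, b2)
print(y.shape)
```

In a full cross-validation run, each of the 10 folds would serve once as the test set while the network is trained (by Rprop or backpropagation) on the remaining nine.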

### Table 1: Weight discretization in multilayer neural networks: off-chip learning.

"... In PAGE 4: ... neural network paradigms. A compact overview of a large variety of results on the effects of limited precision in neural networks can be found in Tables 1 to 4. These tables list the number of bits that are required for satisfactory (learning) performance and briefly describe the core idea of the algorithms.... In PAGE 4: ... Only the forward propagation pass in the recall phase is performed on-chip, which makes these quantization effects amenable to mathematical analysis using a statistical model. Some of the results have been summarized in Table 1, which indicates that the accuracy needed in the on-chip forward pass is around 8 bits. In [Piche-95] a comparison between Heaviside and sigmoidal multilayer networks is given, showing that the weight precision required in a Heaviside network is much higher and even doubles when a layer is added to the network.... In PAGE 6: ...algorithms with the entropy (number of bits) upper bounds of the data set [Beiu-96.2]. Finally, we would like to point out that a comparative benchmarking study of quantization effects on different neural network models and the improvements that can be obtained by weight discretization algorithms has not yet been done. The accuracies listed in Tables 1 to 4 are therefore highly biased by... ..."
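The limited-precision forward pass described above can be sketched with a simple uniform quantizer. The symmetric range [-w_max, w_max] and the uniform level spacing are assumptions for illustration; the surveyed works use a variety of discretization schemes.

```python
import numpy as np

def quantize(w, n_bits=8, w_max=1.0):
    """Uniformly quantize weights to n_bits levels over [-w_max, w_max].

    A minimal model of running the recall phase with limited-precision
    weights; the rounding error per weight is bounded by half a step.
    """
    levels = 2 ** n_bits - 1
    step = 2 * w_max / levels
    return np.clip(np.round(w / step) * step, -w_max, w_max)

w = np.array([0.3141, -0.9999, 0.0007])
print(quantize(w, 8))   # fine 8-bit grid: close to the original weights
print(quantize(w, 4))   # coarse 4-bit grid: visibly larger rounding error
```

With 8 bits the quantization step is 2/255, so each weight moves by at most about 0.004, which is consistent with the observation that roughly 8 bits suffice for the on-chip forward pass.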

### TABLE I: Types of Connections in a Fuzzy Neural Network

1993

Cited by 25

### Table 1. Comparison between fuzzy systems and neural networks

"... In PAGE 3: ... This means we are in a classical situation to apply a neural network. Consider Table 1: using a fuzzy system obviously has some benefits over using a neural network. We can interpret a fuzzy system as a system of linguistic rules.... ..."

### Table 2: Average network size for Fuzzy ARTMAP and network size of Ordered Fuzzy ARTMAP with nclust = (number of classes) + 1

1999

"... In PAGE 12: ... Negative percentages imply that the corresponding Fuzzy ARTMAP generalization performance (worst, average, or best) is better than the Ordered Fuzzy ARTMAP generalization performance. In Table 2, we show the size of the network that Ordered Fuzzy ARTMAP created and the average size of the network that Fuzzy ARTMAP created. It is worth pointing out that the sizes of the neural network architectures that Ordered Fuzzy ARTMAP creates range between 0.... In PAGE 12: ... This analysis shows that in both cases the number of operations required is O(PT), where the constant of proportionality in the Ordering Algorithm is approximately equal to n_clust^2, while the constant of proportionality in Fuzzy ARTMAP is approximately equal to sum_{e=1}^{E} n_e, where n_e is the average number of categories in F_2^a during the e-th epoch of training, and E is the average number of epochs needed by Fuzzy ARTMAP to learn the required task. As can be seen in Table 2, there are... ..."

Cited by 6

### Table 5: Fuzzy and neuro-fuzzy software systems.

2003

"... In PAGE 22: ...supports independent rules (i.e., changes in one rule do not affect the result of other rules). FSs and NNs differ mainly in the way they map inputs to outputs, the way they store information, and the way they make inference steps. Table 5 lists the most popular software and hardware tools based on FSs as well as on merged FS and NN methodologies. Neuro-Fuzzy Systems (NFS) form a special category of systems that emerged from the integration of Fuzzy Systems and Neural Networks [65].... ..."

Cited by 2

### Table 5. Performance of neural networks

2005

"... In PAGE 4: ... The training set was split as 80% training and 20% cross-validation. Table 5 reveals the performance of the backpropagation and conjugate gradient algorithms for the directional prediction of Microsoft stocks for different numbers of hidden neurons. Performance of the Mamdani Fuzzy Inference System (FIS) is illustrated in Table 6.... ..."

Cited by 1

### Table 1: Comparison of the proposed algorithm with other fuzzy methods for direct synthesis of fuzzy systems

"... In PAGE 4: ...s 0.0268. Fig. 5.a and Fig. 5.b present the original function and the approximation obtained with 8 fuzzy rules. Table 1 compares the results obtained with our algorithm and those obtained with [10]. ESANN'2001 proceedings - European Symposium on Artificial Neural Networks... ..."

### Table 4. Fuzzy control rule

"... In PAGE 13: ... In Table 4, NL means "Negatively Large", NM means "Negatively Medium", NS means "Negatively Small", ZE means "Zero Equivalence", PS means "Positively Small", PM means "Positively Medium", PL means "Positively Large", S means "Small", M means "Medium", and L means "Large" (Kosko, 1992, and Zurada, 1992). Note that this rule could be generic for all neural networks using back-propagation learning algorithms.... ..."
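A rule table over these linguistic labels can be sketched as a simple lookup from antecedent label pairs to a consequent label. The specific rule entries and the two-input (error, change-of-error) structure below are illustrative assumptions, not the contents of the paper's Table 4.

```python
# Linguistic labels from the snippet: NL ... PL for inputs, S/M/L for the output.
LABELS = ["NL", "NM", "NS", "ZE", "PS", "PM", "PL"]

# Hypothetical rule entries, for illustration only: the real table in the
# source assigns a consequent to every antecedent pair.
RULES = {
    ("NL", "NL"): "L", ("NL", "ZE"): "M", ("ZE", "ZE"): "S",
    ("PS", "NS"): "S", ("PL", "PL"): "L",
}

def rule_output(error_label, delta_error_label):
    """Return the consequent label for a pair of antecedent labels.

    Pairs not listed in RULES fall back to "M" (Medium) in this sketch.
    """
    return RULES.get((error_label, delta_error_label), "M")

print(rule_output("ZE", "ZE"))  # S
```

In a full fuzzy controller, each label would carry a membership function, and the fired rules would be aggregated and defuzzified rather than looked up crisply as here.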

### Table 4: Weight discretization in other neural network models.

"... In PAGE 5: ...2 Quantization Effects in Other Neural Network Models. Also for other neural network models, the effects of a coarse quantization of the weight values on recall and learning have been investigated. The small number of weight discretization algorithms proposed can be partly explained by the fact that the required accuracy for successful learning in these models is lower than for gradient descent learning in multilayer networks (Table 4). An interesting example of a hardware implementation is Bellcore's implementation of a Boltzmann machine and Mean-Field learning, which allows on-chip learning with only 5-bit weights [Alspector-92].... ..."