### Table 2. Neural network performance comparison.

"... In PAGE 5: ... Feedforward multi-layer perceptron (MLP) and Elman networks of varying complexity were used and tested on a validation set of 784 independent samples. Table 2 shows the obtained results. Although both networks have similar performance, the Elman recurrent network, with 10 hidden neurons and a tan-sigmoidal activation function, exhibits lower training times, converging more rapidly to the desired error value.... ..."
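The Elman architecture referenced in this excerpt feeds the hidden state back as context at the next time step. A minimal forward-pass sketch, assuming 10 tanh ("tan-sigmoidal") hidden units as in the excerpt; the input/output dimensions, weight scales, and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def elman_step(x, h_prev, W_in, W_rec, W_out, b_h, b_o):
    """One step of an Elman (simple recurrent) network.

    The previous hidden state h_prev feeds back through W_rec;
    tanh plays the role of the tan-sigmoidal activation.
    """
    h = np.tanh(W_in @ x + W_rec @ h_prev + b_h)  # hidden state with context
    y = W_out @ h + b_o                           # linear readout
    return y, h

# Illustrative dimensions: 3 inputs, 10 hidden neurons, 1 output.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 10, 1
W_in  = rng.normal(scale=0.1, size=(n_hid, n_in))
W_rec = rng.normal(scale=0.1, size=(n_hid, n_hid))
W_out = rng.normal(scale=0.1, size=(n_out, n_hid))
b_h, b_o = np.zeros(n_hid), np.zeros(n_out)

h = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):  # a short input sequence
    y, h = elman_step(x, h, W_in, W_rec, W_out, b_h, b_o)
```

Training such a network (e.g. by backpropagation through time) is a separate matter; the sketch only shows the recurrence that distinguishes it from the feedforward MLP.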

### Table 2: Computational Power of Deterministic and Probabilistic Discrete-Time Analog Neural Networks with the Saturated-Linear Activation Function.

"... In PAGE 17: ...4 depends on the descriptive complexity of their weights. The respective results are summarized in Table 2 as presented by Siegelmann (1994), including the comparison with the probabilistic recurrent networks discussed in section 2.... In PAGE 24: ..., 2000). This implies that the results on the computational power of deterministic asymmetric networks summarized in Table 2 are still valid for Hopfield nets with an external oscillator of a certain type. Especially for rational weights, these devices are Turing universal.... In PAGE 26: ...4 (Siegelmann, 1999b). The results are summarized and compared to the corresponding deterministic models in Table 2. Thus, for integer weights, the results coincide with those for deterministic networks (see section 2.... In PAGE 39: ... Figure 2). Furthermore, Table 2, summarizing the results concerning the computational power of recurrent neural networks, shows that the only difference between deterministic and probabilistic models is in polynomial-time computations with rational weights, which are characterized by the corresponding Turing complexity classes P and BPP. This means that, from the computational-power point of view, stochasticity plays a similar role in neural networks as in conventional Turing computations.... ..."

### Table 1: Complexity of the recurrent neural network generated by Algorithm 2.

1995

"... In PAGE 24: ... Boldface transitions have been introduced as escape rules from temporary states. According to Table 1, |N| = m + K + X + 3n and |V| = m(m+1) + K(n+1) + 4X + K + 6n. The numbers K and X depend on the original state transition function.... ..."
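The two counts recovered above can be checked mechanically. A small sketch that evaluates them for given m, n, K, X (the function name and the sample values are illustrative; as the excerpt notes, K and X depend on the original state-transition function):

```python
def network_size(m, n, K, X):
    """Evaluate the excerpt's complexity formulas:
    |N| = m + K + X + 3n
    |V| = m(m+1) + K(n+1) + 4X + K + 6n
    """
    size_N = m + K + X + 3 * n
    size_V = m * (m + 1) + K * (n + 1) + 4 * X + K + 6 * n
    return size_N, size_V
```

For example, with m = 2, n = 3, K = 1, X = 1 the formulas give |N| = 13 and |V| = 33.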

### Table 3: Typical terminal, function and root sets for evolving a recurrent neural network

1999

"... In PAGE 4: ... 4.3 Recurrent Neural Networks To evolve a GPN as a recurrent neural network, further changes are needed to the terminal, function and root sets (see Table 3). Each terminal has an associated weight.... ..."

### Table 1: Neural models of psychiatric disorders: A summary. Abbreviations used above: ANN - attractor neural network; FNN - feed-forward network; RNN - recurrent neural network; ARN - adaptive resonance network; SAN - spreading activation network

"... In PAGE 25: ... These themes include various approaches to studying the role of synaptic changes in the pathogenesis and clinical manifestations of Alzheimer's disease, the study of spurious attractors as possible neural correlates of schizophrenic positive symptoms, and the ability of feed-forward and recurrent network models to quantitatively model human performance in various cognitive tasks, both in normal subjects and in psychiatric patients. Obviously, the studies reviewed in this paper, summarized in Table 1, represent just a beginning. The models presented here all employ... ..."

### Table 1 Neural network architectures

2003

"... In PAGE 6: ... better as the scale is increased, i.e. as the data becomes smoother. On the final smooth trend curve, resid(t) in Table 1, a crude linear extrapolation estimate, i.e.... In PAGE 6: ... wavelet coefficients at higher frequency levels (i.e. lower scales) provided some benefit for estimating variation at less high frequency levels. Table 1 summarizes what we did, and the results obtained. DRNN is the dynamic recurrent neural network model used.... In PAGE 6: ... used. The architecture is shown in Fig. 3. The memory order of this network is equivalent to applying a time-lagged vector of the same size as the memory order. Hence the window in Table 1 is the equivalent lagged vector length. In Table 1, NMSE is normalized mean squared error, DVS is direction variation symmetry (see above), and DS is directional symmetry, i.e.... In PAGE 7: ...ion of these results can be found in Ref. [4]. For further work involving the DRNN neural network resolution scale. From Table 1, we saw how these windows were of effective length 10, 15, 20, and 25 in terms of inputs to be considered. Fig.... ..."
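NMSE and DS are named but not defined in the excerpt. The following are their common definitions, which may differ in detail from the paper's own:

```python
import numpy as np

def nmse(y_true, y_pred):
    """Normalized mean squared error: MSE divided by the variance of
    the target series, so a constant mean predictor scores NMSE = 1."""
    return float(np.mean((y_true - y_pred) ** 2) / np.var(y_true))

def directional_symmetry(y_true, y_pred):
    """Fraction of steps where the prediction moves in the same
    direction as the actual series (one common reading of DS)."""
    return float(np.mean(np.sign(np.diff(y_true)) == np.sign(np.diff(y_pred))))
```

A perfect forecast gives NMSE = 0 and DS = 1, while predicting the series mean everywhere gives NMSE = 1, which is why values below 1 indicate a model that beats the trivial baseline.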

### Table 1: Results of the experiments with neural networks

Recurrent neural networks have the best performance for the Reference voltage and FBM data sets. Both data sets represent time series with very fast-changing values and no long-term trend. The recurrent neural network has the worst performance for the time series with a trend. In that case the MP and FIR-MP networks better identify the underlying system, as we can see from the results of the experiments for Lorenz and

"... In PAGE 7: ...ection) were used. The best equation was then chosen as the one that minimizes the RMSE on the test set. IV.3 Results The results of the experiments for the neural networks and the equation discovery system Lagramge are given in Table 1 and Table 2, respectively. The architecture of the MP neural network is denoted x-y-z, where x, y and z are the numbers of neurons in the first, second and third layer, respectively.... ..."
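The selection procedure described, picking the candidate that minimizes RMSE on a held-out set, can be sketched as follows; the candidate names and data below are illustrative, not from the paper:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between target and prediction."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def best_model(candidates, y_true):
    """Given {name: predictions}, return the name with lowest RMSE."""
    return min(candidates, key=lambda name: rmse(y_true, candidates[name]))

# Illustrative use: two hypothetical candidates on a toy target series.
y = np.array([0.0, 1.0, 2.0])
candidates = {"MP": y + 0.5, "FIR-MP": y + 0.1}
winner = best_model(candidates, y)
```

The same comparison underlies the MP vs. FIR-MP vs. recurrent-network ranking reported in the excerpt, with the per-dataset errors collected in the paper's Table 1.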