### Table 1: Neural models of psychiatric disorders: A summary. Abbreviations used above: ANN - attractor neural network; FNN - feed-forward network; RNN - recurrent neural network; ARN - adaptive resonance network; SAN - spreading activation network

"... In PAGE 25: ... These themes include various approaches to studying the role of synaptic changes in the pathogenesis and clinical manifestations of Alzheimer's disease, the study of spurious attractors as possible neural correlates of schizophrenic positive symptoms, and the ability of feed-forward and recurrent network models to quantitatively model human performance in various cognitive tasks, both in normal subjects and in psychiatric patients. Obviously, the studies reviewed in this paper, summarized in Table 1, represent just a beginning. The models presented here all employ... ..."

### Table 1 Set of fingerprints used for training

2003

"... In PAGE 8: ... Though the rings in the modified RWD carry redundant information, we have retained them as they help in identifying variations caused by the plasticity of the skin. A symbolic representation of the data set used for training the neural network is given in Table 1, and that used for testing the generalization is given in Table 2. The eight thumbprints chosen to train the network were such that four were supposed to be recognized and belonged to the same subject, and four were supposed to be rejected and belonged to different subjects.... ..."

### Table 1: Classification performance of second-order recurrent neural network.

"... In PAGE 2: ... We extracted from the SWISS-PROT database 873 genuine globin sequences, two thirds of which were used for training and one third with non-globin sequences for testing. Experimental results (illustrated in Table 1) show that the trained network is able to distinguish members of the globin family from non-members with a high degree of accuracy. Table 1: Classification performance of second-order recurrent neural network.... ..."
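For readers unfamiliar with the architecture named in this caption: a second-order recurrent network updates each state unit from *products* of the previous state and the current input symbol, rather than from their sum. The following NumPy sketch is purely illustrative (the function name and shapes are my own assumptions, not code from the paper):

```python
import numpy as np

def second_order_rnn(W, x_seq, h0):
    """Run a second-order recurrent network over an input sequence.

    W has shape (H, H, I): one weight per (new-state, old-state, input) triple.
    State update: h_t[j] = sigmoid( sum_{i,k} W[j, i, k] * h_{t-1}[i] * x_t[k] ).
    Returns the final state vector; a classifier would threshold one unit of it.
    """
    h = h0
    for x in x_seq:
        pre = np.einsum('jik,i,k->j', W, h, x)  # second-order (multiplicative) term
        h = 1.0 / (1.0 + np.exp(-pre))          # sigmoid squashing
    return h

# Illustrative run on random data (not globin sequences).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3, 4))        # 3 state units, 4-symbol one-hot-ish input
x_seq = rng.normal(size=(5, 4))       # sequence of length 5
h_final = second_order_rnn(W, x_seq, np.full(3, 0.5))
```

For sequence classification, as in the globin experiment quoted above, one state unit is typically designated the accept/reject unit and compared against 0.5 after the whole sequence has been consumed.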

### Table 1 Neural network architectures

2003

"... In PAGE 6: ... better as the scale is increased, i.e. as the data becomes smoother. On the final smooth trend curve, resid(t) in Table 1, a crude linear extrapolation estimate, i.e.... In PAGE 6: ... wavelet coefficients at higher frequency levels (i.e. lower scales) provided some benefit for estimating variation at less high frequency levels. Table 1 summarizes what we did, and the results obtained. DRNN is the dynamic recurrent neural network model used.... In PAGE 6: ... used. The architecture is shown in Fig. 3. The memory order of this network is equivalent to applying a time-lagged vector of the same size as the memory order. Hence the window in Table 1 is the equivalent lagged vector length. In Table 1, NMSE is normalized mean squared error, DVS is direction variation symmetry (see above), and DS is directional symmetry, i.... In PAGE 6: ... Hence the window in Table 1 is the equivalent lagged vector length. In Table 1, NMSE is normalized mean squared error, DVS is direction variation symmetry (see above), and DS is directional symmetry, i.e.... In PAGE 7: ...tion of these results can be found in Ref. [4]. For further work involving the DRNN neural network resolution scale. From Table 1, we saw how these windows were of effective length 10, 15, 20, and 25 in terms of inputs to be considered. Fig.... ..."
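The NMSE and DS metrics named in the snippet are standard forecasting measures. A minimal sketch of how they are conventionally computed (my own formulation, not code from the paper; function names are hypothetical):

```python
import numpy as np

def nmse(y_true, y_pred):
    """Normalized mean squared error: MSE divided by the variance of the target.
    0 = perfect fit; 1 = no better than predicting the target's mean."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

def directional_symmetry(y_true, y_pred):
    """Fraction of time steps at which the predicted change has the same
    sign as the actual change (the 'DS' of the table)."""
    dt = np.diff(np.asarray(y_true, dtype=float))
    dp = np.diff(np.asarray(y_pred, dtype=float))
    return float(np.mean(np.sign(dt) == np.sign(dp)))
```

For example, predicting the series' mean everywhere yields NMSE of exactly 1, which is why values well below 1 in a table like this indicate the model captured real structure.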

### Table 1: Complexity of the recurrent neural network generated by Algorithm 2.

1995

"... In PAGE 24: ... Boldface transitions have been introduced as escape rules from temporary states. According to Table 1, |N| = m + K + X + 3n and |V| = m(m+1) + K(n+1) + 4X + K + 6n. The numbers K and X depend on the original state transition function.... ..."

Cited by 14

### Table A.1: Parameters of Dynamic Neural Network Magnetization Model

### Table 2. Neural network performance comparison.

"... In PAGE 5: ... Feed-forward multi-layer perceptron (MLP) and Elman networks, with different complexity, were used and tested on a validation set formed by 784 independent samples. Table 2 shows the obtained results. Although both networks have similar performances, the Elman recurrent network, with 10 hidden neurons and tan-sigmoidal activation function, exhibits lower training times, converging more rapidly to the desired error value.... ..."
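For context on the Elman architecture mentioned in the snippet: an Elman network is an MLP whose hidden layer also receives a copy of its own previous activation through "context" units. A minimal single-step sketch under my own assumptions (not the authors' implementation; all names are hypothetical):

```python
import numpy as np

def elman_step(Wx, Wh, Wo, bh, bo, x, h_prev):
    """One time step of an Elman recurrent network.
    Wx: input-to-hidden weights, Wh: context(hidden)-to-hidden weights,
    Wo: hidden-to-output weights, bh/bo: biases."""
    h = np.tanh(Wx @ x + Wh @ h_prev + bh)  # tan-sigmoidal hidden layer
    y = Wo @ h + bo                         # linear output layer
    return y, h

# Illustrative step with a 10-hidden-neuron network, as in the snippet.
rng = np.random.default_rng(1)
Wx = rng.normal(size=(10, 3))
Wh = rng.normal(size=(10, 10))
Wo = rng.normal(size=(1, 10))
y, h = elman_step(Wx, Wh, Wo, np.zeros(10), np.zeros(1),
                  rng.normal(size=3), np.zeros(10))
```

Processing a sequence just means calling `elman_step` repeatedly, feeding each returned `h` back in as `h_prev`; the context connection is what distinguishes it from a plain feed-forward MLP.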

### Table 1: Results of the experiments with neural networks. The recurrent neural network has the best performance for the Reference voltage and FBM data sets. Both data sets represent time series with very fast-changing values and no long-term trend. The recurrent neural network has the worst performance for the time series with trend. In that case the MP and FIR-MP networks better identify the underlying system, as we can see from the results of the experiments for the Lorenz and ...

"... In PAGE 7: ...ection) were used. The best equation was then chosen as the one that minimizes the RMSE on the test set. IV.3 Results The results of the experiments for neural networks and the equation discovery system Lagramge are given in Table 1 and Table 2, respectively. The architecture of the MP neural network is represented as x-y-z, where x, y, and z denote the numbers of neurons in the first, second, and third layer, respectively.... ..."

### Table 1. Fingerprint classification rate on the NIST-4 database

2005

"... In PAGE 8: ... The remaining 3,000 fingerprints constitute the independent Test set 1, and the subset of Test set 1 consisting of the last 2,000 fingerprints of the database is termed Test set 2. The classification rates obtained on the various data sets are summarized in Table 1, where GED refers to the graph edit distance approach proposed in this paper; MASKS, RNN, and GM refer to graph matching approaches reported in [18] using dynamic masks, recursive neural networks, and graph edit distance, respectively; whereas MLP refers to a non-structural neural network approach [19]. From the experimental results we find that the proposed method performs clearly better than the best graph matching approach reported in [18].... ..."

Cited by 2