### Table 2 Comparison of Results With Ordinary Least Squares (OLS) and Tobit Regression Models

"... In PAGE 9: ...LP producing an R2 of .22 with training data and .04 with test data (see Table 1). OLS and Tobit Regression Models OLS and Tobit regression models had lower levels of predictive accuracy than did the best performing neural networks. As indicated in Table 2, R2 for OLS regression was .... In PAGE 10: ...training data and an R2 of .059 on test data (see Table 2). Although not statistically significant (χ² = 11.24, p > .05), the Tobit regression model generated greater predictive accuracy than did OLS regression but lagged the best performing ANN by a considerable margin (R2 of .... ..."

### Table 5 Comparison of performance between multiplicative neuron model and a standard multilayer network for daily currency conversion rate difference (columns: Method, Structure, Parameters, Training MSE, Testing MSE, Epochs)

2006

"... In PAGE 6: ...target and predicted values of the proposed neuron model. A detailed comparison with the existing multilayer network is provided in Table 5, which clearly indicates that the proposed neuron model outperforms existing multilayer neural networks. 5.... ..."
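The excerpt's multiplicative neuron aggregates its inputs with a product of weighted terms rather than the usual weighted sum. A minimal sketch of that idea, the aggregation Π(wᵢ·xᵢ + bᵢ) followed by a sigmoid (all names and values here are hypothetical illustrations, not taken from the paper):

```python
import math

def multiplicative_neuron(x, w, b):
    # Aggregate inputs with a product of affine terms, prod(w_i*x_i + b_i),
    # instead of a weighted sum, then squash with a sigmoid.
    net = 1.0
    for xi, wi, bi in zip(x, w, b):
        net *= wi * xi + bi
    return 1.0 / (1.0 + math.exp(-net))

# Hypothetical example: one neuron with two inputs.
y = multiplicative_neuron([0.5, -0.2], [1.0, 0.8], [0.1, 0.3])
```

The product aggregation lets a single neuron capture interactions between inputs that a single additive neuron cannot, which is the usual motivation for such models.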

### Table 1: Weight discretization in multilayer neural networks: off-chip learning.

"... In PAGE 4: ... neural network paradigms. A compact overview of a large variety of results on the effects of limited precision in neural networks can be found in Tables 1 to 4. These tables list the number of bits that are required for satisfactory (learning) performance and briefly describe the core idea of the algorithms.... In PAGE 4: ... Only the forward propagation pass in the recall phase is performed on-chip, which makes these quantization effects amenable to mathematical analysis using a statistical model. Some of the results have been summarized in Table 1, which indicate that the accuracy needed in the on-chip forward pass is around 8 bits. In [Piche-95] a comparison between Heaviside and sigmoidal multilayer networks is given, showing that the weight precision required in a Heaviside network is much higher and even doubles when a layer is added to the network.... In PAGE 6: ...algorithms with the entropy (number of bits) upper bounds of the data set [Beiu-96.2]. Finally, we would like to point out that a comparative benchmarking study of quantization effects on different neural network models and the improvements that can be obtained by weight discretization algorithms has not yet been done. The accuracies listed in Tables 1 to 4 are therefore highly biased by... ..."
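The surveys tabulated here count how many bits a weight representation needs before performance degrades. A minimal sketch of the operation being studied, uniform quantization of weights to n bits over a fixed range (the function name and the range are assumptions for illustration, not the survey's own scheme):

```python
def quantize_weights(weights, n_bits, w_max=1.0):
    # Uniform quantization: snap each weight to the nearest of 2**n_bits - 1
    # evenly spaced levels spanning [-w_max, w_max].
    levels = 2 ** n_bits - 1
    step = 2.0 * w_max / levels
    return [round(w / step) * step for w in weights]

# With 8 bits (the accuracy the excerpt cites for the on-chip forward pass),
# the rounding error per weight is at most half a quantization step.
q = quantize_weights([0.123456, -0.654321], n_bits=8)
```

Off-chip learning as described above would train with full-precision weights and apply a mapping like this only when loading weights onto the hardware.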

### Table 1. Multilayer composite structures.

### Table 1 Genetic-algorithm encoding of a multilayer neural network spatial interaction model (columns: Bits, Meaning)

1998

"... In PAGE 16: ... On the other hand, using a larger quantity of data to evaluate the CNN may imply that bigger networks could be trained more precisely than smaller ones and, thus, the implicit pruning process would be reluctant to remove links. Encoding Scheme: Table 1 illustrates how a string is built. The string representation has several desirable properties.... ..."
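The implicit pruning the excerpt mentions works because each link's existence is carried in the bit string itself. A toy decoder under assumed conventions, one bit per input-to-hidden link, a 1 keeping the link and a 0 pruning it (the paper's actual Bits/Meaning table is richer than this sketch):

```python
def decode_network(bitstring, n_inputs, n_hidden):
    # Hypothetical decoding: bit h*n_inputs + i controls the link from
    # input i to hidden unit h; return the surviving links.
    assert len(bitstring) == n_inputs * n_hidden
    links = []
    for h in range(n_hidden):
        for i in range(n_inputs):
            if bitstring[h * n_inputs + i] == "1":
                links.append((i, h))
    return links

# Hypothetical chromosome for a 3-input, 2-hidden-unit network.
links = decode_network("101101", n_inputs=3, n_hidden=2)
```

Under such an encoding, crossover and mutation on the string add and remove links directly, which is what lets the genetic algorithm prune the network as a side effect of the search.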

### Table 1 X-Ray specular reflection data of PDA multilayers

"... In PAGE 1: ... The experimental set-up used for the low angle diffraction measurements has been described before.7 The results are given in Table 1. They show all the features of a multilayered assembly and can be interpreted as two phases, I and II, co-existing at the substrate.... In PAGE 1: ... Two different stable structures could be found by minimisation of the potential energy and dynamic simulation at 25 °C. Model X-ray data corresponding to the two experiments we carried out are given for both structures in Table 1 and Fig. 1.... ..."

### Table 1. Three Learned Behavior Network Structures

"... In PAGE 3: ... We expect that consistent structural relationships of different pairs of behaviors, which can express the behavior pattern for the model crayfish in that particular environment, can be discovered. In total, 10 structures were generated after 10 runs of the learning process; 3 of them are shown in Table 1, in which the integers 1, 2, and 3 represent L (weak inhibition), M (medium inhibition), and H (strong inhibition) respectively. From these three structures, some interesting conclusions can be drawn.... In PAGE 4: ... Figure 2. Fitness of Learned Behavior Networks. Figure 2 plots the fitness of the 100 randomly generated behavior networks for the three structures given in Table 1. We can see that all the fitness values for a particular structure appear within a range without any outliers.... ..."

### Table 1 displays time measurements taken while rendering images with the multi-layered hatching technique. This generates images as depicted in Figures 44-48 and in Figures 53-55.

2006

"... In PAGE 92: ... Table 1: Benchmarks of the multi-layered hatching technique.... ..."

### Table 2 Network Structures Used in the Model

1997

Cited by 34