Results 1 - 10 of 12,012
Table 1: Properties of Symbolic and Connectionist approaches
"... In PAGE 4: ... Table 1 summarizes a comparative list of properties between symbolic artificial intelligence and neural or connectionist artificial intelligence [14]. Symbolic systems and neural networks have many complementary characteristics (see Table 1) that make them well ... ..."
Cited by 1
Table 1: Comparison of training of connectionist models.
"... In PAGE 13: ... However, the adequate rate between the number of sample points required for training and the number of weights in the network has not yet been clearly defined; it is difficult to establish, theoretically, how many parameters are too many for a given sample size. Table 1 summarizes the architecture of the different network models used in this study. The training convergence of MLP and ERNN are illustrated in Figures 6 (a) and (b), respectively.... ..."
Table 1. Recognition performance of the hybrid connectionist system on the VMail15
2001
Cited by 6
Table 2. Assumptions, strengths and analogies of cognitive neuropsychology and connectionist models
Table 5: The coarse-coded representation used in the repeated experiment. The variety measure for this `supergroup' is W(') = 16.833862.
5 Distributed representations
The representation technique described earlier is localist (only one bit is set in each of the input subvector patterns). The results show that, contrary to the claims of some symbolists, e.g. (Fodor and Pylyshyn, 1988), localist representations can produce an internal representation which implies and supports systematic processing. However, localist representations are frequently criticized by connectionists, who generally prefer distributed representations where many bits can participate in the coding of constituents. The experiment is therefore repeated using coarse coding (Hinton et al., 1986). The network is modified to contain 15-8-15 units. The patterns produced are provided in Table 5. These patterns are used to train a feedforward network (with 0.10 absolute maximum error as convergence criterion). These patterns created the hyperplane groups described in Table 6. The internal representation of the higher-order group 6
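The snippet above contrasts localist coding, where exactly one bit identifies each constituent, with coarse coding (Hinton et al., 1986), where several overlapping bits share the coding. A minimal illustrative sketch of that difference, not taken from the cited paper — the helper names and the receptive-field width of 3 are arbitrary choices for illustration:

```python
def localist(index: int, size: int) -> list[int]:
    """One-hot (localist) vector: a single bit is set per constituent."""
    v = [0] * size
    v[index] = 1
    return v

def coarse_coded(index: int, size: int, width: int = 3) -> list[int]:
    """Coarse-coded vector: several adjacent bits (a 'receptive field')
    participate in the coding, so neighbouring constituents overlap."""
    v = [0] * size
    for offset in range(width):
        v[(index + offset) % size] = 1
    return v

print(localist(2, 8))      # [0, 0, 1, 0, 0, 0, 0, 0]
print(coarse_coded(2, 8))  # [0, 0, 1, 1, 1, 0, 0, 0]
```

Because nearby constituents share active bits under coarse coding, similar inputs receive similar patterns, which is the usual argument for distributed over localist representations.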
Table 3: Performances observed for each odour with the connectionist approach and the discriminant analysis
"... In PAGE 10: ...cores [χ²(1,5) = 8.62, P > 0.05]. Thus, personal memories did not seem to influence the recognition performances differently, whatever the correctness of the subject's responses. Connectionist approach The performances obtained from the connectionist approach, when all data are taken into account, are depicted in Table 3 (complete data on left). The number of cells in this hidden layer varied from 3 to 33 according to the subjects' difficulty in discriminating between both odours Memories Figure 7 Frequency in percentage of memories for the nine odorants as a function of the correct (solid line) or incorrect (grey column) recognition scores Due to the very small number of errors for pairs 1, 4, and 5, only the profile obtained when the recognition scores were correct is represented.... In PAGE 10: ... To determine the relative importance of the descriptors in the olfactory performance of recognition memory, intensity, familiarity and hedonic criteria were suppressed. Thus, between 15 and 22 descriptors were submitted to new analyses (Table 3, reduced data). The number of cells in the hidden layer varied from 4 to 43.... ..."
Table 6: Recognition rates for the different connectionist-SCHMM approaches for multiple features.
"... In PAGE 6: ... These were MLP307 for the weighted and MLP506 for the delta cepstrum. The recognition results of this approach (Table 6) were rather disappointing. Although the performance on the training set was slightly improved compared to the weighted cepstrum alone, the performance on the test set was clearly worse.... In PAGE 6: ... The next attempt therefore utilized the best MLPs with context size 1 for each of the two features. This approach drastically improved the recognition performance over the first attempt, as can be seen from Table 6, and even outperformed the one-MLP approach by a clear margin. The comparison of the best connectionist-SCHMM approach with the classic HMM systems and the LVQ3 approach is given in Table 7.... ..."
Table 5. Word error rates on eval98 with standard 3-gram and connectionist LMs.
2002
"... In PAGE 4: ...3 to 45.8% without adaptation is obtained with this LM (see Table 5). Note that a 1% absolute error reduction is not easy to obtain on the HUB5 task even though the word error rate is quite high.... ..."
Cited by 9