Results 1 - 10 of 209,614
Table 2. Comparative table between continuity concepts in Mixed Reality systems and Multi-platform systems.
Table 1. The Data Sets
2002
"... In PAGE 4: ... In the empirical tests of Section 5 we have used T = 10 evaluation points and W = 10 winner candidates, resulting in a 20-fold speed-up compared to the unwinnowed T -point approximation, but computational time compared to the 1-point approximation was still about 100-fold. 5 Empirical Testing The methods were compared on five different data sets ( Table1 ). The class labels were used as the auxiliary data and the data sets were preprocessed by removing the classes with only a few samples.... ..."
Cited by 4
Table 4: Accuracy by learning method (columns: Learning Method, Accuracy)
"... In PAGE 3: ...1 Compared to SVM-based Local Classifiers We compared the performance of the parser with a parser based on local SVM classifiers (Johansson and Nugues, 2006). Table4 shows the performance of both parsers on the Basque test set. We see that what is gained by using a global method such as OPA is lost by sacrificing the excellent classifica- tion performance of the SVM.... ..."
Table 1: Comparative results for the numeric font learning problem
"... In PAGE 5: ... The termination condition for all algorithms tested is an error value E #14 10 ,1 . Detailed results regarding the training performance of the algorithms are presented in Table1 , where #16 denotes the mean number of gradient or error function evaluations required to obtain convergence, #1B the corresponding standard deviation, Min=Max the minimum and maximum number of gradient or error function evaluations, and #25 denotes the percentage of simulations that converge to a global minimum. Obviously, the use Table 1: Comparative results for the numeric font learning problem... ..."
Table 1. Notation used in global and local ε-insensitive learning.
"... In PAGE 10: ...2. The meaning of the notation used in this algorithm for local and global learning is summarized in Table1 . The algorithm is performed for given values of AY (the insensitivity parameter) and AS (the regularization parameter).... ..."
Table 3. Active learning vs. Passive learning.
2004
"... In PAGE 12: ... This is because Sc uses data catalog statistics to dif- ferentiate among the selection features, and picks those which can be excluded from the target query with high probability. Table3 compares the performance of both active learning strategies for Sphinx with two experiments in which Sphinx is hobbled to become a passive learning sys- tem. These two experiments do not represent valid strategies but are designed to iden- tify bounds, both lower (oracle) and upper (random) on the number of examples an active learning algorithm may require.... ..."
Cited by 2
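The oracle and random bounds described in this entry bracket the two extremes of example selection; active learning sits in between by querying the most informative examples. A minimal self-contained Python sketch of that generic pattern on a 1-D threshold concept (the labeling rule, uncertainty measure, and budgets are my assumptions, not the Sphinx system's):

    import random

    def label_of(x):
        # Hypothetical labeling oracle: the true concept is a threshold at 0.5.
        return int(x >= 0.5)

    def fit(labeled):
        # Fit a 1-D threshold: midpoint between the largest 0-labeled point
        # and the smallest 1-labeled point (fallback if one class is missing).
        zeros = [x for x, y in labeled if y == 0]
        ones = [x for x, y in labeled if y == 1]
        if zeros and ones:
            return (max(zeros) + min(ones)) / 2.0
        return 0.5

    def active_learn(pool, budget):
        # Query the point closest to the current boundary (most uncertain).
        labeled, threshold = [], 0.5
        for _ in range(budget):
            seen = [q for q, _ in labeled]
            x = min((p for p in pool if p not in seen),
                    key=lambda p: abs(p - threshold))
            labeled.append((x, label_of(x)))
            threshold = fit(labeled)
        return threshold

    def passive_learn(pool, budget):
        # Random (passive) selection: the upper bound on labels needed.
        sample = random.sample(pool, budget)
        return fit([(x, label_of(x)) for x in sample])

    pool = [random.random() for _ in range(200)]
    print("active threshold: ", active_learn(pool, 10))
    print("passive threshold:", passive_learn(pool, 10))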
Table 2. Learning style preferences across countries
"... In PAGE 6: ...312 Despite being from different parts of the worlds and cultures, the learning styles of these students are not significantly different. Table2 shows a comparison of percentage of learners with a dominant style against data about other studies using ILS in various countries reported in [8]. In general, Table 2 shows that the learning styles of AUS and UMD students are in similar ranges to those from comparable Universities in the US.... In PAGE 6: ... Table 2 shows a comparison of percentage of learners with a dominant style against data about other studies using ILS in various countries reported in [8]. In general, Table2 shows that the learning styles of AUS and UMD students are in similar ranges to those from comparable Universities in the US. Table 2.... ..."
Table 1: Time used for learning
1994
"... In PAGE 13: ... rdt, grdt and grendel 9 learned exactly the same rules, because rdt and grdt always searches the complete hypothesis space and in our domain grendel is complete, too. Only the time they needed, displayed in Table1 , di#0Bers. foil cannot be compared with the other algorithms.... ..."
Cited by 6
Table 2: Oracle learning compared to autoprune
"... In PAGE 13: ... First, where the error of the pruned ANN is similar to that of the oracle trained ANN, and second, when the number of connections of the pruned ANN is similar to that of the oracle trained ANN. Table2 shows the results of the experiment in order of highest accuracy. The top model is the original 128 hidden node ANN also used as the oracle ANN.... ..."
Table 1. The comparison between the global search results and the learning results
"... In PAGE 5: ...hange. The oscillation of the learning is huge sometime. However, after using the adaptive evolution step, there is not such kind of oscillation. Table1 shows that the learning result is very close to the ideal result done by the exhaust algorithm. It can find the feasible local optimum in much short time.... ..."