Results 1 - 10 of 37,329

Table 5: Results on unsupervised learning

in A Learning Framework using Green’s Function and Kernel Regularization with Application to Recommender System
by Chris Ding, Tao Li, Rong Jin, Horst D Simon
"... In PAGE 7: ...(27). The results are given in Table5 . For 4 out of 7 datasets, the accuracy values are improved.... ..."

Table 3: Comparisons for supervised methods. Although the SGNT/SGNN method was mainly developed for unsupervised learning and the class information is only used to assign weights to the attributes (never used in the training process), Table 3 shows that the performance of SGNT is still quite impressive. The speed comparisons in Table 4 again show that the SGNT method is much faster than any of the other methods for supervised learning. The time spent calculating the first and second order information gains for all three MONK's problems is the same, 0.2 seconds, and is not included in the training time given in Table 4.

in Some Performance Comparisons for Self-Generating Neural Tree
by W. X. Wen, A. Jennings, H. Liu, V. Pang
"... In PAGE 4: ...0% Reich amp; Fisher Table 1: Accuracy comparisons for unsupervised learning methods the harder task M2, the performance of SGNT is much better than the others and so is the average performance of SGNT. Actually, the performance of SGNT for the harder problem is even better than many popular supervised learning methods (see Table3 ). As for the training speed, SGNT is signi cantly faster than its competitors.... ..."

Table 3. Committee-Based Unsupervised Learning

in Scaling to Very Very Large Corpora for Natural Language Disambiguation
by Michele Banko, Eric Brill
"... In PAGE 6: ... The classifiers were then retrained using the labeled seed corpus plus the new training material collected automatically during the previous step. In Table3 we show the results from these unsupervised learning experiments for two confusion sets. In both cases we gain from unsupervised training compared to using only the seed corpus, but only up to a point.... ..."

Table 3. Unsupervised learning result on CUCS dataset

in Anomalous Payload-based Network Intrusion Detection
by Ke Wang, Salvatore J. Stolfo 2004
"... In PAGE 17: ...Table3 . We used an unclustered single-length model since the number of training examples is sufficient to adequately model normal traffic.... ..."
Cited by 107

Table 4.10. Effect of Unsupervised Learning

in Part-of-Speech Tagging: a Machine Learning Approach Incorporating Diverse Features ∗
by Tetsuji Nakagawa 2006

Table 2: Accuracy comparisons for unsupervised learning methods.

in Joint Concept Formation
by Huan Liu, Wilson X. Wen
"... In PAGE 15: ...% misclassi cations, i.e. noise in the training set. The comparison results on predictive accuracy is shown in Table2 . CLASSWEB is a combination of the algorithms COBWEB [Fisher, 1987] and CLASSIT [Gennari et al.... ..."

TABLE I GA AND PSO PARAMETERS FOR UNSUPERVISED LEARNING

in Parallel Learning in Heterogeneous Multi-Robot Swarms
by Jim Pugh, Alcherio Martinoli

Table 1: GA and PSO Parameters for Unsupervised Learning

in Algorithms, Experimentation
by unknown authors

Table 2: Experimental results comparing unsupervised learning of network structure.

in Discretizing Continuous Attributes While Learning Bayesian Networks
by Nir Friedman, Moises Goldszmidt 1996
"... In PAGE 8: ... This procedure is un- supervised, it does not distinguish the class variable from other variables in the domain.4 Table2 contains the results of this experiment: unsup(LS) denotes the unsupervised learning method described in Section 4.3, where the initial discretization was performed by least square quantization as described in Section 4.... ..."
Cited by 43
