
CiteSeerX
Results 1 - 10 of 367,674

Table 6: Results from support vector machine

in General Terms
by Adrian Schröter
"... In PAGE 6: ... [Figure 4: Comparison of predicted and observed ranking] In Table 6(d) we obtained a precision of 0.6671 for the test in version 2.... In PAGE 6: ... recall = (correctly predicted failures) / (all failures). As an example, the recall of the test in version 2.0 shown in Table 6(d) indicates that over two thirds of the failure-prone components are actually identified as failure-prone. Again, a random guess would have had a probability of 0.... In PAGE 6: ... For example, take the test in version 2.0 of Table 6(b) with a recall of about 0.... In PAGE 6: ...iles. For example, take the test in version 2.0 of Table 6(b) with a recall of about 0.1 and Table 6(d) with a recall of about 0.7.... In PAGE 8: ...g., in Table 6(d), the precision for the top 5% of version 2.1 is substantially higher than the overall precision (90% vs.... In PAGE 8: ...esults from version 2.0 and 2.1 are similar with respect to classification. Take for example Table 6(d), the recall and precision obtained from testing in version 2.... ..."
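The precision and recall figures quoted in this snippet follow the standard set-based definitions. A minimal sketch of those two formulas (the component names and the helper `precision_recall` are illustrative, not from the paper):

```python
# Precision and recall over sets of failure-prone components.
# `predicted` = components flagged by the model, `actual` = components
# that really failed. Both are sets of names (hypothetical examples).
def precision_recall(predicted, actual):
    true_positives = len(predicted & actual)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(actual) if actual else 0.0
    return precision, recall

predicted = {"A.java", "B.java", "C.java"}
actual = {"A.java", "B.java", "D.java"}
p, r = precision_recall(predicted, actual)
# two of three flagged components failed, two of three failures were flagged
print(round(p, 4), round(r, 4))  # 0.6667 0.6667
```

A recall of 0.7, as in Table 6(d), therefore means 70% of all truly failure-prone components appear among the predictions.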

Table 1. (a) Point-to-surface measurements from the results of different cardiac segmentation methods. (b) More details on the segmentations obtained by our method (mean point-to-surface in mm ± standard deviation).

in Automated, accurate and fast segmentation of 4D cardiac MR images
by Jean Cousty, Laurent Najman, Michel Couprie, Stéphanie Clément-guinaudeau, Thomas Goissen, Jerôme Garot, Chu Henri Mondor Créteil
"... In PAGE 8: ... In order to evaluate the inter-observer variability, the P2S between the two experts is also provided. Table 1(b) presents the mean and standard deviation of these measures at end-diastolic time and end-systolic time. We note that, in all cases of Table 1(b), the P2S is less than 1 pixel.... In PAGE 8: ...or the endocardial border and a mean P2S of 1.81 mm ± 0.43 for the epicardial border. These results compare favorably with the results obtained by other groups on their own datasets (see Table 1(b)). Furthermore, the P2S between automatic and manual segmentations is in the same range as the inter-observer P2S: the proposed software produces satisfying segmentations.... ..."

Table 1: Surface reconstruction techniques in point-based rendering.

in Representing and Rendering Surfaces with Points
by J. Krivanek
"... In PAGE 11: ...Table 1: Surface reconstruction techniques in point-based rendering. Table 1 gives an overview of different reconstruction techniques, each of which is discussed in a separate section later. All reconstruction techniques share one property: they need to know the density of point samples in order to work.... In PAGE 22: ...3.5 Discussion Each of the described techniques for surface reconstruction (Table 1) includes some kind of splatting, at least to eliminate the occluded background surfels. Quad splatting vs.... ..."

Table 3 Discrimination of breast cancer patients from normal controls using machine learning techniques. The mean and SD of five 20-fold cross-validation trials.

in Predictive Models for Breast Cancer Susceptibility from Multiple Single Nucleotide Polymorphisms
by Jennifer Listgarten, Sambasivarao Damaraju, Brett Poulin, Lillian Cook, Jennifer Dufour, Adrian Driga, John Mackey, David Wishart, Russ Greiner, Brent Zanke 2004
Cited by 10

Table 3. Average (across all classes) of sensitivity, specificity and the MCC for all predictors on the non-plant data. [Sorting results, non-plant data; columns: Detection Network, Sorter, Kernel, Sensitivity, Specificity, MCC, Accuracy.]

in Detecting and Sorting Targeting Peptides with Neural Networks and
by Support Vector Machines, John Hawkins, Mikael Bodén
"... In PAGE 13: ...777 84.7% Note: See Table 3 for details. and recurrent architectures.... ..."
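The MCC averaged in Tables 3 and 5 is the Matthews correlation coefficient, computed from confusion-matrix counts. A minimal sketch of that formula (function name and counts are hypothetical, not taken from the paper):

```python
import math

# Matthews correlation coefficient from the four confusion-matrix counts:
# tp/tn = true positives/negatives, fp/fn = false positives/negatives.
# Ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect).
def mcc(tp, tn, fp, fn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(mcc(50, 50, 0, 0))    # 1.0  (perfect predictor)
print(mcc(25, 25, 25, 25))  # 0.0  (chance-level predictor)
```

Unlike plain accuracy, MCC stays near zero for a classifier that only exploits class imbalance, which is why it is reported alongside sensitivity and specificity here.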

Table 1: Results of point projection onto noiseless point clouds sampled from the B-spline surface in Fig. 2 with increasing density. [Columns: N, Iteration, Final sub-cloud, Time (s), Relative error.]

in applications in reverse
by Yu-shen Liu, Jean-claude Paul, Jun-hai Yong, Pi-qiang Yu, Hui Zhang, Jia-guang Sun, Karthik Ramani 2006
"... In PAGE 7: ... The number of points is specified from 10,000 to 300,000. The accuracy and execution time for different CN are given in Table 1. The experimental data shows that the new method has good approximation.... In PAGE 7: ... Table 1 lists the size N of each point cloud CN, the number of iterations, the size of the final sub-cloud, the execution time, and the accuracy. All data are the average of projecting 20 equally spaced points onto the given B-spline surface and the corresponding CN.... ..."
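At its simplest, projecting a point onto a point cloud means finding the cloud point nearest to the query. The paper's iterative sub-cloud method is far more efficient than this; the brute-force sketch below (names and sample data are hypothetical) only illustrates the operation being timed in Table 1:

```python
# Brute-force "projection" of a query point onto a point cloud:
# return the cloud point with the smallest squared Euclidean distance.
def project_onto_cloud(query, cloud):
    """query: 3-tuple of floats; cloud: list of 3-tuples."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(cloud, key=lambda p: dist2(query, p))

cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
print(project_onto_cloud((0.9, 0.1, 0.0), cloud))  # (1.0, 0.0, 0.0)
```

This linear scan is O(N) per query, which is why Table 1 reports execution time as the cloud density N grows and why the paper prunes to a small final sub-cloud instead.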

Table 2. Classification accuracies of surface normal and point cloud representations for different patch resolutions. First column denotes the number of patches over the facial surfaces and the second column shows the average number of 3D points in each patch.

in Selection and Extraction of Patch Descriptors For 3D Face Recognition
by Berk Gökberk, Lale Akarun 2005
"... In PAGE 7: ... From coarse to fine scale, we have extracted different face segmentations where the numbers of patches used are: 4, 9, 16, 25, 34, 45, 60, 72, 88, 105, 124, 145, 166, 183, 211, 230, 260, 243 and 207. Table 2 displays the classification accuracies of surface normal-based and point cloud-based patch descriptors on different patch resolutions. The first column shows the number of local patches formed over the face region and the second column shows the average number of 3D points at each local patch.... In PAGE 7: ... Figure 4 graphically displays the recognition rates found in Table 2. It is evident by analyzing Table 2 that significant dimensionality reduction is... ..."
Cited by 2

Table 2: Total accuracy and kappa of the support vector machine and decision tree classification for pixel image and segmented data

in CLASSIFYING SEGMENTED MULTITEMPORAL SAR DATA FROM AGRICULTURAL AREAS USING SUPPORT VECTOR MACHINES
by Björn Waske, Sebastian Schiefer
"... In PAGE 4: ... This approach was used successfully in several studies for classifying optical and SAR data (ii,vi,xxv). RESULTS & DISCUSSION The accuracy assessment shows the positive effect of image segmentation on the classification accuracy of the SAR data (Table 2). Using an adequate aggregation scale the classification accuracy is increased.... In PAGE 4: ... A larger sample set can slightly improve the classification accuracy. The accuracy assessment shows that in case of segmented data support vector machines lead to better results than simple decision trees (Table 2). The best accuracy of a decision tree is 75.... ..."

Table 1: Test Error Rates on the USPS Handwritten Digit Database.

in Nonlinear Component Analysis as a Kernel Eigenvalue Problem
by Bernhard Schölkopf, Alexander Smola, Klaus-Robert Müller
"... In PAGE 12: ... It simply tries to separate the training data by a hyperplane with large margin. Table 1 illustrates two advantages of using nonlinear kernels. First, performance of a linear classifier trained on nonlinear principal components is better than for the same number of linear components; second, the performance for nonlinear components can be further improved by using more components than is possible in the linear case.... ..."

Table 5. Average (across all classes) of sensitivity, specificity and MCC for the combined predictors on the non-plant data. [Combined sorters results, non-plant data; columns: Networks, Sorter, Sensitivity, Specificity, MCC.]

in Detecting and Sorting Targeting Peptides with Neural Networks and
by Support Vector Machines, John Hawkins, Mikael Bodén
"... In PAGE 15: ...837 0.788 Note: See Table 5 for details. Table 7.... ..."

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University