CiteSeerX
Results 1 - 10 of 138,382

Table 2: Spectral feature

in Transportation Information Systems
by K. Varmuza, Kai-tai Fang
"... In PAGE 8: ... 2002). In this work, a set of 400 spectral features has been used, as summarized in Table 2 (Yoshida et al. 2001).... ..."
Cited by 1

Table 2: Comparison with SBmethod on Max-Cut Problems from the Torus Set Graph SBmethod CirCut (N = 4; M = 100)

in Rank-Two Relaxation Heuristics for Max-Cut and Other Binary Quadratic Programs
by Samuel Burer, Renato D.C. Monteiro, Yin Zhang
"... In PAGE 10: ... We mention that for pm3-8-50 and g3-8, the best cut values so far obtained by CirCut are, respectively, 454 and 41684814, and the latter value is optimal. In Table 2, we present a comparison between the code SBmethod and our code CirCut. SBmethod, a code developed by Helmberg and Rendl [15], solves semidefinite programs using a spectral bundle method and, in particular, is one of the fastest codes for solving semidefinite... In PAGE 11: ... i.e., the simple local search feature of CirCut was not invoked. The default parameter settings were used for SBmethod. In Table 2, the cut value and computation time are reported for each problem. For CirCut, the value of M is the number of times Algorithm-1 was run with random starting points, and the value of N is the parameter required by Algorithm-1.... ..."

Table 1: Table of feature point localization accuracy.

in Abstract Feature Points Extraction from Faces
by Hua Gu, Guangda Su, Cheng Du
"... In PAGE 4: ...8 (a). The localization accuracy of the 9 vital feature points over all 270 people averages above 95%, as shown in Table 1. At the same time, we test this method on images collected from real-time dynamic video, also obtaining very good results that nearly meet the requirements of practical use, as shown in Fig.... ..."

Table 6: An example of points and corresponding linear features which induce a local minimum in the error function.

in Robust Algorithms for Object Localization
by Aaron S. Wallack, Dinesh Manocha 1998
"... In PAGE 27: ... Gradient descent techniques can fail by returning a local minimum. Table 6 shows a set of points and linear feature parameters which induce a local minimum of the error function. The error... ..."
Cited by 5

Table 1: Local Features

in A system for identifying named entities in biomedical text: How results from two evaluations reflect on both the system and the evaluations. In The 2004 BioLink meeting at ISMB
by Shipra Dingare, Malvina Nissim, Jenny Finkel, Christopher Manning, Claire Grover 2005
"... In PAGE 5: ... The full set of local features is outlined in Table 1. Table 1 goes here External Resources and Larger Context The features described here comprise various external resources, including gazetteers, a web querying technique, and relations obtained by parsing. The basic assumption behind, and motivation for, using external resources is that there are instances in the data where contextual clues do not provide sufficient evidence for confident classification.... ..."
Cited by 6

Table 8: Performance of (point, linear, and circular feature) localization technique for randomly generated data with = 0.

in Robust Algorithms for Object Localization
by Aaron S. Wallack, Dinesh Manocha 1998
"... In PAGE 42: ...1.3 Results Table 8 compares the estimated poses with the actual poses for the randomly generated data sets of points, linear features, and circular features with perfect sensing ( = 0.0). Table 9 compares the estimated poses with the actual poses for the randomly generated data sets of points and linear features with = 0.1.... ..."
Cited by 5

Table 1. The Data Sets

in Learning More Accurate Metrics for Self-Organizing Maps
by Jaakko Peltonen, Arto Klami, Samuel Kaski 2002
"... In PAGE 4: ... In the empirical tests of Section 5 we have used T = 10 evaluation points and W = 10 winner candidates, resulting in a 20-fold speed-up compared to the unwinnowed T-point approximation, although the computational time was still about 100-fold that of the 1-point approximation. 5 Empirical Testing The methods were compared on five different data sets (Table 1). The class labels were used as the auxiliary data, and the data sets were preprocessed by removing the classes with only a few samples.... ..."
Cited by 4

Table 1. This table summarizes the various hypergraph learning algorithms, their underlying graph construction and the associated matrix used for the spectral analysis.

in Higher order learning with graphs
by Sameer Agarwal, Kristin Branson, Serge Belongie 2006
Cited by 2

Table 2: The data sets used in the second set of experiments (columns: Data set, No. of Points, No. of Features, No. of Classes)

in The Error Entropy Minimization Algorithm for Neural Network Classification
by Jorge M. Santos, Luís A. Alex, Joaquim Marques De Sá
"... In PAGE 4: ... In the following experiments we used three publicly available data sets (Diabetes can be found in [6]; Wine and Iris can be found in the UCI repository of machine learning databases). Table 2 contains a summary of the characteristics of these data sets.... ..."

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University