Results 1 - 10 of 217,846

Table 2: A sample data set illustrates clusters embedded in subspaces of a high dimensional space.

in Subspace clustering for high dimensional categorical data
by Guojun Gan 2003
"... In PAGE 2: ... Hence, a good subspace clustering algorithm should be able to find clusters and the maximum associated set of dimensions. Consider, for example, a data set with 5 data points of 6 dimensional(given in Table2 ). In this data set, it is obvious that C = {x1, x2, x3} is a cluster and the maximum set of dimensions should be P = {1, 2, 3, 4}.... In PAGE 3: ...here sj is a vector defined as sj = (Aj1, Aj2, ..., Ajnj)T. Since there are possibly multiple states(or values) for a vari- able, a symbol table of a data set is usually not unique. For example, for the data set in Table2 , Table 3 is one of its symbol tables. BC BS A A A A B B B B C C C C D D D D BD BT Table 3: One of the symbol tables of the data set in Table 2.... In PAGE 3: ... For a given symbol table of the data set, the frequency table of each cluster is unique according to that symbol table. For example, for the data set in Table2 , let (C, P) be a subspace cluster, where C = {x1, x2, x3} and P = {1, 2, 3, 4}, if we use the symbol table presented in Table 3, then the corre- sponding frequency table for the subspace cluster (C, P) is given in Table 4. From the definition of frequency fjr in Equation (6), we have the following equalities: nj CG r=1 fjr(C) = |C|, j = 1, 2, .... ..."
Cited by 4
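The excerpt above defines the frequency f_jr of each symbol within a subspace cluster and notes that the frequencies in any dimension sum to |C|. Below is a minimal sketch of that idea in Python; the data values, symbol names, and the helper frequency_table are illustrative assumptions, not the paper's actual Table 2 or Table 3.

```python
from collections import Counter

# Minimal sketch of the frequency-table idea from the excerpt above.
# The cluster C = {x1, x2, x3} and subspace P = {1, 2, 3, 4} follow the excerpt;
# the symbol values below are made up for illustration only.
data = {
    "x1": ["A", "A", "A", "A", "B", "C"],
    "x2": ["A", "A", "A", "A", "D", "A"],
    "x3": ["A", "A", "A", "A", "C", "B"],
    "x4": ["B", "B", "C", "D", "A", "D"],
    "x5": ["C", "D", "B", "B", "D", "C"],
}

def frequency_table(cluster, dims):
    """For each dimension j in dims, count how often each symbol occurs
    among the cluster's points (the f_jr values in the excerpt)."""
    return {j: Counter(data[x][j - 1] for x in cluster) for j in dims}

C = {"x1", "x2", "x3"}
P = [1, 2, 3, 4]
freq = frequency_table(C, P)
# Sanity check of the identity quoted above: sum_r f_jr(C) = |C| for every j in P.
assert all(sum(counts.values()) == len(C) for counts in freq.values())
print(freq)
```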

Table 3.2 High-dimensional datasets used in experimental evaluation

in 3 Probabilistic Semi-Supervised Clustering with Constraints
by Sugato Basu, Mikhail Bilenko, Arindam Banerjee, Raymond Mooney

Table 1: Categories of models of visual cortical maps, and their abbreviations as used in this article. Two versions of the self-organizing map model were investigated: SOM-h (high-dimensional weight vectors) and SOM-l (low-dimensional feature vectors).

in Models of Orientation and Ocular Dominance Columns in the Visual Cortex: A Critical Comparison
by E. Erwin, K. Obermayer, K. Schulten 1995
"... In PAGE 8: ... In- creasingly detailed comparisons between model and experimental data will be included along with each point. To ease comparisons, we group models into categories based on similarities in goals or implementation, Table1 . Structural and spectral models attempt to characterize map patterns using schematic drawings or concise equations.... In PAGE 26: ... The results of our comparison between model predictions and experimental data obtained from the upper layers of macaque striate cortex are summarized in table 2. References to articles on each model are given in Table1 . Many of the models are also brie y described in the appendix.... ..."
Cited by 64

Table 19. Test error (in %), high-dimensional data sets.

in LOCALLY LINEAR EMBEDDING ALGORITHM -- Extensions and applications
by Olga Kayo 2006
"... In PAGE 95: ... Table19 Cont. NMC KNNC LDC QDC natural textures Original 54.... ..."

Table 6: Behavior of V-detector at high dimensionality. Columns: dimensionality, detection rate (SD), false alarm rate (SD), number of detectors (SD).

in Applicability issues of the real-valued negative selection algorithms
by Zhou Ji 2006
"... In PAGE 7: ... We generated the detector set using those points as self samples and then tried to classify 1000 test points that were randomly drawn from the entire hypercube. Table6 shows the results from n = 3 through n = 12.... ..."
Cited by 1
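The excerpt above describes classifying 1000 test points drawn at random from the unit hypercube against a set of self samples. The sketch below illustrates that setup under simplifying assumptions: the fixed self_radius threshold and the synthetic "self" region are stand-ins, since the actual V-detector builds variable-sized detectors rather than thresholding distances to self samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def classify(self_samples, test_points, self_radius=0.1):
    """Flag a test point as non-self if it lies farther than self_radius
    from every self sample (a simplified negative-selection rule)."""
    dists = np.linalg.norm(test_points[:, None, :] - self_samples[None, :, :], axis=-1)
    return dists.min(axis=1) > self_radius  # True = non-self

n = 5                                            # dimensionality (the excerpt sweeps n = 3 through 12)
self_samples = 0.2 + 0.1 * rng.random((200, n))  # illustrative "self" region inside the unit hypercube
test_points = rng.random((1000, n))              # 1000 test points drawn from the entire hypercube
non_self = classify(self_samples, test_points)
print(f"flagged {non_self.mean():.1%} of test points as non-self")
```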

Table 2: High Dimensional Data Sets. Columns: Dim., No Noise Data, Output Noise Data, Input/Output Noise Data.

in Is Nonparametric Learning Practical in Very High Dimensional Spaces
by Gregory Z. Grudic, Peter D. Lawrence 1997
"... In PAGE 6: ... Each learning set is further divided into 30,000 training examples and 10,000 validation exam- ples: these are used to construct one cascade which forms the approximation for that data set. The 3 right most columns of Table2 contain the test set average relative mean squared error and corresponding standard devia- tions over 10 independent runs. Relative mean squared error is de ned, in the usual sense, as the mean squared error on the test data, divided by the variance of the test data.... In PAGE 6: ... Relative mean squared error is de ned, in the usual sense, as the mean squared error on the test data, divided by the variance of the test data. From Table2 it is evident that the relative error is small when no noise is present. With the addition of noise the relative error approaches the theoretical limit (due to the 3 to 1 signal to noise ratio) of 0.... ..."
Cited by 5
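The excerpt above defines relative mean squared error as the test-set MSE divided by the variance of the test data. A short sketch of that definition follows; the function name and the example values are illustrative, not from the paper.

```python
import numpy as np

def relative_mse(y_true, y_pred):
    """Relative mean squared error as defined in the excerpt:
    MSE on the test data divided by the variance of the test data."""
    mse = np.mean((y_true - y_pred) ** 2)
    return mse / np.var(y_true)

# Illustrative values only; with noisy targets this ratio approaches a limit
# set by the signal-to-noise ratio, as the excerpt notes.
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
print(relative_mse(y_true, y_pred))
```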

Table 3: F1 and number of Support Vectors for top two Medline queries. 5 Conclusions: The paper has presented a novel kernel for text analysis, and tested it on a categorization task, which relies on evaluating an inner product in a very high dimensional feature space. For a given sequence length k (k = 5 was used in the experiments reported) the features are indexed by all strings of length k. Direct computation of

in Text classification using string kernels
by Huma Lodhi, John Shawe-taylor, Nello Cristianini 2002
Cited by 199
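The excerpt above describes a kernel whose features are indexed by all strings of length k, i.e. an inner product in a very large implicit feature space. The sketch below shows the simpler contiguous k-spectrum variant of that idea; it is an assumption-laden stand-in, since the paper's string subsequence kernel also counts non-contiguous subsequences with a gap-decay weight, which this omits.

```python
from collections import Counter

def spectrum_features(text, k=5):
    """Map a string to counts of its contiguous length-k substrings."""
    return Counter(text[i:i + k] for i in range(len(text) - k + 1))

def spectrum_kernel(s, t, k=5):
    """Inner product of the two k-spectrum feature vectors, computed sparsely
    over the substrings that actually occur rather than all |alphabet|^k features."""
    fs, ft = spectrum_features(s, k), spectrum_features(t, k)
    return sum(count * ft[sub] for sub, count in fs.items())

print(spectrum_kernel("science is organized knowledge", "wisdom is organized life"))
```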

Tables III and IV. The error rate of the proposed IHDR algorithm was compared with some major tree classifiers. CART of [5] and C5.0 of [33] are among the best known classification trees. However, like most other decision trees, they are univariate trees in that each internal node used only one input component to partition the samples. This means that the partition of samples is done using hyperplanes that are orthogonal to one axis. We do not expect that this type of tree can work well in a high dimensional or highly correlated space. Thus, we also tested a more recent multivariate tree OC1 of [10]. We realize that these trees were not designed for high-dimensional spaces like those from the images. Therefore, to fully explore their potential, we also tested the corresponding versions by performing the PCA before using CART, C5.0, and OC1 and called them CART with the PCA, C5.0 with the PCA, and OC1 with the PCA, respectively, as shown in Tables III and IV. Further, we compared the batch version of our HDR algorithm. Originally we expected the batch method to outperform the incremental one. However, the error rate of the IHDR tree turned out lower than that of the HDR tree for this set of data. A major reason for this is that the same training samples may distribute in different leaf nodes for the IHDR tree because we ran several iterations during training. For the batch version, each training sample can only be allocated to a single leaf node.

in Incremental Hierarchical Discriminant Regression
by Juyang Weng, Wey-shiuan Hwang 2006
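The excerpt above compares univariate trees, whose splits are axis-orthogonal, with the same trees applied after PCA on high-dimensional data. The following is a hedged sketch of that "PCA before a univariate tree" comparison, using scikit-learn's DecisionTreeClassifier as a generic stand-in for CART/C5.0/OC1 and a synthetic dataset instead of the paper's image data.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Synthetic high-dimensional, correlated data (illustrative only).
X, y = make_classification(n_samples=2000, n_features=500, n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Univariate tree on raw features: every split uses a single input component.
plain_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Same tree after projecting onto principal components, mirroring "CART with the PCA".
pca_tree = make_pipeline(PCA(n_components=30),
                         DecisionTreeClassifier(random_state=0)).fit(X_train, y_train)

print("tree on raw features:", plain_tree.score(X_test, y_test))
print("tree on PCA features:", pca_tree.score(X_test, y_test))
```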