Results 1 - 10 of 409,072

Table 2. Scales of the 4-point Daubechies Discrete Wavelet Transform

in Linear and Neural Network Models for Predicting Human Signal Detection Performance From Event-Related Potentials: A Comparison of the Wavelet Transform With Other Feature Extraction Methods
by Leonard J. Trejo, Mark J. Shensa 1993
"... In PAGE 6: ... The DWT coefficients for each running-mean ERP were squared, summed, averaged and plotted as a function of time relative to the stimulus. Each row of graphs represents one scale of the transform beginning with the smallest scales at the top (see Table2 ) and proceeding to the largest scale at the bottom. Each column of graphs corresponds to one electrode site in the order Fz, Cz, Pz, from left to right.... In PAGE 12: ...transform was based on the 4-point Daubechies filters which appeared to be superior to the 20-point filters used in the initial linear regression models. Second, since low frequency information seemed valuable in the linear regression models, the range of the transform was extended, adding a fifth scale ( Table2 ). Third, selection of the coefficients was not performed by the decimation approach taken for the linear regression models.... ..."
Cited by 4
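
A minimal sketch, assuming Python with NumPy and PyWavelets, of the per-scale feature extraction the excerpt above describes: square the DWT coefficients and average them within each scale. The ERP signal here is synthetic; 'db2' (the 4-tap Daubechies filter) and the five-scale depth follow the caption and excerpt.

    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    erp = rng.standard_normal(256)  # stand-in for one running-mean ERP epoch

    # Five-scale decomposition: coarsest approximation plus 5 detail bands.
    coeffs = pywt.wavedec(erp, "db2", level=5)

    # Square and average the coefficients within each scale, as described.
    scale_power = [np.mean(np.square(c)) for c in coeffs]
    labels = ["approx"] + [f"detail, scale {s}" for s in range(5, 0, -1)]
    for name, p in zip(labels, scale_power):
        print(f"{name}: {p:.4f}")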

Table 3. Average (across all classes) of sensitivity, specificity and the MCC for all predictors on the non-plant data. Sorting results, non-plant data; columns: Detection Network, Sorter, Kernel, Sensitivity, Specificity, MCC, Accuracy.

in Detecting and Sorting Targeting Peptides with Neural Networks and Support Vector Machines
by John Hawkins, Mikael Bodén
"... In PAGE 13: ...777 84.7% Note: See Table3 for details. and recurrent architectures.... ..."

Table 1: Wavelet properties. Columns: Wavelet Name, Short Name, Orthogonality /

in Assessment Of The Method-Inherent Distortions In Wavelet Fusion
by Vladimir Buntilov, Timo Bretschneider
"... In PAGE 2: ...1. Filters A variety of standard wavelets was used in the experiments which are summarised in Table1 . It is worthwhile mentioning that although regularity, smoothness and the number of vanishing ... In PAGE 3: ....1. Testing Different Wavelet Transforms In this experiment the image xslow was fused with plow using the full details substitution rule. The fusion was performed for all wavelets presented in Table1 using both decimated and undecimated transforms. Then the fusion performance was evaluated by calculation the RMS between the corresponding products and the reference image.... ..."

Table 1: Test Error Rates on the USPS Handwritten Digit Database.

in Nonlinear Component Analysis as a Kernel Eigenvalue Problem
by Bernhard Schölkopf, Alexander Smola, Klaus-Robert Müller
"... In PAGE 12: ... It simply tries to separate the training data by a hyperplane with large margin. Table1 illustrates two advantages of using nonlinear kernels. First, per- formance of a linear classifier trained on nonlinear principal components is better than for the same number of linear components; second, the perfor- mance for nonlinear components can be further improved by using more components than is possible in the linear case.... ..."

Table 1: True generalization error for Gaussian, Wavelet, Sin/Sinc Kernels with Regularization Networks and Support Vector Regression for the best hyperparameters.

in Frame, Reproducing Kernel, Regularization and Learning
by Alain Rakotomamonjy, Stéphane Canu
"... In PAGE 22: ... This is repeated for a hundred di erent datasets, and the mean and standard deviation of the generalization error are thus obtained. Table1 depicts the true generalization error evaluated on 200 datapoints for the two learning machines and the di erent kernels using the best hyperparameters setting. Analysis of this table leads to the following observation : The di erent kernels and learning machines give comparable results (all averages are within one standard deviation from each other).... In PAGE 23: ... Table 2 summarizes all these trials and describes the performance improvement achieved by di erent kernels compared to the gaussian kernel and sin basis functions. From this table, one can note that : - exploiting prior knowledge on the function to be approximated leads immediately to a lower generalization error (compare Table1 and Table 2). - as one may have expected, using strong prior knowledge on the hypothesis space and the related kernel gives considerably higher performances than gaussian kernel.... ..."

Table 4 Comparison of sparse Bayesian kernel logistic regression, the support vector machine (SVM) and the relevance vector machine (RVM) over seven benchmark datasets, in terms of test set error and the number of representer vectors used. The results for the SVM and RVM are taken from Tipping [14].

in The Evidence Framework Applied to Sparse Kernel Logistic Regression
by Gavin C. Cawley, Nicola L. C. Talbot
"... In PAGE 19: ... It is possible that a greedy algorithm that selects representer vectors so as to maximise the evidence would result in a greater degree of sparsity, however this has not yet been investigated. [ Table4 about here.]... ..."

Table 5. Nutrient Results: means and odds ratios for acute lymphoblastic leukemia in relation to pre-pregnancy maternal diet, 138 matched pairs

in Maternal Dietary Risk Factors in Childhood Acute Lymphoblastic Leukemia
by Christopher D. Jensen, Gladys Block, Patricia Buffler, Xiaomei Ma, Steve Selvin, Stacy Month 2004
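
For 1:1 matched case-control data like the 138 pairs in this table, the matched-pair odds ratio for a binary exposure reduces to the ratio of the two kinds of discordant pairs; a tiny sketch with invented counts, not the study's data:

    # Pairs where exactly one member of the pair was exposed (discordant pairs).
    case_exposed_control_not = 40   # hypothetical count
    control_exposed_case_not = 25   # hypothetical count

    # Conditional ML estimate of the matched-pair odds ratio.
    odds_ratio = case_exposed_control_not / control_exposed_case_not
    print(f"matched-pair OR = {odds_ratio:.2f}")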

Table 3 Discrimination of breast cancer patients from normal controls using machine learning techniques. The mean and SD of five 20-fold cross-validation trials.

in Predictive Models for Breast Cancer Susceptibility from Multiple Single Nucleotide Polymorphisms
by Jennifer Listgarten, Sambasivarao Damaraju, Brett Poulin, Lillian Cook, Jennifer Dufour, Adrian Driga, John Mackey, David Wishart, Russ Greiner, Brent Zanke 2004
Cited by 10
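
A sketch of the evaluation protocol in the caption, assuming scikit-learn: five independent trials of 20-fold cross-validation, reporting the mean and SD across trials. The random-forest classifier and the synthetic SNP-like matrix are placeholder assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(4)
    X = rng.integers(0, 3, (400, 50))  # 50 SNPs coded 0/1/2
    y = rng.integers(0, 2, 400)        # case/control labels

    trial_means = []
    for seed in range(5):              # five independent 20-fold trials
        cv = StratifiedKFold(n_splits=20, shuffle=True, random_state=seed)
        scores = cross_val_score(RandomForestClassifier(random_state=seed),
                                 X, y, cv=cv)
        trial_means.append(scores.mean())
    print(f"accuracy: mean={np.mean(trial_means):.3f}, SD={np.std(trial_means):.3f}")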

Table 6: Discrete Fourier transform vs. wavelet-packet transform for sparse representation example

in Collective Data Mining: A New Perspective Toward Distributed Data Mining
by Byung-hoon Park, Daryl Hershberger, Erik Johnson, Hillol Kargupta
"... In PAGE 16: ... To motivate the choice of wavelet over Fourier consider the function f(x1; x2) = b0 + b1x1 + b2x2 + b3x1x2, and the associated samples show in Table 5. If we perform a discrete (trigonomic) Fourier and a wavelet-packet transform on the data, we obtain the results presented in Table6 . The wavelet transform is seen to provide a sparser representation of the feature variables, re ecting the orthogonal basis in the feature space.... ..."

Table 1: Notation for wavelet and scaling coefficient vectors

in Wavelet Transforms and Multiscale Estimation Techniques for the Solution of Multisensor Inverse Problems
by Eric L. Miller, Alan S. Willsky
"... In PAGE 4: ... Toward this end, we de ne a discrete wavelet transform operator that takes the vector of sampled measurements, yi, into its wavelet decomposition i = Wiyi = WiTiWT g + Wini i + i (3) where, i consists of a coarsest scale set of scaling coe cients, yi(Li), at scale Li and a complete set of ner scale wavelet coe cients i(m), Li m Mi ? 1, where Mi is the nest scale of representation. In Table1 , we summarize the notation that we will use. For example, for the data yi, the corresponding wavelet transform i = Wiyi consists of wavelet coe cients i(m), Li m Mi ? 1, and coarsest scale scaling coe cients yi(Li).... ..."