Results 1 - 10 of 409,072
Table 2. Scales of the 4-point Daubechies Discrete Wavelet Transform
1993
"... In PAGE 6: ... The DWT coefficients for each running-mean ERP were squared, summed, averaged and plotted as a function of time relative to the stimulus. Each row of graphs represents one scale of the transform beginning with the smallest scales at the top (see Table2 ) and proceeding to the largest scale at the bottom. Each column of graphs corresponds to one electrode site in the order Fz, Cz, Pz, from left to right.... In PAGE 12: ...transform was based on the 4-point Daubechies filters which appeared to be superior to the 20-point filters used in the initial linear regression models. Second, since low frequency information seemed valuable in the linear regression models, the range of the transform was extended, adding a fifth scale ( Table2 ). Third, selection of the coefficients was not performed by the decimation approach taken for the linear regression models.... ..."
Cited by 4
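To make the excerpt's per-scale energy computation concrete, here is a minimal sketch assuming PyWavelets, the 4-tap 'db2' Daubechies filters and a five-level decomposition; the signal and the averaging details are placeholders, not the paper's exact procedure.

```python
# Sketch: per-scale energy of an ERP via a 4-tap Daubechies DWT (PyWavelets 'db2').
# The signal, number of levels, and averaging scheme are illustrative assumptions.
import numpy as np
import pywt

def scale_energies(erp, wavelet="db2", levels=5):
    """Square and average the DWT coefficients at each scale."""
    coeffs = pywt.wavedec(erp, wavelet, level=levels)  # [cA_L, cD_L, ..., cD_1]
    # coeffs[0] is the coarsest approximation; coeffs[1:] are the detail scales.
    return [np.mean(c ** 2) for c in coeffs]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    erp = rng.standard_normal(256)  # stand-in for one running-mean ERP epoch
    for scale, energy in enumerate(scale_energies(erp)):
        print(f"scale {scale}: mean squared coefficient = {energy:.4f}")
```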
Table 3. Average (across all classes) of sensitivity, specificity and the MCC for all predictors on the non-plant data. [Sorting results, non-plant data; columns: Detection Network, Sorter, Kernel, Sensitivity, Specificity, MCC, Accuracy]
"... In PAGE 13: ...777 84.7% Note: See Table3 for details. and recurrent architectures.... ..."
Table 1: Wavelet properties [columns: Wavelet Name, Short Name, Orthogonality / ...]
"... In PAGE 2: ...1. Filters A variety of standard wavelets was used in the experiments which are summarised in Table1 . It is worthwhile mentioning that although regularity, smoothness and the number of vanishing ... In PAGE 3: ....1. Testing Different Wavelet Transforms In this experiment the image xslow was fused with plow using the full details substitution rule. The fusion was performed for all wavelets presented in Table1 using both decimated and undecimated transforms. Then the fusion performance was evaluated by calculation the RMS between the corresponding products and the reference image.... ..."
Table 1: Test Error Rates on the USPS Handwritten Digit Database.
"... In PAGE 12: ... It simply tries to separate the training data by a hyperplane with large margin. Table1 illustrates two advantages of using nonlinear kernels. First, per- formance of a linear classifier trained on nonlinear principal components is better than for the same number of linear components; second, the perfor- mance for nonlinear components can be further improved by using more components than is possible in the linear case.... ..."
Table 1: True generalization error for Gaussian, Wavelet, Sin/Sinc Kernels with Regular- ization Networks and Support Vector Regression for the best hyperparameters.
"... In PAGE 22: ... This is repeated for a hundred di erent datasets, and the mean and standard deviation of the generalization error are thus obtained. Table1 depicts the true generalization error evaluated on 200 datapoints for the two learning machines and the di erent kernels using the best hyperparameters setting. Analysis of this table leads to the following observation : The di erent kernels and learning machines give comparable results (all averages are within one standard deviation from each other).... In PAGE 23: ... Table 2 summarizes all these trials and describes the performance improvement achieved by di erent kernels compared to the gaussian kernel and sin basis functions. From this table, one can note that : - exploiting prior knowledge on the function to be approximated leads immediately to a lower generalization error (compare Table1 and Table 2). - as one may have expected, using strong prior knowledge on the hypothesis space and the related kernel gives considerably higher performances than gaussian kernel.... ..."
Table 4 Comparison of sparse Bayesian kernel logistic regression, the support vector machine (SVM) and the relevance vector machine (RVM) over seven benchmark datasets, in terms of test set error and the number of representer vectors used. The results for the SVM and RVM are taken from Tipping [14].
"... In PAGE 19: ... It is possible that a greedy algorithm that selects representer vectors so as to maximise the evidence would result in a greater degree of sparsity, however this has not yet been investigated. [ Table4 about here.]... ..."
Table 5. Nutrient Results: means and odds ratios for acute lymphoblastic leukemia in relation to pre-pregnancy maternal diet, 138 matched pairs
2004
Table 3 Discrimination of breast cancer patients from normal controls using machine learning techniques. The mean and SD of five 20-fold cross-validation trials.
2004
Cited by 10
Table 6: Discrete Fourier transform vs. wavelet-packet transform for sparse representation example
"... In PAGE 16: ... To motivate the choice of wavelet over Fourier consider the function f(x1; x2) = b0 + b1x1 + b2x2 + b3x1x2, and the associated samples show in Table 5. If we perform a discrete (trigonomic) Fourier and a wavelet-packet transform on the data, we obtain the results presented in Table6 . The wavelet transform is seen to provide a sparser representation of the feature variables, re ecting the orthogonal basis in the feature space.... ..."
Table 1: Notation for wavelet and scaling coefficient vectors
"... In PAGE 4: ... Toward this end, we de ne a discrete wavelet transform operator that takes the vector of sampled measurements, yi, into its wavelet decomposition i = Wiyi = WiTiWT g + Wini i + i (3) where, i consists of a coarsest scale set of scaling coe cients, yi(Li), at scale Li and a complete set of ner scale wavelet coe cients i(m), Li m Mi ? 1, where Mi is the nest scale of representation. In Table1 , we summarize the notation that we will use. For example, for the data yi, the corresponding wavelet transform i = Wiyi consists of wavelet coe cients i(m), Li m Mi ? 1, and coarsest scale scaling coe cients yi(Li).... ..."