Results 1 - 10 of 22,229
Table 1: Parameters and results of the multi-scale algorithm.
"... In PAGE 9: ... t_ref(k) (in seconds); the number of candidates |S(k)| after pruning and merging; and the number |S(k)_true| of true candidates in that set. Table 1 below shows the parameters and results of the test. The minimum length L_min of candidates to look for was set at 210 pixels (17.... ..."
Table 5.1: Results with multi-scale feature extraction on IRMA. (Columns: Multi-scale, Probability model, Error.)
TABLE III. Comparison of classification results of the MRF and VZ MR8 classifiers for scaled data. Models are learnt either from the original textures only or the original + scaled textures while classifying both texture types. In each case, the performance of the MRF classifier is at least as good as that using the multi-scale MR8 filter bank.
Cited by 1
Table 2. The effect of using a combination of feature types on test equal error rate. Key: KB = Kadir & Brady; MSH = Multi-scale Harris; C = Curves. All models had 6 parts and 40 detections/feature-type/image. Figure in bold is the combination automatically chosen by training/validation set.
2005
"... In PAGE 11: ... (b) Test equal error rate versus the number of detections/feature-type/image, N, for 8-part star models. In both cases the combinations of feature-types used were picked for each dataset from the results in Table 2 and fixed. 3.... ..."
Cited by 56
Table 2: The effect of using a combination of feature types on test equal error rate. Key: KB = Kadir & Brady; MSH = Multi-scale Harris; C = Curves. All models had 6 parts and 40 detections/feature-type/image. Figure in bold is the combination automatically chosen by training/validation set.
2005
"... In PAGE 5: ....3. Heterogeneous part experiments Here we fixed all models to use 6 parts and have 40 detections/feature-type/frame. Table 2 shows the different combinations of features which were tried, along with the best one picked by means of the training/validation set. We see a dramatic difference in performance between different feature types.... In PAGE 6: ... (b) Test equal error rate versus the number of detections/feature-type/image, N, for 8-part star models. In both cases the combinations of feature-types used were picked for each dataset from the results in Table 2 and fixed. 3.... ..."
Cited by 56
Table 1. Performance comparison of image quality assessment models on LIVE JPEG/JPEG2000 database [13]. SS-SSIM: single-scale SSIM; MS-SSIM: multi-scale SSIM; CC: non-linear regression correlation coefficient; ROCC: Spearman rank-order correlation coefficient; MAE: mean absolute error; RMS: root mean squared error; OR: outlier ratio.
2003
Cited by 13
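The metric abbreviations in the caption above (CC, ROCC, MAE, RMS) are standard agreement statistics between an objective quality score and subjective mean opinion scores (MOS). A minimal NumPy sketch of how they are computed; `agreement_stats` and `rank` are hypothetical names, and this skips the non-linear (logistic) regression the paper applies before computing CC, as well as tie handling in the ranks:

```python
import numpy as np

def rank(a):
    # Rank values 0..n-1 (ties ignored in this sketch)
    order = np.argsort(a)
    r = np.empty_like(order)
    r[order] = np.arange(len(a))
    return r

def agreement_stats(pred, mos):
    """MAE, RMS error, Pearson CC, and Spearman ROCC between
    predicted quality scores and mean opinion scores."""
    pred = np.asarray(pred, dtype=float)
    mos = np.asarray(mos, dtype=float)
    mae = np.mean(np.abs(pred - mos))
    rms = np.sqrt(np.mean((pred - mos) ** 2))
    cc = np.corrcoef(pred, mos)[0, 1]
    # Spearman ROCC = Pearson correlation of the rank vectors
    rocc = np.corrcoef(rank(pred), rank(mos))[0, 1]
    return mae, rms, cc, rocc
```

The outlier ratio (OR) additionally needs the per-image MOS standard deviations, which this sketch does not assume.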
Table 3. Reconstruction error at different multi-scale levels (T_leaf = 100, q = 5%).
in Multi-Scale Reconstruction of Implicit Surfaces with Attributes from Large Unorganized Point Sets
2004
"... In PAGE 9: ... Table 3 shows the two error measures at 10 different levels compared to the highest-resolution reconstruction for three models. Figures 10 and 11 show the visual quality of our multi-scale reconstruction method at different levels.... ..."
Cited by 1