Results 1 - 10 of 293,325

Table 1: Results of recovering the synthetic deformations. Column headers: Experiment, Method, Mono-modality registration, Multi-modality registration.

in Symmetric Image Registration
by Peter Rogelj, Stanislav Kovacic 2003
"... In PAGE 8: ... Because the original images MRI-T1 and MRI-PD were in register, measure SMAD and original MRI-T1 image were used also for evaluation of the multi-modal registration results (TB). The results are tabulated in Table1 . In all the cases the symmetric approach performed best with regard to the registration correctness and registration consistency, and in general it was also best according to the image similarity.... ..."
Cited by 5

Table 1: Results of recovering the synthetic deformations. Column headers: Experiment, Method, Mono-modality, Multi-modality.

in Symmetric Image Registration
by Peter Rogelj, Stanislav Kovačič
"... In PAGE 14: ... Because the original MRI-T1 and MRI-PD images were registered, the measured CC and the original MRI-T1 image were also used to evalu- ate the multi-modality registration results (TB). The results are tabulated in Table1 . In all cases the symmetric approach performed the best in terms of the registration correctness and the registration consistency, while the mea- surement of final image similarity gave similar results for all three registration approaches (considering the average initial image similarity S0 = 0.... ..."

Table 1: The multi-modal features extracted from a shot (person) to be classified. Column headers: Modality, Feature, Description.

in Multi-modality analysis for person type classification in news video
by Jun Yang, Alexander G. Hauptmann 2005
"... In PAGE 3: ... Our preliminary experiment shows that all the name types have been predicted correctly on our test dataset (see Section 4). Many features are derived from the predicted name types (see Table1 ), among which is the presence (or absence) of reporter names and subject names in a given news story. If a story does not have a reporter apos;s name, shots in that story are unlikely to contain reporters since they rarely appear unnamed.... In PAGE 6: ... There are totally 498 people (or monologue shots) in the test data, among which 247 are news subjects, 186 are anchors, and the rest are reporters. The multi-modal fea- tures used in our approach are summarized in Table1 . All the features have been normalized into the range [0, 1] before being fed into the SVM classifier.... ..."
Cited by 2
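
The snippet above min-max normalizes all features into [0, 1] before SVM classification. A minimal sketch of that preprocessing step using scikit-learn's SVC; the data is synthetic and the feature/label semantics are placeholders, not the paper's actual features:

    import numpy as np
    from sklearn.svm import SVC

    def minmax_normalize(X: np.ndarray) -> np.ndarray:
        """Scale each feature column into [0, 1] (constant columns map to 0)."""
        lo, hi = X.min(axis=0), X.max(axis=0)
        span = np.where(hi > lo, hi - lo, 1.0)
        return (X - lo) / span

    # Synthetic stand-in for the multi-modal shot features and person-type labels.
    rng = np.random.default_rng(1)
    X = rng.random((498, 8)) * rng.integers(1, 100, 8)   # 498 shots, 8 features
    y = rng.integers(0, 3, 498)                          # 0=subject, 1=anchor, 2=reporter

    clf = SVC(kernel="rbf").fit(minmax_normalize(X), y)
    print(clf.predict(minmax_normalize(X))[:5])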

Table 10. Results of individual modalities and the multi-modal system.

in Multi-modal Person Identification in a Smart Environment
by Hazım Kemal Ekenel, Mika Fischer, Qin Jin, Rainer Stiefelhagen
"... In PAGE 7: ... It is named as cumulative ratio of correct matches (CRCM) and the weighting model is computed by taking the cumulative sum of the number of correct matches achieved at a confidence difference between the best two matches. In Table10 , the false identification rates of the individual modalities and the multi-modal system are listed. The multi-modal system included in the table uses min-max normalized confidence scores, CRCM modality weighting and the sum rule.... ..."

Table 1: Verification results for single modalities. In the next section we present three simple classifiers coming from the field of (statistical) pattern recognition that use the transformation principle presented in Section to perform our multi-modal decision fusion.

in Combining Vocal And Visual Cues In An Identity Verification System Using NN Based …
by Patrick Verlinde
"... In PAGE 3: ... All the experiments have been performed using the following three experts: a pro le image expert based on a template matching method [4]; a frontal image expert based on a robust correlation method [2]; a vocal expert based on second order statistics [1]. The performances achieved by the three mono-modal identity veri cation systems we have used in our experiments are given in Table1 . The results have been obtained by adjusting the threshold at the EER on the training set and applying this threshold as an a priori threshold on the test set.... In PAGE 4: ...0 0.1 Table 2: Veri cation results for the k-NN classi er Comparing these results with those obtained in Table1 shows the bene ts of combining several individual modalities into a multi-modal system, even when using such a simple fusion module as in this case. It can also be observed that in our typical application the number of neighbors k does not play an im- portant role and the best results are obtained for k=1.... ..."

Table 7: Concept clusters for multi-modality fusion. Column headers: ID, Concepts.

in Ontology-enriched semantic space for video search
by Xiao-yong Wei 2007
"... In PAGE 9: ... We empirically set the number of concept clusters as 14 and apply k-means to divide the concepts into 14 partitions. We add in one extra cluster for name entity resulting in 15 concept clusters as listed in Table7 . With the clusters, we learn the relation matrix R in Eqn (11) using the TRECVID 2005 develop- ment set.... ..."
Cited by 2
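
The entry above partitions concepts into 14 clusters with k-means and adds one hand-made cluster for named entities. A minimal sketch of that grouping step using scikit-learn's KMeans; the concept vectors are random placeholders rather than the paper's actual concept representations:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    concept_vectors = rng.random((100, 32))      # placeholder concept embeddings
    concept_names = [f"concept_{i}" for i in range(100)]

    km = KMeans(n_clusters=14, n_init=10, random_state=0).fit(concept_vectors)
    clusters = {cid: [] for cid in range(14)}
    for name, cid in zip(concept_names, km.labels_):
        clusters[cid].append(name)
    clusters[14] = ["<named-entity>"]            # extra manual cluster -> 15 total
    print(len(clusters), len(clusters[0]))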

Table 7 shows the average precisions for multi-modal feature detectors in four semantic concept categories. Comparing the results to the individual visual and audio-based detectors shows that for two of the concepts the multi-modal features provide the best overall result.

in Trecvid 2003 experiments at mediateam oulu and vtt
by Mika Rautiainen, Jani Penttilä, Paavo Pietarila, Kai Noponen, Matti Hosio, Timo Koskela, Satu-Marja Mäkelä, Johannes Peltola, Jialin Liu, Timo Ojala
"... In PAGE 7: ... Table7 . Average precision for the four semantic concepts.... ..."
Cited by 1
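
The entry above reports average precision per concept, TRECVID's standard retrieval metric. A minimal sketch of computing non-interpolated average precision from a ranked shot list and a relevance ground-truth set; the ranking below is a toy example:

    def average_precision(ranked_ids: list[str], relevant: set[str]) -> float:
        """Mean of precision values at each rank where a relevant shot appears."""
        hits, precision_sum = 0, 0.0
        for rank, shot_id in enumerate(ranked_ids, start=1):
            if shot_id in relevant:
                hits += 1
                precision_sum += hits / rank
        return precision_sum / len(relevant) if relevant else 0.0

    # Toy ranking: relevant shots at ranks 1 and 3 -> AP = (1/1 + 2/3) / 2.
    print(average_precision(["s1", "s2", "s3", "s4"], {"s1", "s3"}))  # ~0.833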

Table 1. User verification results expressed as equal error rates (%), over three systems (face only, speaker only, and multi-modal fusion), using two different face detection scenarios.

in Multi-Modal Face and Speaker Identification for Mobile Devices
by Timothy J. Hazen, Eugene Weinstein, Bernd Heisele, Alex Park, Ji Ming 2006
"... In PAGE 9: ... 3.5 Experimental Results Table1 shows our user veri cation results for three systems (face ID only, speaker ID only, and our full multi-modal system) under two di erent face detection conditions. The results are reported using the equal error rate met- ric.... ..."
Cited by 1

Table 5: Test set results (%) on the multi-modal data using different methods. A: auditory data only. V: visual data only. B: both modalities used.

in Supervised Learning Without Output Labels
by Ramesh R. Sarukkai 1994
Cited by 1

Table 1: Corpora used for various NIST Speaker Recognition evaluations. Abbreviations: lim for limited-data, ext for extended-data, var for limited and extended combined, mm for multi-modal, and p for phase.

in NIST Speaker Recognition Evaluation Chronicles
by Mark Przybocki, Alvin Martin 2004
"... In PAGE 7: ...) Type of Transmission Training Sides Test Sides Landline 257 580 Cellular 178 361 Cordless 176 219 Other/unknown 5 16 Table 9: Phone transmission types of the training and test conversation sides for the core test condition included in the NIST 2004 evaluation data. Type of Handset Training Sides Test Sides Speakerphone 37 67 Headset 107 116 Ear-bud 42 63 Regular (hand-held) 452 914 Other/unknown 5 16 Table1 0: Phone handset types of the training and test conversation sides for the core test condition included in the NIST 2004 evaluation data. The extended data tests of previous evaluations provided (errorful) word transcripts of the speech data generated by an ASR system.... ..."
Cited by 6