Results 1 - 10 of 90,079

Table 2. Sample stimuli used in the on-line cross-modal naming experiments.

in Working With Limited Memory: Sentence Comprehension in Alzheimer's Disease
by Daniel Kempler, Amit Almor, Maryellen C. MacDonald, Elaine S. Andersen
"... In PAGE 7: ... Here we report only the sensitivity effects for the two populations (separately for grammatical, semantic and discourse items); these were calculated by subtracting the mean normalized response time (for all subjects from each population) for good continuations from the mean response times for bad continuations. Examples of the stimuli from each of the three constraint types (grammatical, semantic, and discourse) are shown Table2 . A detailed presentation of the methodology and the tasks can be found in Kempler et al.... In PAGE 8: ... Grammatical Constraints Forty grammatical sentences were constructed and then altered to create ungrammatical counterparts by the substitution, addition or deletion of one word. Two types of grammatical violations were included: (1) subject noun-verb agreement errors and (2) errors of transitivity (see Table2 ). Preliminary analysis of the subject-verb and transitive sentence structures did not demonstrate any effect due to construction type (F lt; 1), so the data from both constructions were combined for analysis.... In PAGE 8: ... In half of the sentences, the final word was grammatically and semantically appropriate. In the other half, the final word was anomalous due to a semantic (or pragmatic) violation ( Table2 ). The procedure and subject populations were identical to the grammatical experiment described above.... In PAGE 11: ... Andersen, 1998). Short subject-verb agreement stimuli were taken from the subject-verb agreement items described above ( Table2 ) and long stimuli were constructed by adding an intervening 10-15 word clause between the subject and the verb. For instance, the subject-verb agreement item The young girl was/*were was transformed into a longer item The young girl, who improved greatly every day because of the excellent teaching and good books was/*were.... In PAGE 13: ... This is also true for the on-line discourse items: in order to determine whether the pronoun him or them was coherent in the discourse, patients would have to understand the entire discourse. However, full processing of the sentences was not necessary for the on-line grammar and semantic items: in the case of our on-line grammatical and semantic items ( Table2 ), patients could perform well on the task (i.... In PAGE 15: ...asks. We also do not think that on-line tasks are inherently preferable. For example, we suspect that our on-line cross-modal naming task may be very unnatural, insofar as it requires substantially less processing than normal language comprehension. That is, in the case of our short grammatical and semantic items in the on-line experiments ( Table2 ), patients could perform well on the task by attending to only minimal information in the sentence concerning the relationship between verbs and nouns. As stated in the body of this chapter, simple grammatical relations (e.... ..."

Table 2. Comparison of different cross-modal association methods for the retrieval of explosion images. In the above experiments on cross-modal retrieval, only the 8 most important feature dimensions in the transformed feature space are used. In other words, we only need 16 features per image, 8 of which are visual features and the rest are the corresponding audio features. Compared to the original data volume and feature dimension size, this is a significant saving in space for the database.

in Multimedia content processing through cross-modal association
by Dongge Li 2003
Cited by 10
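The 16-feature representation mentioned in the caption above (8 transformed visual dimensions plus 8 corresponding audio dimensions) can be pictured roughly as follows. This is only a sketch under the assumption that projection matrices for each modality have already been learned by a cross-modal association method such as CFA or CCA; the variable and function names are hypothetical.

```python
import numpy as np

def compact_index_entry(visual_feat, audio_feat, W_visual, W_audio, k=8):
    """Keep only the top-k transformed dimensions per modality (k=8 gives 16 numbers per image).
    W_visual, W_audio: (original_dim x >=k) projection matrices learned beforehand by a
    cross-modal association method; hypothetical names, not the paper's code."""
    v = visual_feat @ W_visual[:, :k]   # 8 visual dimensions in the transformed space
    a = audio_feat @ W_audio[:, :k]     # 8 corresponding audio dimensions
    return np.concatenate([v, a])       # 16-dimensional entry stored in the database
```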

Table 7: Example of Combinations in Cross-modal Expressive Strength

in Cross-modal Coordination of Expressive Strength between Voice and Gesture for Personified Media
by Tomoko Yonezawa, Noriko Suzuki
"... In PAGE 6: ... Procedures and Conditions (Video Stimulus): The video stimuli include two types of combinations of expressive strength: (i) pose changes from 0 to 1 when the singing voice changes from 0 to 1; and (ii) the reverse order of (i) with respect to the singing voice. Considering the order effect, we also prepared video stimuli in which: (iii) pose changes from 1 to 0 when the singing voice changes from 1 to 0; and (iv) the reverse order of (iii) with respect to the singing voice ( Table7 ). These stimuli are understood to reflect changes in expressive strength along the diagonal slopes of Figs.... ..."

Table 3. Accuracy of different talking head analysis methods. For both cross-modal retrieval and talking head analysis, the feature used for CFA can be either aligned pixel intensities or eigenface values. CCA can only use eigenface values as input due to its limitation mentioned in Section 4. Our experiments on the CFA method show that the use of eigenfaces slightly degrades the performance.

in Multimedia content processing through cross-modal association
by Dongge Li 2003
"... In PAGE 7: ... Two different types of processing are used for each of the methods discussed earlier: off-line supervised training where the transformation matrices are generated before hand using groundtruth data, and on- line dynamic processing where the transformation matrices are generated on the fly using the input testing video directly. Table3 provides the performance comparison of different methods. Overall, CFA achieves 91.... ..."
Cited by 10
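CCA is one of the association methods compared in this entry (alongside CFA). The sketch below shows generic CCA-based cross-modal matching using scikit-learn with 8 shared dimensions, in the spirit of the excerpts above; it assumes eigenface and audio features have already been extracted and is not the paper's implementation.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Generic CCA-based cross-modal matching sketch (not the paper's CFA implementation).
# Rows of X (eigenface features) and Y (audio features) are aligned training samples;
# the random data below is a placeholder for real extracted features.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 40))   # placeholder eigenface feature vectors
Y = rng.standard_normal((200, 24))   # placeholder audio feature vectors

cca = CCA(n_components=8)            # keep 8 correlated dimensions, as in the excerpts above
cca.fit(X, Y)
Xc, Yc = cca.transform(X, Y)         # projections of both modalities into the shared space

# Retrieve the audio clip whose projection is most similar (cosine) to a query face.
query = Xc[0]
scores = (Yc @ query) / (np.linalg.norm(Yc, axis=1) * np.linalg.norm(query) + 1e-12)
best_match = int(np.argmax(scores))
```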

Table A2. Cross-modal prefrontal population descriptive statistics, by memoranda

in Variability in Neuronal Activity in Primate Cortex During Working Memory Tasks
by M. Shafi, Y. Zhou, J. Quintana, C. Chow, J. Fuster, M. Bodner

Table 4. Part of the cross-modality matrix (initial activity of step 1b)

in 6th ERCIM Workshop "User Interfaces for All", Long Paper: A Structured Contextual Approach to Design for All
by Chris Stary

Table 1: Example primes and targets for Experiment 2 - Cross-modal priming. Primes and continuations following the sentence: The soldier saluted the flag with his...

in Ambiguity and Competition in Lexical Segmentation
by Matt Davis, William D. Marslen-Wilson, M. Gareth Gaskell 1997
Cited by 3

Table 1. Comparison of Uni-modal and Cross Modal Semantic Video Concept Detection Results

in Cross-Modal Learning - The Learning Methodology Inspired by Human's Intelligence
by Bo Zhang, Dayong Ding, Ling Zhang 2006

Table 1: Performance results on word segmentation, word discovery, and semantic accuracy averaged across six speakers. Results shown for cross-modal learning using CELL and for acoustic-only learning. Columns: Segmentation Accuracy (M1), Word Discovery (M2), Semantic Accuracy (M3).

in Learning visually grounded words and syntax of natural spoken language
by Deb Roy 2001
"... In PAGE 11: ... This acoustic-only model may be thought of as a rough approximation to a minimum description length approach to finding highly repeated speech patterns which are likely to be words of the language. Results of the evaluation shown in Table1 indicate that the cross-modal algorithm was able to extract a large proportion of English words from this very difficult corpus (M2), many associated with semantically correct visual models (M3). Typical speech segments in the lexicons included names of all six objects in the study, as well as onomatopoetic sounds... ..."
Cited by 8