Results 11 - 20 of 287,889

Table 4: Anthropometric parameter values for the upper limb.

in Cerebellar Learning for Arm Movement Control
by J. L. Contreras-Vidal, Stephen Grossberg 1997
"... In PAGE 19: ... The product of viscosity and angular velocity is importantinachieving stability of the limb. We used typical estimates of segment masses #28m i #29 and segment lengths #28l i #29 and inertial characteristics from anthropometric data #28see Table4 #29 of Zatsirosky and Seluyanov #281983#29 and Karst and Hasan #281991#29. In our simulations, the shoulder and the elbow are restricted to one rotational degree of freedom #28#0Dexion-extension#29.... ..."

Table 5. Qualitative summary of answers to the open question: Do you think that it is interesting that in human-machine interactions speech synthesisers produce a more human-like voice?

in Filled pauses in speech synthesis: towards conversational speech.
by Jordi Adell, Antonio Bonafonte, David Escudero
"... In PAGE 7: ... In addition to these five questions an open question has been included in the test. It was: Do you think that it is interesting that in human-machine interac- tions speech synthesisers produce a more human-like voice? and answers to this question have been summarised in Table5 . These comments, thus, support our claim that talking speech synthesis is worth further research.... ..."

Table 1: Equal error rate, human-machine correlation, and cross correlation at the phone level for the two detection methods studied. Weighted averages of the various scoring measures are shown at the bottom.

in Automatic Detection Of Phone-Level Mispronunciation For Language Learning
by Horacio Franco, Leonardo Neumeyer, María Ramos, Harry Bratt 1999
"... In PAGE 3: ... To some degree, a similar but complementary effect was observed for the cross correlation measure; that is, for the phone classes with relatively high detection error rate, rela- tively high cross correlation values could be obtained by just labeling every phone utterance as mispronounced. In Table1 we show the EER, the correlation coefficient, and the cross correlation measure for each phone class and for both detection methods. Weighted averages overall all the phones are also shown.... ..."
Cited by 3

Table 1. A comparison between human and machine reading. Letters in the first column refer to corresponding paragraphs in the text.

in Reading Systems: An introduction to Digital Document Processing
by Lambert Schomaker
"... In PAGE 4: ... Let us take a look at the differences between human and machine reading. In Table1 , a comparison is made between aspects of the human and artificial reading system. We will focus on each of the numbered items in this table.... In PAGE 23: ... In these latter applications, human routine labour can be saved. Summary on human versus machine reading As can be concluded from Table1 , both human and machine read- ing have powerful characteristics and it cannot be denied that the machine has attractive performances in terms of speed and volume, especially in the case of constrained-shape and constrained-content text images. Still, the robustness of the human reading system poses a challenge to system developers.... ..."

Table 5: Summary of agreement between human and machine partitionings, using κ_B

in A Comparison of Human and Machine Assessments of Image Similarity for the Organization of Image Databases, Scandinavian Conference on Image Analysis, June 9–11, 1997, Lappeenranta, Finland
by David McG. Squire, Thierry Pun
"... In PAGE 6: ... Secondlyn2c and positivelyn2c it seems to indicate that human measures of image similarity are partially learntn3a after prolonged interaction with an image database systemn2c the human begins to judge image similarity in a similar way. Table5 summarizes n14 B for the human and machine partitionings. Using n14 B n2c the agreement between machine and human partitionings is positive in all cases.... ..."

Table 1. Comparison of relative strengths of human and machine in diverse aspects of visual pattern recognition

in Human-computer interaction for complex pattern recognition problems, to appear
by Jie Zou, George Nagy 2005
"... In PAGE 3: ...3 for content-based image retrieval, and Computer Assisted Visual InterActive Recognition (CAVIAR) for visual pattern classification. Our conjectures on what aspects of visual pattern recognition are easy and difficult for humans and computers are set forth in Table1 . The remainder of this chapter attempts to justify some of these conjectures and explores their implications for the design of pattern recognition systems.... ..."
Cited by 1

Table 3. Dysfluencies in human-human and human-machine dialogs and the performance of each classifier in the cascade.

in SEGMENTING SPOKEN LANGUAGE UTTERANCES INTO CLAUSES FOR SEMANTIC CLASSIFICATION
by Narendra K. Gupta
"... In PAGE 4: ... There is enough anecdotal evidence that people speak differently when they know they are talking to a machine. Table3 compares4 5 the occurrence of various phenomena in human-human conversations against human-machine conversations. It is interesting to note that users do not continue sentences across turns, use fewer edits, dis- course markers and explicit edits, but use the coordinating con- junctions and the filled pauses with similar frequencies.... In PAGE 4: ... In previous work, we used the human-human utterances in the Switchboard corpus to evaluate the performance of the clausifier. However, owing to the differences presented in Table3 and the fact that there is no publicly available human-machine utterances annotated with dysfluencies and semantic labels, we are compelled to annotate a corpus of our own. We have annotated a relatively small corpus of 4000 transcribed utterances from human-computer dialogs (HMIHY) with all the dysfluencies, restarts and repairs, segment and clause boundaries.... In PAGE 4: ...1. Performance of Clausifier In Table3 , we present the F-values of the individual classifiers in the cascade shown in Figure 1. Identification of explicit ed- its, discourse markers and coordinating conjunctions is an easier 4In addition to the differences shown in the table, human-human con- versations contain back-channels that are absent in human-machine con- versations.... ..."

Table 29. Upper Limb Fracture, Percent of Claims and Costs

in NSW Motor Accidents Scheme CTP Claim Frequency, Injuries and Costs
by Prepared By The
"... In PAGE 54: ... Impact of Upper Limb Fracture Claims on the Motor Accidents Scheme In total, claims involving an upper limb fracture represented 7% of claims and 13% of the total incurred cost of the scheme. Table29 shows the percentage of claims and percentage of total incurred costs accounted for by all upper limb injury claims each year. Claim Cost As at June 1995, 3,413 upper limb fracture claims were finalised (60%).... ..."

Table 30. Average Payment, Finalised Upper Limb Fracture Claims

in NSW Motor Accidents Scheme CTP Claim Frequency, Injuries and Costs
by Prepared By The
"... In PAGE 54: ... Claim Cost As at June 1995, 3,413 upper limb fracture claims were finalised (60%). Table30 shows the average payment made on these finalised claims according to year of accident and time taken to make the final payment. Table 30.... ..."

Table 2. Statistical errors in motion estimation of the upper limb (units: cm)

in Inertial measurements of upper limb motion, Med Biol Eng Comput (2006) 44:479–487, DOI 10.1007/s11517-006-0063-z
by Huiyu Zhou, Huosheng Hu, Yaqin Tao