CiteSeerX

Results 1 - 10 of 11,005

Table 1: Results from Survey of Multimodal Interfaces - Part I: Domain, Input/Output modalities, and Cooperation

in unknown title
by unknown authors 2000
"... In PAGE 13: ... Enhancing the output with synthetic agents is expected to increase the realism and the usability of such translation tools. To illustrate the discussion of technology and evaluation of multimodal applications that is to follow, Table1 and 2 summarize the results of our survey of the literature on multimodal applications relevant to the following aspects: input and output modalities, domain/application, types of modality cooperation, type of multimodal fusion (if applicable), multimodal architecture, qualitative and quantitative evaluation (if performed). The abbreviations used in the table are de ned as follows: Input Modalities: ASR = Automatic Speech Recognition, GR = Gesture Recognition, H = Hand- writing Recognition, P = Pointing, ET = Eye Tracker, GT = Gaze Tracking, K = Keyboard, 3D C = 3D Controller Device, OR = Object Recognition, SV = Speaker Veri cation, FR = Face Recognition Output modalities: GUI = Graphic User Interface, SS = Speech Synthesis, FS = Face Synthesis, A = Audio Feedback, H = Haptic Feedback, VC = Video Conferencing Cooperation: E = Equivalence, C = Complementarity, R = Redundancy, CC = Concurrency, S = Specialization, T = Transfer Architectures: MMI - [323], PAC-Amodeus - [224], OAA - [214] For illustration we brie y describe two multimodal applications for which images were available to the authors.... ..."
Cited by 13
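
The abbreviation legend in the excerpt above maps naturally onto a small lookup table for anyone tabulating the survey entries. A minimal sketch (hypothetical code, not from the paper; only the input-modality portion is shown):

    # Hypothetical encoding of the survey's input-modality legend (not from the paper).
    INPUT_MODALITIES = {
        "ASR": "Automatic Speech Recognition",
        "GR": "Gesture Recognition",
        "H": "Handwriting Recognition",
        "P": "Pointing",
        "ET": "Eye Tracker",
        "GT": "Gaze Tracking",
        "K": "Keyboard",
        "3D C": "3D Controller Device",
        "OR": "Object Recognition",
        "SV": "Speaker Verification",
        "FR": "Face Recognition",
    }

    def expand(codes):
        """Expand a comma-separated abbreviation string such as "ASR, P"."""
        return [INPUT_MODALITIES.get(c.strip(), c.strip()) for c in codes.split(",")]

    print(expand("ASR, P"))  # ['Automatic Speech Recognition', 'Pointing']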

Table 1: Synthesis results for PCFU designs with varying numbers of inputs and outputs.

in Exploring the Design Space of LUT-based Transparent Accelerators
by Sami Yehia, Nathan Clark, Scott Mahlke, Krisztian Flautner 2005
"... In PAGE 7: ... That is, the baseline design already supports multiple outputs, so very little additional complexity is needed to support them. The effects of adding inputs and outputs to a PCFU can be seen in the synthesis results in Table1 . In this table, each PCFU design is represented by a 4-tuple specifying the number of inputs, the number of outputs, the number of supported additions, and what shift values (if any) are supported.... In PAGE 7: ... This essentially means that adding an output should increase the area of the PCFU in a roughly linear fashion, and have a small or no effect on the latency. These trends can be seen in Table1 . Mov- ing from four inputs and two outputs to four inputs and three out- puts increases the area by 0.... In PAGE 7: ...e.g., the carry-ins) having to drive a larger number of cells. One confusing trend in Table1... ..."
Cited by 3
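
The 4-tuple design-point encoding described in the excerpt maps naturally onto a small record type. A hypothetical sketch (the field names are mine, not the paper's, and the additions/shifts values below are unspecified in the excerpt):

    # Hypothetical record for the 4-tuple (inputs, outputs, additions, shifts)
    # described in the excerpt above.
    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class PCFUDesign:
        inputs: int               # number of inputs
        outputs: int              # number of outputs
        additions: int            # number of supported additions
        shifts: Tuple[int, ...]   # supported shift values, () if none

    # The two design points compared in the excerpt (additions/shifts are not
    # given there; 0 and () are placeholders only):
    base = PCFUDesign(inputs=4, outputs=2, additions=0, shifts=())
    wider = PCFUDesign(inputs=4, outputs=3, additions=0, shifts=())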

Table 1: Results from Survey of Multimodal Interfaces - Part I: Domain, Input/Output modalities, and Cooperation

in unknown title
by unknown authors
"... In PAGE 14: ... Enhancing the output with synthetic agents is expected to increase the realism and the usability of such translation tools. To illustrate the discussion of technology and evaluation of multimodal applications that is to follow, Table1 and 2 summarize the results of our survey of the literature on multimodal applications relevant to the following aspects: input and output modalities, domain/application, types of modality cooperation, type of multimodal fusion (if applicable), multimodal architecture, qualitative and quantitative evaluation (if performed). In the \Fusion quot; column, the merging of the modalities can be done either at a signal, intermediate or semantic level.... In PAGE 44: ... Add facilities for reviewing of speech output messages for enhancement of long-term retention. Provide facilities for interrupting and resuming [21] spoken output at the beginning of a semantic unit when resuming synthesis which has been interrupted Speech output may lock people out of the interaction for its duration [30] (static visual displays can be sampled when convenient) Table1 0: Summarizing recommendations for interaction Avoid simulaneous video and speech on di erent subjects [21] Speech output can be used to elaborate, summarise, elucidate [7] graphical messages Use speech output features such as speaker apos;s voice to [21] ease the link between the spoken message and other media (i.e.... In PAGE 44: ... If the label is complex, reading speed will be similar to that for speech track to pronounce it Development tool should enable the speci cation of [13] synchronous interaction, asynchronous interaction, ne-grain temporal and spatial relationships (tight [379] and loosely coupled, synchronise at speci c time points) and combinations of each through the use of conjunctive and disjunctive operators. Redundancy between speech recorded messages and [395] textual output may enable faster learning of the interface Redundancy between speech recorded messages and [395] textual output may annoy expert user Redundancy between speech recorded messages and text [186] may bring attention of the user to the textual message Redundancy between speech recorded messages and text [186] may enable the user to achieve a task faster than with speech only but slower than with the text only Redundancy between speech recorded messages and text [186] may drive the user to achieve a task with more errors than with text only or speech only Information provided by speech output should be [21] planned to be presented in another modality if necessary Table1 1: Summarizing recommendations for combination of speech output and other media... In PAGE 47: ...3% 51.7% Table1 2: Results in word error (adapted from [62]) Anthropometric data: Brunelli and Poggio [70] use anthropometric facial features and knowl- edge to extract automatically facial features. Eye-to-eye axis and the distance between the eyes are detected and used to achieve view independency.... In PAGE 57: ...Action Units anger AU2 + AU4 + AU5 + AU10 + AU20 + AU24 disgust AU4 + AU9 + AU10 + AU17 embarrassment AU12 + AU24 + AU51 + AU54 + AU64 fear AU1 + AU2 + AU4 + AU5 + AU7 + AU15 + AU20 + AU25 happiness AU6 + AU11 + AU12 + AU25 sadness AU1 + AU4 + AU7 + AU15 surprise AU1 + AU2 + AU5 + AU26 + rotate-jaw Table1 3: Prototype universal facial expressions of emotions and their corresponding FACS action units Facial expressions constantly accompany human conversation (e.... 
In PAGE 69: ...Name AU Name AU1 Inner Brow Raiser AU31 Jaw Clencher AU2 Outer Brow Raiser AU32 Lip Bite AU4 Brow Lowerer AU33 Cheek Blow AU5 Upper Lid Raiser AU34 Cheek Pu AU6 Cheek Raiser amp; Lid Compressor AU35 Cheek Suck AU7 Lid Tightener AU36 Tongue Bulge AU8 Lips Toward Each Other AU37 Lip Wipe AU9 Nose Wrinkler AU38 Nostril Dilator AU10 Upper Lip Raiser AU39 Nostril Compressor AU11 Nasolabial Furrow Deepener AU41 Lip Droop AU12 Lip Corner Puller AU42 Slit AU13 Sharp Lip Puller AU43 Eyes Closed AU14 Dimpler AU44 Squint AU15 Lip Corner Depressor AU45 Blink AU16 Lower Lip Depressor AU46 Wink AU17 Chin Raiser AU51 Head Turn Left AU18 Lip Pucker AU52 Head Turn Right AU19 Tongue Show AU53 Head Up AU20 Lip Stretcher AU54 Head Down AU21 Neck Tightener AU55 Head Tilt Left AU22 Lip Funneler AU56 Head Tilt Right AU23 Lip Tightener AU57 Head Forward AU24 Lip Presser AU58 Head Back AU25 Lips Part AU61 Eyes Turn Left AU26 Jaw Drop AU62 Eyes Turn Right AU27 Mouth Stretch AU63 Eyes Up AU28 Lip Suck AU64 Eyes Down AU29 Jaw Thrust AU65 Wall-eye AU30 Jaw Sideways AU66 Cross-eye Table1 4: List of AUs... ..."
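
Table 13's emotion-to-Action-Unit prototypes are a straightforward mapping, encoded below exactly as given in the excerpt; the code itself is an illustrative sketch, not from the paper.

    # Table 13's emotion prototypes as AU lists (data from the excerpt above).
    EMOTION_AUS = {
        "anger":         [2, 4, 5, 10, 20, 24],
        "disgust":       [4, 9, 10, 17],
        "embarrassment": [12, 24, 51, 54, 64],
        "fear":          [1, 2, 4, 5, 7, 15, 20, 25],
        "happiness":     [6, 11, 12, 25],
        "sadness":       [1, 4, 7, 15],
        "surprise":      [1, 2, 5, 26],  # plus a jaw rotation, per the table
    }

    # A few AU names from Table 14, for readable output:
    AU_NAMES = {1: "Inner Brow Raiser", 2: "Outer Brow Raiser", 4: "Brow Lowerer",
                5: "Upper Lid Raiser", 6: "Cheek Raiser & Lid Compressor",
                11: "Nasolabial Furrow Deepener", 12: "Lip Corner Puller",
                25: "Lips Part", 26: "Jaw Drop"}

    def describe(emotion):
        """List the named action units making up an emotion prototype."""
        return [AU_NAMES.get(au, f"AU{au}") for au in EMOTION_AUS[emotion]]

    print(describe("happiness"))
    # ['Cheek Raiser & Lid Compressor', 'Nasolabial Furrow Deepener',
    #  'Lip Corner Puller', 'Lips Part']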

Table 1. Input-output combinations

in A Modular Approach for Model-based Dependability Evaluation of a Class of Systems
by Stefano Porcarelli, Felicita Di Giandomenico, Paolo Lollini, Andrea Bondavalli 2004
"... In PAGE 4: ... 2. How a Generic Component Interacts with Others input-output combinations are summarized in Table1 . The input/output parameters characterizing each component are instead summarized in Table 2, where pnoOutCorr and pnoOutIncorr are the probabilities that the output... ..."
Cited by 1

Table 6: Input Output

in Finding Structure in Time
by Jeffrey L. Elman 1990
Cited by 1175

Table 8: The results for the input-output matrices

in A New Heuristic Algorithm Solving The Linear Ordering Problem
by Stefan Chanas, Przemyslaw Kobylanski
"... In PAGE 11: ...75 0.25 In Table8 we present the results obtained for 38 examples of real problems (input- output matrices), in which the number of objects to be ordered did not exceed m = 60. For each example there are three numbers in the table: OP T - the optimal value of the objective function, opt1 - the value of the objective function obtained in only one trial with a randomly generated initial solution,... ..."

Table 1: Possible input-output pairings

in NONLINEAR CONTROL SYSTEM DESIGN USING VARIABLE COMPLEXITY MODELLING AND MULTIOBJECTIVE OPTIMIZATION
by Valceres V. R. E Silva, Peter J. Fleming, Wael Khatib
"... In PAGE 3: ... These variables can be used to provide closed-loop control of the input vari- ables. Table1 shows the possible combinations of inputs and a0a2a1a4a3a6a5 a1a6a7 a8 a9a11a10a13a12 a14a4a15a16a12 a17a6a18 a19 a7a13a1a6a20a13a17a6a18 a21a4a22 a23 a21a4a22 a24 a25a27a26 a28a6a29a30 a29a31a33a32 a34a33a31a33a35 a35a16a29a28a37a36a6a38a28a33a36 a39a40a33a29a28a33a41a6a42a43a26a33a44a45a33a28 a46a16a36a6a40 a28a37a36a6a40 a47a6a29a28 a48 a44a47a4a49a51a50a16a38a28a33a52 a52a13a26a33a38a28 a53a16a54 a31a33a31a6a29 a53a55a54 a28a56a28a33a45 a57a16a40 a47a4a44a40 a28a58a50a16a38a28a33a52 a52a13a26a33a38a28 a59a33a36a33a41 a44a31 a60 a61a16a54 a36a56a52 a52a63a62a27a36a33a64a13a49a37a34a65a31a6a66 a67a43a26a33a38a68a33a44a40 a28a58a60a16a29a36a65a45a33a28 a67a69a28a6a70a37a54 a28a6a38a36a33a41 a26a33a38a28 a71 a31a33a32a72a50a16a38a28a33a52 a52a13a26a33a38a28 a53a55a54 a31a33a31a6a29 a53a55a54 a28a33a28a33a45 Figure 1: Conceptual SPEY GTE model available measurable outputs to be controlled. Table 1: Possible input-output pairings... ..."

Table 1: Input/Output Trace Features

in Optimizing Input/Output Using Adaptive File System Policies
by Tara M. Madhyastha , Christopher L. Elford, Daniel A. Reed 1996
"... In PAGE 9: ... Table1 shows the features recognized by our trained neural network. These features correspond directly to planes or regions within the space shown in Figure 4.... ..."
Cited by 5
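
The excerpt describes mapping points in an input/output trace-feature space to qualitative access-pattern regions. A toy illustration of that idea (the feature names and thresholds below are invented; the paper uses a trained neural network, not hand-set rules):

    # Toy stand-in for region classification in a trace-feature space.
    def classify_access_pattern(seq_fraction, mean_request_kb):
        """Map a point in a small feature space to a region label."""
        if seq_fraction > 0.9:
            return "sequential, large" if mean_request_kb >= 64 else "sequential, small"
        return "random, large" if mean_request_kb >= 64 else "random, small"

    print(classify_access_pattern(0.95, 128))  # -> "sequential, large"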