Results 1 - 10 of 165,054

TABLE 3 Transitions Between First Fixations on Packs and Prices

in Measuring the Value of Point-of-Purchase Marketing with Commercial Eye-Tracking Data
by Pierre Chandon, J. Wesley Hutchinson, Scott H. Young

Table 2 Eye Movement Measures for Experiment 2

in unknown title
by unknown authors
"... In PAGE 5: ...53, p H11005 .001, MSE H11005 2,243 (see Table2 ). The pattern, however, was different from that of Exper- iment 1.... In PAGE 6: ... Spillover. As can be seen in Table2 , there were no overall effects of the preview condition on the first fixation after leaving the target word, F1 H11021 1; F2 H11021 1, and no contrasts reached significance (all Fs H11021 2.... ..."

Table 1. A summary of the observed and predicted eye movements. Plus signs indicate correct predictions.

in Cognitive Strategies and Eye Movements for Searching Hierarchical Computer Displays
by Anthony Hornof, Tim Halverson 2003
"... In PAGE 4: ... The figure gives an idea of the similarities and differences between (a) the observed and the predicted and (b) unlabeled search and labeled search. Table1 summarizes comparisons between the observed and predicted eye movements. The comparisons will be elaborated in this section.... ..."
Cited by 15

Table 1. A summary of the observed and predicted eye movements. Plus signs indicate correct predictions.

in Cognitive Strategies and Eye Movements for Searching Hierarchical Computer Displays
by unknown authors
"... In PAGE 4: ... The figure gives an idea of the similarities and differences between (a) the observed and the predicted and (b) unlabeled search and labeled search. Table1 summarizes comparisons between the observed and predicted eye movements. The comparisons will be elaborated in this section.... ..."

Table 1. A summary of the predicted and observed eye movements. Plus signs indicate correct predictions.

in unknown title
by unknown authors
"... In PAGE 1: ... The figure gives an idea of the similarities and differences between (a) the predicted and the observed and (b) unlabeled search and labeled search. Table1 summarizes comparisons between the predicted and observed eye movements. These data, as well as other aspects of this ... ..."

Table 1. A summary of the predicted and observed eye movements. Pluses indicate correct predictions.

in unknown title
by unknown authors
"... In PAGE 3: ... The figure gives an idea of the similarities and differences between (a) the predicted and the observed and (b) unlabeled search and labeled search. Table1 summarizes comparisons between the predicted and observed eye movements which will be elaborated in this section, starting with patterns that persisted across all layouts, not just unlabeled and labeled. Table 1.... ..."

Table 1: Mean errors in rotation and translation of relative eye movements and errors for the hand-eye transformation w.r.t. ground truth for the synthetic data set, as well as computation time

in Vector Quantization Based Data Selection for Hand-Eye Calibration
by Jochen Schmidt, Florian Vogt, Heinrich Niemann 2004
"... In PAGE 5: ... In the following, the real data sets are denoted by Real 1 (270 frames) and Real 2 (190 frames), the synthetic one by Synth (108 frames). Table1 shows errors after hand-eye calibration and computation times for the different methods which have been applied to each data set. Since no ground truth is available when calibrating real data, we cannot give errors between the real hand- eye transformation and the computed one.... In PAGE 6: ... The codebook sizes were: 600 (Real 1), 1100 (Real 2), and 500 (Synth). The last three rows of Table1 show the errors for the hand-eye transformation, since for the synthet- ically generated data set ground truth information was available.... ..."
Cited by 1
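
The codebook-based frame selection mentioned in the snippet above (codebooks of 500 to 1100 representative frames before hand-eye calibration) can be illustrated with a short, hypothetical sketch. The pose-feature representation, the use of scipy's k-means vector quantizer, and the nearest-frame selection rule below are assumptions for illustration, not the authors' implementation.

# Hypothetical sketch of vector-quantization-based frame selection before
# hand-eye calibration. Feature choice and selection rule are assumptions,
# not the exact method of Schmidt, Vogt, and Niemann.
import numpy as np
from scipy.cluster.vq import kmeans

def select_frames(pose_features, codebook_size):
    # pose_features: (n_frames, d) array, e.g. translation and rotation
    # parameters describing each frame's camera pose.
    # codebook_size: number of representative frames to keep.
    feats = np.asarray(pose_features, dtype=float)
    # Build a codebook of representative vectors with k-means.
    codebook, _distortion = kmeans(feats, codebook_size)
    selected = set()
    for centre in codebook:
        # Keep the actual frame closest to each codebook vector.
        idx = int(np.argmin(np.linalg.norm(feats - centre, axis=1)))
        selected.add(idx)
    return sorted(selected)

# Usage: 108 synthetic frames with 6-D pose features, keep about 20.
rng = np.random.default_rng(0)
frames = rng.normal(size=(108, 6))
print(select_frames(frames, 20))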

Table 1. Mean errors in rotation and translation of relative eye movements computed with different hand-eye calibration methods using structure-from-motion as a basis.

in Calibration-Free Hand-Eye Calibration: A Structure-From-Motion Approach
by Jochen Schmidt, Florian Vogt, Heinrich Niemann
"... In PAGE 6: ... The errors are computed by averaging over a set of randomly selected relative movements. Table1 shows residual errors in translation and rotation as well as the computation times for hand-eye calibration on a Linux PC (Athlon XP2600+) including data selec- tion, but not feature tracking and 3-D reconstruction. The latter steps are the same for all methods, and take approximately 90 sec for tracking and 200 sec for 3-D reconstruction.... In PAGE 7: ... After feature tracking and 3-D reconstruction, different hand-eye calibra- tion methods have been evaluated; in all cases the reconstructed camera movement has been used as eye-data. The results shown in Table1 were computed as follows: DQ, scale sep.: Here, the scale factor was estimated rst by solving (4) and (5).... ..."

Table K-9. Correlations Between General Eye Movement Related Variables

in unknown title
by unknown authors 1999

Table 4 Eye-Movement Measures for the Target Words in

in unknown title
by unknown authors 1998
Cited by 11