### Table 1: Motion estimation results.

"... In PAGE 6: ... Again, the dashed line corresponds to the expected performance of the algorithm established using Monte Carlo simulation. Table 1 summarizes the additional motion estimation results obtained from processing the approach and descent sequences using 50 or 500 features and linear or linear+nonlinear motion estimation. For the 50-feature descent sequence and the linear motion estimation algorithm, the average translation error is 0.... In PAGE 6: ... The approach sequence takes slightly longer to process because the larger image requires more time to detect features. The results in Table 1 show that, in general, adding the nonlinear motion estimation algorithm does not improve the motion estimates by much. This is because for vertical descent the motion computed by the linear algorithm is already highly constrained, so its results are very close to those obtained using the nonlinear algorithm.... In PAGE 7: ...motions (e.g., orbital motion) the nonlinear algorithm will result in improved motion estimation and should be used. Table 1 also shows that adding features (50 vs. 500) does not improve motion estimation by much.... ..."
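The linear (closed-form) stage the snippet contrasts with nonlinear refinement can be illustrated with a toy translation-only estimator over matched features; the function name, synthetic feature counts, and noise level below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def estimate_translation(p0, p1):
    """Closed-form (linear) translation estimate: the mean feature displacement.

    p0, p1 : (N, 2) arrays of matched feature positions in two frames.
    """
    return (p1 - p0).mean(axis=0)

# Synthetic check: 50 features displaced by a known translation plus noise.
rng = np.random.default_rng(0)
p0 = rng.uniform(0, 100, size=(50, 2))
true_t = np.array([1.5, -0.5])
p1 = p0 + true_t + rng.normal(scale=0.1, size=p0.shape)

t_hat = estimate_translation(p0, p1)
err = np.linalg.norm(t_hat - true_t)
```

Averaging over more features only shrinks the noise term (roughly as 1/sqrt(N)), which is consistent with the snippet's observation that going from 50 to 500 features helps little once the estimate is already well constrained.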

### Table 2. The mid-level predicates derived from deformation and motion parameter estimates as applied to head motion.

in Recognizing Facial Expressions in Image Sequences Using Local Parameterized Models of Image Motion

1997

"... In PAGE 8: ... These values are mainly dependent on the face size in the image (since it determines the image-motion measurement) and were set empirically from a few sequences. The mid-level representation that describes the head motions is given in Table 2. The planar model of facial motion is primarily used to stabilize the head motion so that the relative motion of the features may be estimated.... ..."
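The mapping from parameter estimates to mid-level predicates can be sketched as simple thresholding; the parameter names, threshold values, and predicate labels below are hypothetical stand-ins (the paper sets its thresholds empirically from the face size), not the paper's actual Table 2.

```python
# Hypothetical thresholds; the paper derives its values empirically from face size.
THRESH = {"translation": 0.25, "divergence": 0.01, "curl": 0.005}

def head_motion_predicates(params):
    """Map planar-model motion parameter estimates to mid-level predicates.

    params : dict with keys 'tx', 'ty', 'div', 'curl' (illustrative names for
    horizontal/vertical translation, divergence, and curl estimates).
    """
    preds = []
    if params["tx"] > THRESH["translation"]:
        preds.append("rightward")
    elif params["tx"] < -THRESH["translation"]:
        preds.append("leftward")
    if params["ty"] > THRESH["translation"]:
        preds.append("downward")
    elif params["ty"] < -THRESH["translation"]:
        preds.append("upward")
    if abs(params["div"]) > THRESH["divergence"]:
        preds.append("toward" if params["div"] > 0 else "away")
    if abs(params["curl"]) > THRESH["curl"]:
        preds.append("counterclockwise" if params["curl"] > 0 else "clockwise")
    return preds
```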

Cited by 114

### Table 1. Performed motion patterns. The estimated pose of the first motion pattern is depicted in Figure 7.

"... In PAGE 9: ...2. Motion Evaluation. While in previous work smooth motion with an artificial target in view (Gemeiner and Vincze 2005; Gemeiner et al. 2006) has been demonstrated, the robotic platform used in this work moves with jitter, and the executed motions include translation and rotation as given in Table 1. The translation to the target includes a change of scale of the initial object and also changes in velocity.... In PAGE 12: ... 9. This is the last image frame of the first motion from Table 1. The initial artificial object is out of view and the pose calculation is based only on new features.... ..."

### Table 4: The sensitivity of image-only batch estimation to camera intrinsics calibration errors and to image observation errors. The correct motion can be recovered from image measurements alone given synthetic, zero-noise observations and the intrinsics used to generate them (row 1). Estimating the motion from the synthetic image observations and a perturbed version of the camera intrinsics that generated the observations results in errors (rows 2-6) that are much less than the errors that result from estimating the motion from the unperturbed camera intrinsics and noisy image observations (rows 7-11).

"... In PAGE 22: ... The estimated camera intrinsics, along with the reported standard deviations, are given in Table 3. The results are shown in rows 2-6 of Table 4. The resulting errors in the estimates are on the order of, or less than, the errors that we observe in the batch image-and-inertial estimates from the real image measurements.... In PAGE 22: ...0 pixels in each direction, which is the same observation error distribution we have assumed in our experiments. The resulting errors are shown in rows 7-11 of Table 4, and are an order of magnitude larger... ..."

### Table 1. Stereoscopy and motion on real data

1996

"... In PAGE 4: ...stage (i.e., correspondence between image points in the first pair) is assumed given, and at any time instant the estimated disparity field at time t was used for the joint motion and disparity estimation between frames at the next time instant. Results from this phase are summarized in Table 1, where LL refers to the mean square DFD between the two left images, RR to the mean square DFD between the two right images, and RL to the mean square DFD between the images of the stereoscopic pair. For the evaluation of the last quantity only points where Eq.... ..."
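The LL/RR/RL quantities in the table are all mean-square displaced frame differences (DFD); a minimal sketch of that measure, assuming an integer displacement field and clipped (rather than interpolated) lookups purely for illustration:

```python
import numpy as np

def mean_square_dfd(img_a, img_b, motion):
    """Mean-square displaced frame difference between two images.

    motion : (H, W, 2) integer displacement field (dy, dx) mapping each pixel
    of img_a into img_b. Real estimators interpolate sub-pixel positions; this
    sketch clips integer coordinates at the image border instead.
    """
    h, w = img_a.shape
    ys, xs = np.mgrid[0:h, 0:w]
    yd = np.clip(ys + motion[..., 0], 0, h - 1)
    xd = np.clip(xs + motion[..., 1], 0, w - 1)
    diff = img_a.astype(float) - img_b[yd, xd].astype(float)
    return (diff ** 2).mean()

# Identical frames and a zero motion field give a DFD of exactly zero.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(16, 16))
zero_field = np.zeros((16, 16, 2), dtype=int)
```

A perfectly compensated pair drives the DFD to zero, so the LL, RR, and RL values in the table directly measure how well the joint motion/disparity estimate predicts one image from the other.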

Cited by 1


### Table 1. Tradeoff between motion estimation algorithms

"... In PAGE 6: ... The bit rate (number of bytes per frame in the encoded video) is also calculated to get an idea of the compression achieved by encoding. Table 1 presents the tradeoffs among image quality, execution time (power consumption), and bit rate for the different motion estimation algorithms. We use full search, diamond search, three-step search, one-at-a-time search, two-dimensional logarithmic search, motion estimation without half-pel search, and no motion estimation.... ..."
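The quality/time tradeoff among these search strategies comes from how many candidate positions each one evaluates. A minimal sketch of one of the listed methods, three-step search, with SAD as the matching cost (the synthetic frame and the 8×8 block size are illustrative assumptions):

```python
import numpy as np

def sad(block, ref, y, x, bs):
    """Sum of absolute differences between `block` and the ref window at (y, x)."""
    return np.abs(block - ref[y:y + bs, x:x + bs]).sum()

def three_step_search(block, ref, y0, x0, bs=8, step=4):
    """Three-step search: probe 9 candidates, recentre on the best, halve the step."""
    by, bx, checks = y0, x0, 0
    while step >= 1:
        best = None
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = by + dy, bx + dx
                if 0 <= y <= ref.shape[0] - bs and 0 <= x <= ref.shape[1] - bs:
                    cost = sad(block, ref, y, x, bs)
                    checks += 1
                    if best is None or cost < best[0]:
                        best = (cost, y, x)
        _, by, bx = best
        step //= 2
    return (by - y0, bx - x0), checks

# Synthetic frame: a distinctive 8x8 patch displaced by (4, 4) from (10, 20).
pattern = np.arange(64, dtype=float).reshape(8, 8)
ref = np.zeros((32, 40))
ref[14:22, 24:32] = pattern
motion, n_checks = three_step_search(pattern, ref, 10, 20)
```

With an initial step of 4 this covers displacements up to ±7 using at most 27 SAD evaluations, whereas a full search over the same ±7 window would evaluate up to 15 × 15 = 225 candidates; that gap is the execution-time side of the tradeoff the table quantifies.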

### Table 1. The proposed fast motion estimation algorithm.

"... In PAGE 2: ... In the following stages, these quantities are used for computing the block distortion measure. The proposed multiresolution motion estimation algorithm is shown in Table 1... In PAGE 4: ... 4. EXPERIMENTAL RESULTS The two-dimensional version of the algorithm of Table 1 was used for finding the best match for each block in each frame of video sequences, from a search conducted in its neighborhood in the previous frame. For the first 30 frames of the gray-scale, 8 bit-per-pixel, 360 × 288 salesman video sequence, with blocks of size 16 × 16, a search area of 33 × 33 (W=16), and p = 2, the proposed method gives a speed-up of more than 36 compared to a full search.... ..."
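The general idea behind such speed-ups can be sketched as a two-level coarse-to-fine block matcher; this is an illustrative stand-in under simplified assumptions (average-pooling, SAD cost, ±1 refinement), not the paper's actual multiresolution algorithm:

```python
import numpy as np

def full_search(block, ref, y0, x0, bs, w):
    """Exhaustive SAD search over a (2w+1) x (2w+1) window centred at (y0, x0)."""
    best = (np.inf, 0, 0)
    for dy in range(-w, w + 1):
        for dx in range(-w, w + 1):
            y, x = y0 + dy, x0 + dx
            if 0 <= y <= ref.shape[0] - bs and 0 <= x <= ref.shape[1] - bs:
                cost = np.abs(block - ref[y:y + bs, x:x + bs]).sum()
                if cost < best[0]:
                    best = (cost, dy, dx)
    return best[1], best[2]

def pyramid_search(cur, ref, y0, x0, bs=16, w=16):
    """Two-level coarse-to-fine matching: full search at half resolution with a
    halved block and window, then refine the doubled vector within +/-1 pixel."""
    pool = lambda im: im.reshape(im.shape[0] // 2, 2, im.shape[1] // 2, 2).mean((1, 3))
    c, r = pool(cur), pool(ref)
    dy, dx = full_search(c[y0 // 2:y0 // 2 + bs // 2, x0 // 2:x0 // 2 + bs // 2],
                         r, y0 // 2, x0 // 2, bs // 2, w // 2)
    ry, rx = full_search(cur[y0:y0 + bs, x0:x0 + bs], ref,
                         y0 + 2 * dy, x0 + 2 * dx, bs, 1)
    return 2 * dy + ry, 2 * dx + rx

# Synthetic pair: a distinctive 16x16 patch displaced by (4, 6) between frames.
pattern = np.arange(256, dtype=float).reshape(16, 16)
cur = np.zeros((48, 48)); cur[8:24, 8:24] = pattern
ref = np.zeros((48, 48)); ref[12:28, 14:30] = pattern
mv = pyramid_search(cur, ref, 8, 8)
```

The coarse level searches a quarter-area window on quarter-size blocks, so most SAD work is done on shrunken data and only a tiny refinement runs at full resolution; that is the source of the large speed-up factors reported over full search.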

### Table 6. Operation profile for DCT and motion estimation.

2000

"... In PAGE 13: ... We use Hspice to simulate the power consumption of each function shown in Table 2 with the worst-case scenario, which means each bit of the input switches every clock cycle. The power consumption of the MorphoSys is accordingly estimated based on the percentage of each operation in our application mapping context shown in Table 6. The results differ from the worst case by more than a factor of 2.... ..."

Cited by 11

### Table 1: Motion models

"... In PAGE 5: ... Clearly, a 2-D motion model does not uniquely correspond to one 3-D model; identical 2-D motion models may result from different assumptions about 3-D motion, surface and camera projection models. Table 1 summarizes some parametric models for 2-D motion and provides possible underlying assumptions. The first four models are illustrated in Fig.... In PAGE 6: ... Figure 2: Examples of parametric motion vector fields (sampled) and corresponding motion-compensated predictions of a centered square: (a) translation; (b) affine; (c) projective linear; and (d) quadratic. See Table 1 for model descriptions. ...pable of describing arbitrary 2-D motion fields.... In PAGE 6: ... Off-lattice vectors of the motion field can be approximated by suitable interpolation of the sampled field [65]. In general, the interpolation kernel H (Table 1) has a small support, such that a motion vector is usually interpolated from at most four samples. The frequently used bilinear interpolation kernel is a tensor product of horizontal and vertical 1-D triangular kernels.... In PAGE 6: ... Therefore, it can be expected that such fields can be efficiently represented using linear transforms followed by zeroing of high-frequency components. For example, the polynomial transform given in the last row of Table 1... In PAGE 7: ... To capture these second-order effects, each motion trajectory must be modeled explicitly. For example, it may be represented by two vectors: instantaneous velocity ẋ and acceleration ẍ [13]: x(τ) ≈ x(t) + ẋ(t)(τ − t) + (ẍ(t)/2)(τ − t)² (5). Such a temporal modeling can be applied in addition to the spatial modeling described thus far in Table 1. Although representation of motion trajectory fields rather than displacement fields is advantageous in certain applications, larger amounts of motion information must be processed and/or transmitted [13].... In PAGE 8: ...g., affine; Table 1...
In PAGE 9: ...2.3 Motion of regions. Between the two extremes above, one can find methods that apply motion models from Table 1 to image regions. The motivation is to ensure a more accurate modeling (smaller approximation error (6)) of motion fields than in the global motion case and a reduced number of parameters in comparison with the dense motion.... In PAGE 10: ... Thus, a more general image partitioning is necessary. The reasoning is that for objects with sufficiently smooth 3-D surface and 3-D motion, the induced 2-D motion fields in the image plane can be suitably described by models from Table 1 if applied to the area of object projection. A natural image partitioning can be provided by the image acquisition process itself.... In PAGE 12: ... 4.a) for different regions of support: (a) block-based (16 × 16 blocks); (b) pixel-based (globally smooth as in (17)); and (c,d) region-based with affine motion model (Table 1). For details of the region-based algorithm, see [20].... ..."
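The parametric 2-D models the snippet discusses can be made concrete by sampling one of them on a pixel grid; a minimal sketch of a dense affine motion field d(x) = A x + b, with translation as the special case A = 0 (the function name and the example parameter values are illustrative):

```python
import numpy as np

def affine_motion_field(h, w, A, b):
    """Dense 2-D motion field for the affine model d(x) = A x + b.

    A : 2x2 linear part (zero for pure translation); b : length-2 translation.
    Pixel coordinates are taken in (x, y) order.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys], axis=-1).astype(float)          # (h, w, 2)
    return pts @ np.asarray(A, dtype=float).T + np.asarray(b, dtype=float)

# Pure translation: A = 0 reduces the affine model to a constant field.
t_field = affine_motion_field(4, 5, np.zeros((2, 2)), [1.5, -0.5])

# A small rotation-like linear part yields a spatially varying field.
r_field = affine_motion_field(4, 5, [[0.0, -0.1], [0.1, 0.0]], [0.0, 0.0])
```

Applied globally this is the "global motion" extreme; applied per block or per region of support it becomes the region-based modeling the snippet describes, with the model order (translation, affine, projective, quadratic) trading approximation error against the number of parameters per region.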