### Table 1. Information content of the different plenoptic subspaces with regard to the 3D motion estimation problem

2003

"... In PAGE 6: ... 1b). We collected the motion constraint equations for all the plenoptic subspaces in Table 1, and we see that the camera that makes the motion estimation problem easiest is the one that samples the whole plenoptic function (or a multi-perspective 3D slice of it in the case of planar motion), because the motion estimation problem then reduces to a low-dimensional image registration problem, as noted before. Another important criterion is the range of directions (field of view) of the sensor.... In PAGE 6: ... To simplify the exposition we assume that the robot is only able to move on a planar, flat surface, so its locomotion is limited to a horizontal planar motion, and that the camera designs under study are restricted sets of horizontally aligned pinhole line cameras. As summarized in Table 1 in the previous section, we can extract the 3 planar motion parameters directly from the image data if we are able to capture ... ..."

Cited by 4


### Table 1. The proposed fast motion estimation algorithm.

"... In PAGE 2: ... In the following stages, these values are used for computing the distortions. The proposed multiresolution motion estimation algorithm is shown in Table 1... In PAGE 4: ... 4. EXPERIMENTAL RESULTS The two-dimensional version of the algorithm of Table 1 was used for finding the best match for each block in each frame of video sequences, from a search conducted in its neighborhood in the previous frame. For the first 30 frames of the gray-scale, 8 bit-per-pixel, 360 × 288 salesman video sequence, with blocks of size 16 × 16, a search area of 33 × 33 (W = 16), and p = 2, the proposed method gives a speedup of more than 36 compared to a full search.... ..."
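The excerpt above measures speedup against an exhaustive full search. As a point of reference only, here is a minimal full-search block-matching sketch (16 × 16 blocks, ±W search range, sum-of-absolute-differences cost); the function name and structure are illustrative, not taken from the paper:

```python
# Hypothetical full-search baseline: for one block of the current frame,
# scan a (2W+1) x (2W+1) neighbourhood in the previous frame and return
# the displacement with the smallest sum of absolute differences (SAD).
import numpy as np

def full_search(prev, cur, top, left, block=16, W=16):
    ref = cur[top:top + block, left:left + block].astype(np.int64)
    best_sad, best_dy, best_dx = None, 0, 0
    for dy in range(-W, W + 1):
        for dx in range(-W, W + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > prev.shape[0] or x + block > prev.shape[1]:
                continue  # candidate block would fall outside the frame
            cand = prev[y:y + block, x:x + block].astype(np.int64)
            sad = int(np.abs(ref - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_dy, best_dx = sad, dy, dx
    return best_dy, best_dx, best_sad
```

An exhaustive search like this evaluates all (2W+1)² = 33 × 33 = 1089 candidates per block, which is the cost the multiresolution method's reported 36× speedup is measured against.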

### Table 2: Results of camera motion estimation on 3 synthetic sequences of 200 images. The errors are averaged over each sequence.

2008

### Table 2 - RMS errors for motion estimation of different

1998

"... In PAGE 19: ... This example illustrates the behavior of integrated region and point tracking under complex imaging conditions. Table 2 gives the RMS estimate error produced by our tracking system for several test sequences, including the Park sequence shown in Figure 6a. This latter sequence shows high RMS error, which, we believe, is due to imaging distortions that occur in the trees as a result of the camera translation.... ..."

### Table 4: Estimation errors on SOFA5. Camera motion is constant over the sequence: it is a translation in direction k (the camera comes closer to the scene).

2006

### Table 4: Estimation errors on SOFA5. Camera motion is constant over the sequence: it is a translation in direction k (the camera comes closer to the scene).

2008

### Table 3. Reprojection errors (in pixels) for triangulation and resectioning in the Dinosaur and Corridor data sets. Dinosaur has 36 turntable images with 324 tracked points, while Corridor has 11 images in forward motion with a total of 737 points.

2006

"... In PAGE 12: ...2 Real Data We have evaluated the performance on two publicly available data sets as well - the Dinosaur and the Corridor sequences. In Table 3, the reprojection errors are given for (1) triangulation of all 3D points given pre-computed camera motion and (2) resectioning of cameras given pre-computed 3D points. Both the mean error and the estimated standard deviation are given.... ..."

Cited by 10
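The reprojection errors in Table 3 are pixel distances between tracked image points and the projections of the estimated 3D points. A minimal sketch of how such mean/standard-deviation statistics can be computed under a pinhole model (the function and variable names are assumptions for illustration, not code from the paper):

```python
# P is a 3x4 camera matrix, X is an Nx4 array of homogeneous 3D points,
# x is an Nx2 array of observed image points (pixels).
import numpy as np

def reprojection_errors(P, X, x):
    proj = (P @ X.T).T                  # Nx3 homogeneous image points
    proj = proj[:, :2] / proj[:, 2:3]   # dehomogenise to pixel coordinates
    return np.linalg.norm(proj - x, axis=1)

def error_stats(P, X, x):
    e = reprojection_errors(P, X, x)
    return e.mean(), e.std()            # mean error and standard deviation
```

In triangulation the cameras P are fixed and the points X are estimated; in resectioning the points are fixed and the cameras are estimated — the same error statistic applies to both.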

### Table 2. The average residual noise (RMS) in the VIS forest sequence for the different sections of the image after correcting for motion induced by the rotation of the polarisation filter. For the sub-image correction the image is divided into 9 sub-images.

"... In PAGE 4: ... With this motion estimation and correction the residual noise is one of the lowest, and in the remainder of this paper we will use this motion model for correction and estimation for the MWIR camera. For the VIS sequence, the residual noise for the full and restricted motion models is given in Table 2 and Figure 2(a). The residual noise at the edges is still substantially higher than the residual noise of the surfaces.... ..."

### Table 1: Estimated facial animation parameters in comparison to the correct values.

We reconstruct the sequence by rendering the 3D model. First, only the global motion (head translation and rotation) is estimated, and then all FAPs are determined. In Figure 4 the PSNR between the resulting reconstructed image and the original one is plotted over time. The high values of about 70 dB show that the images are nearly identical.

1997

"... In PAGE 4: ... The viewing angle of the camera is about 25°. The result of one such experiment can be seen in Table 1. Ten animation parameters are changed; the others remain constant.... ..."

Cited by 8
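The ~70 dB figures quoted above are PSNR values. For reference, the standard PSNR definition for 8-bit images can be sketched as follows (a generic formula, not code from the cited paper):

```python
# PSNR between an original and a reconstructed 8-bit image.
# Values around 70 dB correspond to a tiny mean squared error,
# i.e. the two images are nearly identical.
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```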