### Table 1: Average error for interpolation based on two slices.

"... In PAGE 10: ...etween a scanned image and an interpolated one. Naturally, the absolute error, i.e. least deviation from the original image plays a role, but from the applicational point of view the structure of the error distribution is more important. Table1 shows the mean absolute errors for interpolation at di erent heights based on two slices of data at heights 0 cm and 2 cm, respectively. While the mean itself is relatively small, the variance of the errors at each point in the image indicates that interpolation quality decreases with the distance from the data slices.... ..."

### Table 2: Timings for various stages of the frame computation. Rendering is performed using ray tracing, and the indirect lighting (computed and filtered in the photon tracing stage) is interpolated based on the values stored in mesh vertices. All timings are given in seconds and are measured on a Pentium 4, 3.2 GHz processor.

2004

"... In PAGE 8: ...2 GHz processor. Table2 summarizes timings obtained for all four tested scenes. As can be seen the overhead incurred by spatio-temporal bilateral filtering is quite signif- icant and directly depends on the number of poly- gons.... ..."

Cited by 2

### Table 6.2: Pseudo-code of the interpolation-based algorithm for motion blur. The procedure InterpolateRad(k, k+1, s, i) returns the radiosity of patch i at a sample s between frames k and k+1.
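The caption suggests per-patch radiosity is interpolated between two keyframes at each motion-blur sample. A minimal sketch of what such a procedure might look like, assuming linear interpolation and a sample parameter s in [0, 1] (both are assumptions; the cited pseudo-code itself is not reproduced here):

```python
def interpolate_rad(rad_k, rad_k1, s, i):
    """Hypothetical InterpolateRad(k, k+1, s, i): linearly interpolate
    the radiosity of patch i at sample s (0 <= s <= 1) between its
    values at frames k and k+1."""
    return (1.0 - s) * rad_k[i] + s * rad_k1[i]

# Usage: radiosity of patch 2 at a sample one third into the frame interval
rad_k  = [0.8, 0.5, 0.3]   # per-patch radiosity at frame k
rad_k1 = [0.6, 0.5, 0.9]   # per-patch radiosity at frame k+1
b = interpolate_rad(rad_k, rad_k1, s=1/3, i=2)   # 0.3 + (0.9 - 0.3)/3 = 0.5
```

Averaging such samples over the frame interval is the standard way an interpolation-based motion-blur pass avoids recomputing radiosity at every sample time.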

### Table 1 lists the recall, precision, and F-measure results obtained when tested on the 1017-utterance DARPA Communicator test set. The baseline is the unadapted HVS parser trained on the ATIS corpus only. The in-domain results are obtained using the HVS parser trained solely on the 10682 DARPA training data. The other rows of the table give the parser performance using MAP and log-linear interpolation based adaptation of the baseline model using 50 randomly selected adaptation utterances.

2004

"... In PAGE 6: ...2 indi- cate that the F-measure needs to exceed 85% to give ac- ceptable end-to-end performance (see Figure 3). There- fore, it can be inferred from Table1 that the unadapted ATIS parser model would perform very badly in the new Communicator application whereas the adapted models would give performance close to that of a fully trained in-domain model. Figure 4 shows the parser performance versus the num- ber of adaptation utterances used.... ..."

Cited by 5
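The excerpt above mentions log-linear interpolation based adaptation of a baseline model. A minimal sketch of the generic log-linear interpolation idea for two discrete distributions, assuming a single weight `lam` and explicit renormalization (the paper's exact adaptation formula for the HVS parser may differ):

```python
def log_linear_interpolate(p_in, p_out, lam):
    """Combine an in-domain model p_in and an out-of-domain model p_out
    log-linearly with weight lam, then renormalize. Both models are
    dicts mapping events to probabilities over the same support."""
    raw = {w: (p_in[w] ** lam) * (p_out[w] ** (1.0 - lam)) for w in p_in}
    z = sum(raw.values())                  # normalization constant
    return {w: v / z for w, v in raw.items()}

p_in  = {"a": 0.7, "b": 0.3}   # toy in-domain distribution
p_out = {"a": 0.4, "b": 0.6}   # toy out-of-domain distribution
p = log_linear_interpolate(p_in, p_out, lam=0.5)
```

Setting `lam=1` recovers the in-domain model and `lam=0` the out-of-domain one, which is what makes the weight a natural knob when only a few adaptation utterances are available.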

### Table 5 shows the results for the hybrid interpolant based on the triangulations Δ3, Δ4, Δ5. For each Δℓ, the needed derivatives were estimated based on values of f at the vertices of Δℓ itself, using only the 15 closest points for each estimate. The columns marked E1, E2, E∞ give the relative errors measured by E^ℓ_p = e^ℓ_p / ‖f‖∞, where

1996

"... In PAGE 17: ...651 (-5) 1.041 (-5) Table5 . E ect of Estimated Derivatives on Hybrid Interpolation.... ..."

Cited by 3

### Table 5. Effect of Estimated Derivatives on Hybrid Interpolation. In some applications we may have a large set of data points U, but we want to construct an interpolant based on a triangulation with fewer vertices. In this case we can select a subset V of U to use as vertices for the triangulation on which the hybrid interpolant is based, but continue to use all of the points of U in estimating derivatives. To get an idea of how this works, we again consider interpolating the function f in (5.1). We choose V to be the set of 66 vertices of the triangulation Δ3 in Sect. 5. To see what happens when more data is available to estimate derivatives, we constructed sets Un from V by adding an additional n − 66 data points, chosen randomly on the sphere. Table 6 shows the results for n = 66, 100, 150, 200. For comparison purposes, we also list the errors obtained using exact derivatives in the row labelled n = ∞.

1996

"... In PAGE 17: ... 6, using the triangle constructed by the automatic procedure described there. Table5 shows the results for the hybrid interpolant based on the triangulations 3, 4, 5. For each `, the needed derivatives were estimated based on values of f at the vertices of ` itself, using only the 15 closest points for each estimate.... ..."

Cited by 3
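Both excerpts above estimate derivatives of f from the function values at the 15 closest data points. One common way to do that is a local least-squares linear fit; the sketch below uses that approach as a stand-in (the paper's exact estimation scheme is not specified here), in 2-D rather than on the sphere for simplicity:

```python
import numpy as np

def estimate_gradient(points, values, x0, k=15):
    """Estimate the gradient of f at x0 from the k nearest data points
    by fitting f(x) ~ c0 + g . (x - x0) in the least-squares sense.
    This local linear fit is an assumed method, not the paper's."""
    d = np.linalg.norm(points - x0, axis=1)
    idx = np.argsort(d)[:k]                              # k closest points
    A = np.hstack([np.ones((k, 1)), points[idx] - x0])   # columns: [1, dx, dy]
    coef, *_ = np.linalg.lstsq(A, values[idx], rcond=None)
    return coef[1:]                                      # gradient components

# Sanity check: the fit is exact for a linear function f(x, y) = 2x + 3y
rng = np.random.default_rng(1)
pts = rng.random((100, 2))
vals = 2 * pts[:, 0] + 3 * pts[:, 1]
g = estimate_gradient(pts, vals, np.array([0.5, 0.5]))
assert np.allclose(g, [2.0, 3.0])
```

Enlarging the point set used for the fit, as the Table 6 experiment does with the sets Un, generally stabilizes the estimates without changing the triangulation the interpolant lives on.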

### Table 2 summarizes the execution times for each stage in the processing of this environment and computing a 30 frame walkthrough. For this animation, we used a quality q of 0.95, which is very high. The total time required was 2363.5 minutes. For an animated environment, the additional cost per frame is higher than for the static environment, since the time interpolation of base images is required. The walkthrough sequence demonstrates that the user can walk through the moving environment and observe changes in the shiny surfaces and the indirect illumination as well as the overall object motion. Figure 9 shows images from an early, lower quality (the quality q was 0.7) version of the animation. Figures 9a and 9b show the indirect solutions. Figure 9c shows an intermediate solution in which both illumination and object positions have been interpolated.

1995

"... In PAGE 12: ... Table2 . Execution times for the Cornell box environment.... ..."

Cited by 12

### Table 4.5). But observing Figures 4.3 and 4.4 one can clearly see that the interpolant based on the data dependent functional exhibits a much smoother distribution of curvature. Comparing the run-time behavior, we find that the data dependent optimization needs more computing time than the data independent optimization and the TPS and OMM methods. This is not a surprising result, since for the data dependent functional the value of the related inner product of the basis functions ⟨B_{i,j}, B_{k,l}⟩ depends on the reference surface and must be determined using numerical integration. However, depending on the specific application, the better quality of the resulting surface will justify the higher costs.
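The run-time penalty described above comes from evaluating inner products of basis functions by numerical integration. A minimal 1-D sketch of such a quadrature-based inner product, using the midpoint rule over an interval as a stand-in for the surface integrals in the text (the actual basis functions and reference surface are not specified here):

```python
import numpy as np

def inner_product(b1, b2, a=0.0, b=1.0, n=200):
    """Approximate <b1, b2> = integral of b1(x) * b2(x) over [a, b]
    by the midpoint rule with n subintervals (illustrative 1-D
    analogue of the surface integrals mentioned in the text)."""
    x = a + (np.arange(n) + 0.5) * (b - a) / n   # midpoints of subintervals
    return float(np.sum(b1(x) * b2(x)) * (b - a) / n)

# Example: <x, x> over [0, 1] equals 1/3
val = inner_product(lambda x: x, lambda x: x)
```

Since every pair of overlapping basis functions contributes one such integral, and each integral needs many evaluations on the reference surface, the extra cost of the data dependent functional follows directly.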

### Table 4: System performance for various configurations.

2005

"... In PAGE 22: ... Finally, it is clear that the system is vastly superior to the GMM system without any normalization. The metrics of interest for this system are reported in Table4 . The poor performance in the high false-alarm region is believed to be a result of LNKnet optimizing for DCF, and this phenomena is discussed in Section 8.... In PAGE 22: ... However, the sinc interpolation-based warping faired slightly better than the rest except in the high false-alarm region. In addition, examination of Table4 reveals that using 256 Gaussian components tends to be superior to 128. The sinc interpolation-based warping with 256 Gaussian components resulted in the best EER for the system: 1.... ..."