### Table 2: Voxelization timings for different acceleration techniques.

1996

"... In PAGE 5: ... The percentage of voxels that enter the distance calculation, relative to the total number of voxels, is given in column 4, with the time taken to produce the voxel data, in seconds, given in the final column. To show the gains of the acceleration techniques over the naive method, an additional table ( Table 2 ) shows the results for the queen data set, which can be regarded as typical. The first line gives the timing for the brute-force naive method.... ..."
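For intuition, the brute-force baseline the excerpt describes, in which every voxel enters the distance calculation, can be sketched as below. The function name, the point-sampled surface representation, and the NumPy formulation are illustrative assumptions, not the cited paper's implementation:

```python
import numpy as np

def brute_force_voxelize(points, grid_shape, bounds):
    """Naive distance-field voxelization: every voxel centre is
    compared against every surface sample point (illustrative
    sketch only; real voxelizers work on triangles and use
    spatial acceleration structures to cull these tests)."""
    lo = np.asarray(bounds[0], dtype=float)
    hi = np.asarray(bounds[1], dtype=float)
    axes = [np.linspace(lo[d], hi[d], grid_shape[d]) for d in range(3)]
    gx, gy, gz = np.meshgrid(*axes, indexing="ij")
    voxels = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    # Distance from each voxel centre to its nearest surface sample:
    # O(#voxels * #points) distance computations, hence "brute force".
    d = np.linalg.norm(voxels[:, None, :] - points[None, :, :], axis=-1)
    return d.min(axis=1).reshape(grid_shape)
```

The acceleration techniques compared in the table reduce the fraction of voxels that ever reach this inner distance loop (column 4 of the excerpt's table).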

Cited by 37

### Table 12: Stage 2 from Rotterdam to London for Story Transportation acceleration (ShipmentCreated)

2006

### Table 3. Rendering Time Per Frame (Seconds): GSO - Gouraud Shading Only, 3DTMN - 3D Texture Mapping (Nearest), 3DTML - 3D Texture Mapping (Linear)

"... In PAGE 6: ... The timings were measured on an SGI Octane workstation with a 195 MHz R10000 CPU and 256 Mbytes of memory, without hardware graphics acceleration. Table 3 reports the average time per frame in seconds for three different rendering modes. The GSO field in this table is the time taken for rendering the objects using Gouraud shading only, and indicates how complex the rendering involved is.... ..."

### Table 2: Number of distance computations and wall-clock time for naïve k-NN classification. Acceleration for MT-DFS, KNS2 and KNS3 (in terms of number of distances and time).

in Abstract

2005

"... In PAGE 16: ... A naïve implementation with no metric tree would thus require 0.9n² distance computations. Table 2 shows the computational cost of naïve k-NN both in terms of the number of distance computations and the wall-clock time on an unloaded 2 GHz Pentium. We then examine the speedups of MT-DFS and our two new methods (KNS2 and KNS3).... ..."
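The quadratic cost being accelerated here is easy to see in a naive k-NN sketch: every query is compared against every training point, so the distance-computation count is simply `len(queries) * len(train)`. This is an illustrative sketch, not the paper's implementation; the metric-tree methods (MT-DFS, KNS2, KNS3) exist precisely to prune most of these comparisons:

```python
import numpy as np

def naive_knn(train, labels, queries, k=1):
    """Naive k-NN classification by exhaustive search.
    Returns (predictions, number_of_distance_computations)."""
    n_dist = 0
    preds = []
    for q in queries:
        d = np.linalg.norm(train - q, axis=1)  # n distances per query
        n_dist += len(train)
        nn = np.argsort(d)[:k]                 # indices of k nearest
        vals, counts = np.unique(labels[nn], return_counts=True)
        preds.append(vals[np.argmax(counts)])  # majority vote
    return np.array(preds), n_dist
```

With a 90%/10% train/test split of n points, this exhaustive scheme performs roughly 0.9n × 0.1n ≈ 0.09n² distance computations per fold, which is where figures like the excerpt's 0.9n² (over all folds) come from.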

### Table 1: Comparison of RMS deviations (Passive, Passive Precedence, Model-based)

"... In PAGE 7: ... The ideal acceleration and roll rates are also shown on the graphs so that the dynamic deviations caused by the suspension and/or controller dynamics can be clearly seen. Table 1 provides a qualitative comparison, presenting the r.... ..."

### Table 7. Reactive vs. active semantics

"... In PAGE 7: ... We also illustrate the performance gains one might expect by applying the reactive semantics of the CubeVM in practice. Table 7 compares the execution times for both active and reactive semantics on the benchmark programs. The acceleration is also indicated.... In PAGE 8: ... Of course, we expect better performance for the reactive version. Table 7 shows the relative performance of the active vs. reactive semantics for the ackermann example.... ..."

### Table 7 Forecasting Changes in Producer Durable Equipment Expenditures

"... In PAGE 18: ... Column [1] presents the results for the accelerator model without any stock return variables, column [2] is the accelerator model combined with market returns, column [3] is the accelerator model with auto returns, and column [4] includes all explanatory variables. The results in Table 7 confirm the standard result that market returns forecast investment, as does lagged consumption. Auto returns also forecast investment, but the combined explanatory power of auto returns and consumption (an adjusted R² of 32.... ..."

### Table 2. MSE comparison

"... In PAGE 3: ... Our denoising results are comparable to those of the original non-local algorithm in terms of mean squared error (MSE). Table 2 compares the MSE, for different standard deviations of the added noise, between our accelerated method and the original algorithm. Note that the slight difference in MSE is due to the use of the plain Euclidean distance when comparing two neighborhoods, instead of the weighted Euclidean distance of the original non-local algorithm [4]; Fig.... ..."
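The simplification the excerpt attributes its small MSE difference to, comparing neighborhoods with a plain rather than Gaussian-weighted Euclidean distance, can be sketched as below. The function name and window handling are illustrative assumptions, not code from either paper:

```python
import numpy as np

def patch_distance(img, p, q, r=1):
    """Unweighted Euclidean distance between the (2r+1) x (2r+1)
    neighborhoods around pixels p and q. The original non-local
    means algorithm instead applies a Gaussian weight to each
    squared pixel difference before summing; dropping that weight
    is the accelerated variant's simplification."""
    (pi, pj), (qi, qj) = p, q
    P = img[pi - r:pi + r + 1, pj - r:pj + r + 1]
    Q = img[qi - r:qi + r + 1, qj - r:qj + r + 1]
    return float(np.sqrt(((P - Q) ** 2).sum()))
```

In non-local means, this distance feeds an exponential kernel `exp(-d**2 / h**2)` that weights how much pixel `q` contributes to the denoised value at `p`, so a small change in the distance changes the weights only slightly, consistent with the small MSE gap reported.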

### Table 2: Overview of impostor rendering systems using static and/or dynamic impostor generation.

"... In PAGE 14: ....7. Summary of Impostor Applications. In Section 4 we have described known strategies on how to use impostors for different scenes so that the rendering accelerations are high and the impostor updates/memory requirements are low. Table 2 summarizes these methods. The wide variety shows that the best use of impostors heavily depends on the type of scene and interaction method.... ..."

### Table 1: Terminal-side feature extraction. The absolute performance is the recognition error (1 - recognition rate). The performance relative to Mel-cepstrum is the relative error-rate reduction over the MFCC baseline. Weighted averages according to the ETSI-defined protocol are also provided. An overall performance improvement across the 6 different languages is also given: a 40.68% reduction of the error rate over the MFCC baseline. The definition of the different test conditions is given in the text of the paper.

in Robust ASR front-end using spectral-based and discriminant features: experiments on the Aurora tasks

"... In PAGE 3: ... It uses the HTK toolkit (Gaussian Mixture Model-Hidden Markov Model, GMM-HMM) recognizer, where the models are words, each composed of 16 states, 3 mixtures per state, and diagonal covariance matrices. Results are presented in Table 1 for the terminal-side feature extraction algorithm (with additional derivative and acceleration coefficients) and in Table 2 when we also use the server-side post-processing. These tables contain the absolute performance for the different databases and test conditions as well as the performance relative to the baseline MFCC.... ..."
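The derivative ("delta") and acceleration coefficients mentioned in the excerpt are conventionally computed with a regression over a short window of frames; applying the same formula twice yields the acceleration coefficients. The sketch below assumes the standard HTK-style regression formula and a window of 2 frames (HTK's default `DELTAWINDOW`); it is illustrative, not the paper's front-end code:

```python
import numpy as np

def deltas(features, window=2):
    """Delta coefficients for a (frames x dims) feature matrix via
    the standard regression formula
        d_t = sum_{th=1..W} th * (c_{t+th} - c_{t-th}) / (2 * sum th^2).
    Calling deltas(deltas(c)) gives the acceleration coefficients."""
    T, _ = features.shape
    denom = 2 * sum(t * t for t in range(1, window + 1))
    # Replicate edge frames so the window is defined at the boundaries.
    padded = np.pad(features, ((window, window), (0, 0)), mode="edge")
    out = np.zeros_like(features, dtype=float)
    for t in range(1, window + 1):
        out += t * (padded[window + t: window + t + T]
                    - padded[window - t: window - t + T])
    return out / denom
```

On a feature trajectory that grows linearly by 1 per frame, the interior delta values come out as exactly 1, which is a quick sanity check on the formula.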