### Table 4. Approximate (linear simplex) versus exact analysis

"... In PAGE 18: ... Also we computed the maximum and minimum by exploring all the workload mixes. Table 4 shows the maximum response time for classes 1 and 2 (columns 2 and 5, respectively) found by the simplex algorithm and averaged over 1000 experiments. Columns 3 and 6 show the average error (the error is defined as the difference between the real bound and the simplex-predicted bound) over 1000 experiments.... ..."
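The comparison described in the snippet can be sketched as a brute-force search over workload mixes. The two formulas below are made-up stand-ins for the paper's exact response times and simplex bounds; only the max-and-error bookkeeping mirrors the text.

```python
def response_time(mix):
    # Hypothetical stand-in for the exact per-class response time at a
    # workload mix (n1, n2); the paper's model is a queueing network,
    # not this toy linear formula.
    n1, n2 = mix
    return 1.0 + 0.5 * n1 + 0.8 * n2

def simplex_bound(mix):
    # Hypothetical linear (simplex-style) bound on the same quantity.
    n1, n2 = mix
    return 1.0 + 0.6 * n1 + 0.9 * n2

# Explore all workload mixes with n1 + n2 = N, as the paper does.
N = 10
mixes = [(n1, N - n1) for n1 in range(N + 1)]

real_max = max(response_time(m) for m in mixes)
pred_max = max(simplex_bound(m) for m in mixes)

# Error as defined in the text: real bound minus predicted bound.
error = real_max - pred_max
print(real_max, pred_max, error)
```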

### Table 3: Subspace iteration as implemented in sis1 and sis2. The orthogonal factorization in step (2) of Table 3 is computed by a modified Gram-Schmidt procedure. On multiprocessor architectures (especially those having hierarchical memories), one may achieve high performance (with a slight increase in the total number of arithmetic operations) by using either a block Gram-Schmidt or block Householder orthogonalization method in step (2). As discussed in [14], significant improvements in the algorithmic performance of fundamental linear algebra kernels may be gained through the improved data locality associated with block-based methods. For the spectral decomposition step (4) on larger subspaces, an optimized implementation of the classical EISPACK ([32]) pair, TRED2 and TQL2, is used. On parallel ...
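The modified Gram-Schmidt factorization named for step (2) can be sketched as follows (a NumPy illustration; the sis1/sis2 codes themselves are not reproduced here, and a block variant would process groups of columns at once for better data locality):

```python
import numpy as np

def modified_gram_schmidt(A):
    """Orthonormalize the columns of A, returning Q and R with A = QR."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = A.copy()
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(Q[:, k])
        Q[:, k] /= R[k, k]
        # Immediately remove the k-th direction from all later columns:
        # this reordering is what distinguishes modified from classical
        # Gram-Schmidt and gives it better numerical behavior.
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ Q[:, j]
            Q[:, j] -= R[k, j] * Q[:, k]
    return Q, R

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
Q, R = modified_gram_schmidt(A)
print(np.allclose(Q.T @ Q, np.eye(3)))  # checks columns are orthonormal
print(np.allclose(Q @ R, A))            # checks the factorization
```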

"... In PAGE 14: ... This particular algorithm incorporates both a Rayleigh-Ritz procedure and acceleration via Chebyshev polynomials. The iteration which embodies the ritzit program is given in Table 3. The Rayleigh Quotient matrix, H_k, in step (3) is essentially... ..."
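A minimal sketch of the iteration described in the snippet, subspace iteration with a Rayleigh-Ritz step, is below. The Chebyshev acceleration of ritzit is omitted, and the symmetric test matrix is invented for illustration:

```python
import numpy as np

def subspace_iteration(A, p, iters=50):
    """Basic subspace iteration with a Rayleigh-Ritz step.

    A is symmetric; p is the subspace dimension. Returns Ritz values
    and Ritz vectors approximating the dominant eigenpairs.
    """
    n = A.shape[0]
    rng = np.random.default_rng(1)
    Q = np.linalg.qr(rng.standard_normal((n, p)))[0]
    for _ in range(iters):
        Z = A @ Q                  # apply the operator
        Q, _ = np.linalg.qr(Z)     # step (2): orthogonal factorization
        H = Q.T @ A @ Q            # step (3): Rayleigh Quotient matrix H_k
        w, S = np.linalg.eigh(H)   # step (4): spectral decomposition of H_k
        Q = Q @ S                  # rotate the basis to the Ritz vectors
    return w, Q

# Dominant eigenvalues of a small symmetric test matrix.
A = np.diag([10.0, 5.0, 2.0, 1.0, 0.5])
w, Q = subspace_iteration(A, p=2)
print(sorted(w, reverse=True))
```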


### Table 1. Execution times and number of inferences for the examples (columns: Example, Arithmetic Procedure, Combined Procedure)

1995

"... In PAGE 13: ... Results: Table 1 gives run times and garbage collection times in seconds, and the number of primitive inferences, for applications of the linear arithmetic decision procedure in the HOL arith library and of the combined procedure described in this paper to the following examples: 1. m <= n /\ ~(m = n) ==> SUC m <= n 2.... ..."
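The decision procedures prove such goals symbolically; as a finite sanity check, example 1 (read here as the HOL goal `m <= n /\ ~(m = n) ==> SUC m <= n`) can be tested exhaustively over a small range:

```python
# Finite sanity check of example 1, read as the HOL goal
#   m <= n /\ ~(m = n) ==> SUC m <= n
# where SUC is the successor function on the naturals. This only
# tests small cases; it is not a proof.
def suc(m):
    return m + 1

ok = all(
    suc(m) <= n
    for m in range(50)
    for n in range(50)
    if m <= n and m != n
)
print(ok)
```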

Cited by 14

### Table 3. PcFIML output of cointegration analysis via the Johansen procedure for the spot and forward exchange rate example

"... In PAGE 15: ...0118)T . The PcFIML output of the cointegrating analysis is reported in Table 3, whilst the graph of the linear combination of the cointegrating vector via this procedure is shown in Figure 3. The ADF test for stationarity of the linear combination of the cointegrating vector obtained via the RBC procedure is significant at the 1% level, implying stationarity.... ..."
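The Johansen/PcFIML machinery itself is not reproduced here, but the underlying idea, that the linear combination given by the cointegrating vector is stationary even though each series has a unit root, can be illustrated with simulated data (a toy sketch, not the paper's procedure):

```python
import numpy as np

rng = np.random.default_rng(42)
T = 5000

# A shared stochastic trend (random walk) drives both rates.
trend = np.cumsum(rng.standard_normal(T))

# Toy spot and forward series: common trend plus stationary noise,
# so the cointegrating vector (1, -1) removes the unit root.
spot = trend + rng.standard_normal(T)
forward = trend + rng.standard_normal(T)

combo = spot - forward  # linear combination from the cointegrating vector

# A random walk's sample variance grows with T; the stationary
# combination's does not.
print(np.var(spot) > 10 * np.var(combo))
```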

### Table 1: The Arithmetic Cost of One Loop Iteration

1994

"... In PAGE 11: ... This will certainly limit the problem size we will be able to solve. Table 1 tabulates the arithmetic cost of one loop of the inverse free iteration versus the Newton iteration (3.8) and (3.... In PAGE 11: ...teration (3.8) and (3.9) for the real ordinary and real generalized spectral divide and conquer problems, respectively. From Table 1, we see that for the standard spectral divide and conquer problem, the one loop of the inverse free iteration does about 6.7 times more arithmetic than the one... ..."

Cited by 51

### Table 1. Features of FPGAs, Sony PlayStation 2 and GPUs. Communication refers to data transfers between the main CPU and the accelerator. Good, bad and medium are defined in terms of assistance towards obtaining high performance.

in Comparing FPGAs to Graphics Accelerators and the PlayStation 2 Using a Unified Source Description

### Table 2: 10 ROIs from the database: original scientist labels shown with posterior probabilities estimated via the EM procedure

1995

"... In PAGE 6: ...2. As shown in Table 2, posterior probabilities for the volcanoes generally are in agreement with intuition and often correspond to taking the majority vote or the "average" of the C and D labels (the conservative labellers). However, some p(v|l) estimates could not easily be derived by any simple averaging or voting scheme, e.... In PAGE 7: ... Even with this optimistic curve, volcano labelling is relatively inaccurate by either man or machine. Figure 2(b) shows a weighted ROC: for each of 4 scientists the probabilistic "reference labels" were derived via the EM procedure as in Table 2 from the other 3 scientists, and the detections of each scientist were scored according to each such reference set. Performance of the algorithm (the SVD-Gaussian method) was evaluated relative to the EM-derived label estimates of all 4 scientists.... ..."
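The EM procedure for turning several scientists' labels into posterior probabilities can be sketched for binary labels with one accuracy parameter per labeller (a simplified model in the spirit of the snippet; the paper's actual model and data may differ):

```python
import numpy as np

def em_label_estimates(L, iters=20):
    """Posterior probability that each object's true label is 1,
    estimated from several labellers' binary labels by EM.

    Simplified model: one accuracy parameter per labeller and a shared
    class prior. L[i, j] in {0, 1} is labeller j's label for object i.
    """
    n, k = L.shape
    acc = np.full(k, 0.7)  # initial labeller accuracies
    prior = 0.5            # initial P(true label = 1)
    for _ in range(iters):
        # E-step: posterior for each object given current parameters.
        like1 = np.prod(np.where(L == 1, acc, 1 - acc), axis=1)
        like0 = np.prod(np.where(L == 0, acc, 1 - acc), axis=1)
        p = prior * like1 / (prior * like1 + (1 - prior) * like0)
        # M-step: accuracy = expected agreement with the true label.
        acc = (L * p[:, None] + (1 - L) * (1 - p[:, None])).mean(axis=0)
        prior = p.mean()
    return p

# Four labellers, five objects; the fourth labeller often disagrees.
L = np.array([
    [1, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
])
p = em_label_estimates(L)
print(np.round(p, 2))
```

When all labellers agree, the posterior approaches 0 or 1; ambiguous objects keep intermediate probabilities, which is the behavior the snippet describes as going beyond simple voting.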

Cited by 4

### Table 2. Comparison of GPUs

2006

Cited by 1

### Table 2: General LP models for optimizing load adjustments.

1997

"... In PAGE 18: ... In general, LP models consist of a linear objective function of continuous real variables, either to be minimized or maximized, subject to a set of constraints. Table 2 presents the LP models that we are proposing. They are applicable for arbitrary numbers of concurrent sessions with varying adjustment costs and adjustment profit.... In PAGE 18: ... The general load reduction LP model (left side in Table 2) is targeted at reducing the current system load with minimal total adjustment costs. Therefore, it has an objective function which [footnote 1: One must be aware of the phenomenon that the Simplex LP algorithm, which is popular among these products, exhibits exponential worst-case complexity.]... ..."
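The load-reduction model described above, minimizing total adjustment cost subject to freeing a required amount of load with per-session bounds, reduces in the single-constraint case to a fractional knapsack that a greedy pass solves exactly; the paper's general models would go to an LP solver instead. All session data below are invented for illustration:

```python
def reduce_load(sessions, needed):
    """Choose fractional per-session load reductions at minimal cost.

    sessions: list of (max_reduction, unit_cost) per concurrent session.
    needed:   total load that must be freed.

    With one aggregate constraint and box bounds, taking load from the
    cheapest sessions first is exactly the LP optimum.
    """
    plan = [0.0] * len(sessions)
    order = sorted(range(len(sessions)), key=lambda i: sessions[i][1])
    remaining = needed
    for i in order:
        cap, _cost = sessions[i]
        take = min(cap, remaining)
        plan[i] = take
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 1e-9:
        raise ValueError("not enough adjustable load")
    return plan

# (max reduction, cost per unit) for three concurrent sessions.
sessions = [(4.0, 3.0), (5.0, 1.0), (2.0, 2.0)]
plan = reduce_load(sessions, needed=6.0)
cost = sum(r * c for r, (_, c) in zip(plan, sessions))
print(plan, cost)
```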

Cited by 5