### Table 6: Algorithmic description of decentralized optimization algorithm incorporating efficient field evaluation.

2001

"... In PAGE 31: ... This computation is extremely fast and results in a drastic speed-up in computation. Table 6 summarizes the field-evaluation algorithm, and Figure 23 illustrates the data flow for the modified optimization algorithm. To determine the impact of a different heat output, a source calculates the resulting temperature change for each field node, based on influence graph edge weights.... ..."

Cited by 10
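The field-evaluation step quoted above (a source scaling precomputed influence-graph edge weights to obtain the per-node temperature change) can be sketched as follows. This is a hypothetical illustration, not the paper's code: the names `influence_weights` and `delta_output`, and the linear-scaling assumption, are all ours.

```python
# Hypothetical sketch of the influence-graph field evaluation: a source
# holds precomputed edge weights to every field node, so the temperature
# change caused by altering its heat output reduces to one multiplication
# per node. All names here are illustrative, not from the paper.

def evaluate_field_change(influence_weights, delta_output):
    """Temperature change at each field node for a change `delta_output`
    in this source's heat output, assuming linear influence.

    influence_weights: dict mapping field-node id -> edge weight
    """
    return {node: w * delta_output for node, w in influence_weights.items()}

# Example: three field nodes with decreasing influence on this source.
weights = {"n1": 0.5, "n2": 0.2, "n3": 0.05}
changes = evaluate_field_change(weights, 10.0)  # changes["n1"] == 5.0
```

Because each evaluation is a single pass over the edge weights, this matches the snippet's claim that re-evaluating the field after a changed heat output is extremely fast.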

### Table 4: A comparison of running times (in seconds) averaged over 20 trials reveals that diametrical clustering is computationally efficient. The average number of clusters created by each algorithm is indicated in parentheses.

"... In PAGE 6: ... Note that our algorithm does not explicitly try to optimize these values, instead focusing on finding diametric gene clusters. Comparison of running time: We finally provide a comparison of running times in Table 4 averaged over 20 trials. The GeneShaving implementation was only available on S-Plus software, so we did not include its running time numbers.... In PAGE 6: ... We give results for the closest number of clusters produced by CLICK. Even though we have a simple implementation of our algorithm in C++, Table 4 shows that the running time is still acceptable for large datasets. In future work, we intend to optimize the speed of our implementation.... ..."

### Table 2: Average computation time (in seconds) for an approximated regularization path with 100 samples of C varying from 0.1 to 100. We compare the performance of the MKL SILP algorithm and our algorithm.

2007

"... In PAGE 7: ... We can see that for the SILP MKL algorithm, the computational time is somewhat linear with C, whereas using the adaptive 2-norm algorithm, the running time rapidly decreases (when decreasing C from 100) and then reaches a steady state with small computational time. Table 2 depicts the average time (over 5 runs) for computing the whole approximate regularization path with 100 samples of C. We can note that our adaptive 2-norm algorithm is more efficient than SILP and that the gain factor goes from 5 to 66.... ..."

Cited by 4


### Table 10: Index-lookup time for answering (a) the original queries and (b) suffix queries on 250MB data sets with perturbed keywords. Index lookup was efficient: on average it took 30.3 milliseconds for predicate queries and 281.6 milliseconds for neighborhood keyword queries. For the purpose of comparison, we also list the index-lookup time on the 25MB data.

"... In PAGE 11: ...ent up gradually from 71.2MB to 76.4MB, all roughly 5 times as much as for the original data set. Table 10 shows the index-lookup time. We make three observations.... ..."

### Table 2: Results for SSWDP algorithms. R = 10; T = 10. z represents the minimum number of die copies diced from the wafer under the respective dicing plan.

2004

"... In PAGE 6: ...4GHz CPU. Results in Table 2 convincingly show that (ILP3) is more efficient than (ILP1) and (ILP2). On average, CPLEX can find the optimal solution with (ILP3) 1000 times faster than with (ILP1) and over 20 times faster than with (ILP2).... ..."

Cited by 6

### Table 1. The boosting algorithm for learning a query online. T hypotheses are constructed each using a single feature. The final hypothesis is a weighted linear combination of the T hypotheses where the weights are inversely proportional to the training errors.

2001

"... In PAGE 5: ...5. Table 1 shows the learning algorithm. The weak classifiers that we use (thresholded single... In PAGE 6: ...1. Learning Discussion The algorithm described in Table 1 is used to select key weak classifiers from the set of possible weak classifiers. While the AdaBoost process is quite efficient, the set of weak classifiers is extraordinarily large.... In PAGE 10: ... The user selects the maximum acceptable rate for fi and the minimum acceptable rate for di. Each layer of the cascade is trained by AdaBoost (as described in Table 1) with the number of features used being increased until the target detection and false positive rates are met for this level. The rates are determined by testing the current detector on a validation set.... ..."
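The boosting loop the snippets describe (single-feature threshold stumps, with the final hypothesis a weighted vote that favors low-error rounds) can be sketched in standard AdaBoost form. This is a generic reconstruction, not the paper's implementation: the hypothesis weight alpha = 0.5 * log((1 - err) / err) is the textbook choice that makes a stump's vote grow as its training error shrinks, consistent with the caption's "inversely proportional to the training errors".

```python
import math

def best_stump(X, y, w):
    """Exhaustively pick the single-feature threshold stump with the
    lowest weighted error. X: feature vectors, y: labels in {-1, +1},
    w: per-sample weights."""
    n_feat = len(X[0])
    best = (float("inf"), 0, 0.0, 1)  # (error, feature, threshold, polarity)
    for f in range(n_feat):
        for thr in sorted({x[f] for x in X}):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi[f] >= thr else -pol) != yi)
                if err < best[0]:
                    best = (err, f, thr, pol)
    return best

def adaboost(X, y, T):
    """Train T single-feature weak classifiers; return the weighted ensemble."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []  # list of (alpha, feature, threshold, polarity)
    for _ in range(T):
        err, f, thr, pol = best_stump(X, y, w)
        err = max(err, 1e-10)  # avoid division by zero on a perfect stump
        alpha = 0.5 * math.log((1.0 - err) / err)  # low error -> large vote
        ensemble.append((alpha, f, thr, pol))
        # Re-weight: emphasize examples this stump misclassified.
        w = [wi * math.exp(-alpha * yi * (pol if xi[f] >= thr else -pol))
             for xi, yi, wi in zip(X, y, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    """Sign of the weighted linear combination of the T weak hypotheses."""
    score = sum(a * (pol if x[f] >= thr else -pol)
                for a, f, thr, pol in ensemble)
    return 1 if score >= 0 else -1
```

The exhaustive stump search is what makes the snippet's point concrete: each round scans the entire (extraordinarily large) set of candidate single-feature classifiers, so the cost per round is dominated by that scan rather than by the boosting bookkeeping.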

### Table 3: Probabilistic network characteristics

"... In PAGE 18: ...2 Probabilistic networks We continue our computational study with 8 real-life probabilistic networks from the field of medicine. Table 3 shows the origin of the instances and the size of the network. The most efficient way to compute the inference in a probabilistic network is by the use of the junction-tree propagation algorithm of Lauritzen and Spiegelhalter [24].... In PAGE 18: ... The moralization of a network (or directed graph) D = (V, A) is the undirected graph G = (V, E) obtained from D by adding edges between every pair of non-adjacent vertices that have a common successor, and then dropping arc directions. The size of the edge set E is also reported in Table 3. After the application of pre-processing... In PAGE 18: ...Table 3: Probabilistic network characteristics techniques for computing the treewidth [8], an additional four instances to conduct our heuristics on are available. The size of these four instances is reported in Table 3 as well. After [8], henceforth we refer to these instances as instancename {3,4}.... ..."
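The moralization step defined in the snippet (connect every pair of non-adjacent vertices sharing a common successor, then drop arc directions) is mechanical enough to sketch directly. This is an illustrative implementation, not code from the paper:

```python
from itertools import combinations

def moralize(arcs):
    """Moralize a directed graph given as a list of arcs (u, v) meaning
    u -> v: 'marry' all parents that share a child, then drop arc
    directions. Returns the undirected edge set as frozensets."""
    edges = {frozenset(a) for a in arcs}       # drop arc directions
    parents = {}
    for u, v in arcs:
        parents.setdefault(v, set()).add(u)    # collect parents per child
    for child, ps in parents.items():
        for a, b in combinations(sorted(ps), 2):
            edges.add(frozenset((a, b)))       # marry co-parents
    return edges

# Example: the v-structure a -> c <- b gains the moral edge {a, b}.
moral = moralize([("a", "c"), ("b", "c")])
```

The resulting undirected graph is what the junction-tree construction mentioned in the snippet is built on, which is why the snippet reports the moralized edge-set size |E| alongside the network sizes.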

### Table 18: The performance of predictive sampling algorithm on HDV.

1994

"... In PAGE 8: ...able 17: The performance of predictive sampling algorithm on LDV..................... 32 Table 18... In PAGE 42: ....1.3 Performance of Predictive Sampling Algorithm The performance of the algorithm is expressed by the accuracy and efficiency statistics and the before-and-after pictures. Table 17 and Table 18 show the accuracy and efficiency statistics for the predictive sampling algorithm for LDV and HDV respectively. Appendix C shows the before-and-after pictures under the algorithm.... ..."