### Table V shows the average ROC scores for detecting 56 classes of RNA when applying an attribute-only kernel, a symbolic-only kernel, a linear fusion kernel, and a super-kernel fusion. The average ROC score of super-kernel fusion (the fourth column) for the 56 classes is around 5% higher than that of the attribute-only kernel (the first column), 4% higher than that of the symbolic-only kernel (the second column), and 2% higher than that of the linear fusion model (the third column).
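An average ROC score of this kind is typically a macro average: compute the one-vs-rest ROC AUC per class, then take the mean. A minimal sketch using the rank-sum formulation of AUC; the labels and scores below are illustrative, not taken from Table V:

```python
# Macro-averaged ROC score: compute AUC per class (one-vs-rest), then average.
# Labels and scores here are invented for illustration, not from the paper.

def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count positive-negative pairs ranked correctly; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_average_auc(per_class):
    """per_class: list of (labels, scores) pairs, one per class."""
    return sum(roc_auc(y, s) for y, s in per_class) / len(per_class)

if __name__ == "__main__":
    class_a = ([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2])  # perfectly ranked: AUC 1.0
    class_b = ([1, 0, 1, 0], [0.6, 0.7, 0.8, 0.1])  # one inversion: AUC 0.75
    print(macro_average_auc([class_a, class_b]))    # 0.875
```

Comparing two kernels then reduces to comparing their macro-averaged AUCs over the same class set.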

2005

### Table 2: Number of iterations for first linear solve and total nonlinear solve

2000

"... In PAGE 11: ... Thus, the efficiency e(P) on P processors (and nP unknowns as defined above) can be effectively represented as e(P) = (Iterations(1)/Iterations(P)) · (P·f(1)/f(P)) · ((f(P)/sec)/(P·f(1)/sec)) = e_I · e_F · e_c, with the number of iterations Iterations(P), flops per iteration f(P), and flop rate f(P)/sec. Iterations(P) is tabulated in Table 2, (P·f(1)/sec)/(f(P)/sec) is shown in Figure 11 (left), and (f(P)/sec)/(P·f(1)/sec) in Figure 11 (right). This paper focuses on parallel efficiency but a few words about uniprocessor efficiency e_u are warranted.... In PAGE 13: ... We see super-linear efficiency in the solve times (e.g., the solve times are decreasing as the problem size increases) in Figure 10, for two reasons. First, we have super-linear convergence rates (i.e., e_I > 1.0), as shown in Table 2. Second, the vertices added in each successive scale problem have a higher percentage of interior vertices than the base problem,... In PAGE 15: ... Over 24% of the integration points, in the hard shells, are in the yield state at the final configuration. Figure 13 (right), and Table 2, show the number of multigrid iterations in each linear solve of each of the ten Newton solves, stacked on top of each other and color coded for each Newton iteration. From this data we can see that the total number of iterations is staying about constant as the scale of the problem increases.... 
In PAGE 16: ... Figure 13: Percent of integration points in hard material that are in plastic state in each time step; number of iterations for all solves in nonlinear problem (see Table 2) [figure axis and legend data omitted] ... of the first step decreases as the problem size increases, as is shown in Table 2. That is, we are seeing a slight growth in the number of Newton iterations required, and the average number of iterations in the linear solver is not decreasing as dramatically as in the first linear solve. Table 2 shows the detailed iteration count data from these experiments. 8 Conclusion We have developed a promising method for solving the linear set of equations arising from implicit finite element applications.... ..."
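The efficiency decomposition quoted above can be checked numerically. A minimal sketch; the iteration counts, flop counts, and flop rates below are invented for illustration, not taken from the paper's tables:

```python
# Parallel efficiency e(P) = e_I * e_F * e_c for a scaled problem (n*P unknowns),
# following the decomposition quoted above. All input numbers are invented.

def efficiency(iters_1, iters_P, P, flops_1, flops_P, rate_1, rate_P):
    e_I = iters_1 / iters_P        # iteration (convergence) efficiency
    e_F = (P * flops_1) / flops_P  # flops-per-iteration efficiency
    e_c = rate_P / (P * rate_1)    # flop-rate (communication) efficiency
    return e_I * e_F * e_c, (e_I, e_F, e_c)

if __name__ == "__main__":
    # Hypothetical run: 30 iterations on 1 PE vs 25 on 64 PEs (super-linear e_I),
    # flops per iteration growing slightly faster than P, 80% of ideal flop rate.
    e, (e_I, e_F, e_c) = efficiency(
        iters_1=30, iters_P=25, P=64,
        flops_1=1.0e9, flops_P=68.0e9,
        rate_1=0.5e9, rate_P=0.8 * 64 * 0.5e9)
    print(f"e_I={e_I:.3f} e_F={e_F:.3f} e_c={e_c:.3f} e(P)={e:.3f}")
```

A super-linear convergence term (e_I > 1) can mask losses in the other two factors, which is exactly the effect the excerpt describes for the solve times.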

Cited by 3

### Table 3 and Figure 2 show that use of the custom memory manager has alleviated much of the overhead associated with multi-threaded dynamic memory allocation. Several interesting trends are revealed in Table 3. First, several runs resulted in super-linear speedups. This is an indirect result of the poor performance of multi-threaded dynamic memory allocation. The use of the custom memory manager has an unexpected benefit in that the overhead of numerous calls to malloc (i.e., operator new) is entirely eliminated. This is illustrated, for example, by the fact that the parallel version of cSpace which employs the custom memory manager executes approximately 35% faster than the serial version when both are executed on a single processor using the CR input set. Figure 2 confirms that cSpace scales well up to 16 processors with the given input sets. The efficiency of the computation is calculated as S_p/p, where p is the number of processors participating in the computation. As can be seen from the figure, cSpace achieved 100% efficiency for all runs made using the DR data set.

"... In PAGE 12: ... Table 3: Wall-clock Execution Times and Speedups for cSpace. Table 3 summarizes the performance and scalability of cSpace across all data sets using the custom memory manager. For each data set, the serial version of cSpace was executed on one processor, and the parallelized version on 2, 4, 8, and 16 processors in order to determine the scalability of the application.... ..."
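The speedup and efficiency figures described above follow directly from wall-clock times. A minimal sketch; the timings below are invented, not the paper's measurements:

```python
# Speedup S_p = T_serial / T_p and efficiency E_p = S_p / p, as used for the
# cSpace measurements. The timings below are invented for illustration.

def speedup_and_efficiency(t_serial, timings):
    """timings: dict mapping processor count p -> wall-clock seconds."""
    out = {}
    for p, t_p in sorted(timings.items()):
        s_p = t_serial / t_p
        out[p] = (s_p, s_p / p)  # (speedup, efficiency)
    return out

if __name__ == "__main__":
    # A perfectly scaling run: halving wall-clock time each time p doubles.
    results = speedup_and_efficiency(
        3600.0, {2: 1800.0, 4: 900.0, 8: 450.0, 16: 225.0})
    for p, (s, e) in results.items():
        print(f"p={p:2d}  S_p={s:5.2f}  E_p={e:4.0%}")
```

A super-linear speedup, as reported for cSpace, shows up here as E_p > 1, i.e., T_p falling faster than 1/p.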

### Table VI shows that the average ROC score of super-kernel fusion (the fourth column) for detecting three classes of suspicious events is around 15% higher than that of the attribute-only kernel (the first column), 9% higher than that of the symbolic-only kernel (the second column), and 4% higher than that of the linear fusion model (the third column).

2005

### Table 1. The analyzed super-repositories

"... In PAGE 2: ... Although most of the discourse can be generalized to any of these repository types, in this article we focus our attention on the first category and look at three open-source and one industrial super-repositories, which each contain the history of several dozens to hundreds of applications written in Smalltalk. In Table 1 we provide a brief numerical overview of these repositories. The oldest and largest of them is the Open Smalltalk Repository hosted by Cincom.... In PAGE 2: ... The last one is a repository maintained by the company Soops BV, located in the Netherlands. The data provided in Table 1 needs to be considered with care, as the numbers are the result of a simple project counting in the repositories; however, super-repositories accumulate junk over time, as certain projects fail, die off, short-term experiments are performed, etc. This is inherent to the nature of super-repositories, and actually only adds to the insight that super-repositories need to be understood in more... ..."

### Table 4. CPU times for first factorization with GSPAR and SuperLU

"... In PAGE 9: ...) SPEEDUP 1 1 7008 7516 BOP 1 1 5089 5120 BOP with type 1 1 18 5814 5870 BOP with type 2 1 18 4932 4967 BOP with type 1 6 18 6208 1904 BOP with type 2 6 18 5140 1371 The linear solver described in Section 3 is realized in the package GSPAR, which is integrated in the simulation package BOP. In Table 4 the performance of GSPAR is compared to that of SuperLU [7] regarding the first factorization (pivoting and factorization) of coefficient matrices of linear systems resulting from real-life dynamic process simulation of chemical plants. Table 4.... ..."


### Table 3. Our linear cost function versus SuperLU, on a PIII, 1 GHz, 1 GB RAM (timings in seconds)

2002

Cited by 2

### Table 2. Algorithm for finding the super-node

```
myState := Super_Head
n := number of my neighbors
c := myID
while (myState is Super_Head and c is not 0)
    c := c - 1
```

"... In PAGE 5: ... To find the node which has the maximum number of followers, we suggest a method as illustrated in Table 2. In the first stage, the state of every node is considered as a super-head.... ..."
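The pseudocode in the excerpt is truncated, but its stated goal, electing the node with the maximum number of followers after every node starts as a super-head, can be sketched as a simple deterministic election. This is a hypothetical reconstruction, not the paper's algorithm; the node IDs, neighbor lists, and tie-breaking rule are invented:

```python
# Hypothetical sketch of super-node election: every node starts as Super_Head,
# and a node demotes itself to Follower if any other node has strictly more
# neighbors (followers), with ties broken in favor of the smaller ID.
# This is NOT the paper's algorithm, only an illustration of the stated goal.

def elect_super_node(neighbors):
    """neighbors: dict mapping node ID -> list of neighbor IDs."""
    state = {nid: "Super_Head" for nid in neighbors}  # first stage: all heads
    for nid in neighbors:
        for other in neighbors:
            if other == nid:
                continue
            # (degree, -ID) ordering: higher degree wins, smaller ID breaks ties.
            if (len(neighbors[other]), -other) > (len(neighbors[nid]), -nid):
                state[nid] = "Follower"
                break
    return state

if __name__ == "__main__":
    graph = {1: [2, 3], 2: [1], 3: [1, 2, 4], 4: [3]}
    print(elect_super_node(graph))  # node 3 has the most neighbors
```

In a real distributed setting each node would run this comparison against information received from its neighbors rather than a global table; the global dict here only keeps the sketch self-contained.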

Cited by 1