### Table 1: Transcending Skinner and Taylor

"... In PAGE 3: ...Table 1: Transcending Skinner and Taylor The conceptual framework outlined in Table 1 provides the foundation for characterizing the differentiation between school learning and lifelong learning, as illustrated by Table 2. Assuming that schools create mindsets about learning, teaching, and collaboration implies that there is no evidence that a big switch theory will succeed, meaning that a student who was educated as a... ..."

### Table 3: Further Evidence

"... In PAGE 12: ... In this latter group, 8 were crisis countries and 2 were non-crisis countries. A more detailed description of the change in legislation that occurred in each country, and the year of enforcement, is contained in Table 1 (for crisis countries), in Table 2 (for non-crisis countries), and in Table 3 (for countries that were considered but not included due to lack of evidence). On the basis of the 15 selected countries, we then verified the number of banks for which we have (from Bankscope) the balance sheet/profit and loss accounts in the years that precede and follow the enforcement date, as required for the empirical analysis.... ..."

### Table 2. The evaluation of the decision mechanism MAX that selects the most effective combination of evidence on a per-query basis for TREC11 and TREC12. The first (second) number in parentheses denotes the number of queries for which the combination of evidence X (Y) in MAX(X,Y) outperforms Y (X).

"... In PAGE 5: ... In this way, we would obtain the highest possible retrieval effectiveness on average with the given setting. Table 2 contains the results of the decision mechanism MAX(X, Y, …) for different weighting schemes and all possible pairs of the three combinations of evidence C, CA and CAU. For example, we can see that MAX(CPL2, CAPL2), which selects between C and CA, using the weighting scheme PL2,... In PAGE 6: ...among all the possible ones. We could employ two approaches to perform this selection. First, we could use the average retrieval effectiveness of the decision mechanism MAX, and select the combinations of evidence that would result in the highest retrieval effectiveness. As we can see from Table 2, for the... ..."
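The per-query selection the snippet describes can be sketched as an oracle that, for each query, keeps whichever combination of evidence scores higher, and tallies how often each combination wins. This is a minimal illustration of the idea; the run names and effectiveness values below are made up for the example, not taken from the paper.

```python
def max_mechanism(per_query_scores):
    """Per-query MAX oracle.

    per_query_scores: dict mapping run name (a combination of evidence,
    e.g. "C" or "CA") -> list of per-query effectiveness values
    (e.g. average precision), aligned by query index.
    Returns (mean of per-query maxima, win count per run).
    """
    runs = list(per_query_scores)
    n_queries = len(per_query_scores[runs[0]])
    wins = {r: 0 for r in runs}
    best_total = 0.0
    for q in range(n_queries):
        scores = {r: per_query_scores[r][q] for r in runs}
        best_run = max(scores, key=scores.get)  # winning combination for this query
        wins[best_run] += 1
        best_total += scores[best_run]
    return best_total / n_queries, wins

# Illustrative per-query AP values for two combinations of evidence:
ap = {"C": [0.30, 0.50, 0.20], "CA": [0.40, 0.45, 0.25]}
mean_ap, wins = max_mechanism(ap)
# mean_ap is the upper bound on average effectiveness achievable by
# per-query selection; wins gives the counts reported in parentheses.
```

Because the oracle picks the winner per query, its average can exceed that of every individual run, which is why the paper uses it as an upper bound on what a per-query decision mechanism could achieve.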

Cited by 2

### Table 3.2: Comparison of game engines on the basis of certain criteria

2003

### Table 2. Virtual Reference Behaviors. Columns: Characteristics of Virtual Librarians; Strong Evidence; Evidence; No Evidence; N/A

2005

Cited by 3

### Table 2: Field multiplication times (in µs) of our implementations for F_{2^m} on an 800 MHz Intel Pentium III. Input and output are in normal basis representation for the five rightmost columns. The compilers are GNU C 2.95 (gcc) and Intel 6.0 (icc) on Linux (kernel 2.4).

2006

"... In PAGE 13: ... Wu et al. [33, Table 2] give sample minima (for several m ∈ [153, 235]) for the number of consecutive coefficients of an R-element that will permit recovery of the associated field element. Experimentally, times for Algorithm 9 for m = 163 on an Intel Pentium III are a factor 7 slower than field multiplication for a polynomial basis representation.... In PAGE 14: ... The implementation here has received limited such tuning for gcc. Table 2 shows the running times from our implementation. The fastest times show that Algorithm 7 is 13% to 29% faster than the other direct multiplication algorithms for the entries with T ≥ 4, and competitive for T = 2.... In PAGE 15: ... For point operations involving only field addition, multiplication, and squaring, a polynomial-based squaring operation is sufficiently fast relative to multiplication that the squarings are typically ignored in rough estimates of point operation cost. The times in Table 2 are significantly faster than in earlier papers, and suggest (at least on this platform) that multiplication for Gaussian normal bases is much closer in performance to multiplication in a polynomial basis than previously believed. While the difference is still sufficiently large to discourage the use of normal bases for traditional elliptic curve point operations of addition and doubling, we consider the implications for Koblitz curves and methods based on point halving.... In PAGE 16: ... Point addition requires 8 multiplications (assuming mixed coordinates). Regardless of method (basis conversion, direct, or ring mapping), Table 2 suggests that the added costs of normal basis multiplication in point addition will overwhelm the relatively small savings in squarings. Point halving Halving-based methods [17, 28] replace most point doubles by a potentially faster halving operation.... ..."
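As background for the polynomial-basis multiplication these timings are compared against, here is a minimal sketch: a carry-less shift-and-add product followed by reduction modulo an irreducible polynomial. The choice of m = 163 with reduction polynomial x^163 + x^7 + x^6 + x^3 + 1 is an assumption for illustration (it is the pentanomial commonly used for this field size); the snippet's own algorithms are not reproduced here.

```python
M = 163
# Assumed reduction polynomial x^163 + x^7 + x^6 + x^3 + 1,
# encoded as an int whose set bits are the nonzero coefficients.
R = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1

def gf2m_mul(a, b):
    """Multiply a and b in F_{2^m}, polynomial basis.

    Elements are Python ints; bit i is the coefficient of x^i.
    Addition of polynomials over F_2 is XOR, so the schoolbook
    product is a carry-less shift-and-add.
    """
    p = 0
    while b:
        if b & 1:
            p ^= a          # add (XOR) a shifted copy of a
        a <<= 1
        b >>= 1
    # Reduce the degree <= 2m-2 product below degree m:
    # replace each x^i (i >= m) using x^m = x^7 + x^6 + x^3 + 1.
    for i in range(p.bit_length() - 1, M - 1, -1):
        if (p >> i) & 1:
            p ^= R << (i - M)
    return p

def gf2m_sqr(a):
    """Squaring; in F_{2^m} all cross terms vanish (2ab = 0), which is
    why polynomial-basis squarings are cheap relative to multiplications."""
    return gf2m_mul(a, a)
```

A dedicated squaring routine would just spread the bits of a apart and reduce, which is the "sufficiently fast" squaring the snippet alludes to; the sketch reuses the multiplier for brevity.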

Cited by 4

### Table 4 shows the number of CLBs required to implement certain basic functions. Although our discussion gives calculations on a per-hardware-structure basis, these estimates compare well with CLB estimates given by automated synthesis from VHDL down to Xilinx parts.

1996

"... In PAGE 9: ... Table 4: CLB counts for common logic functions. Using that basic information as a starting point, Table 5 gives a breakdown of how much hardware is required for the victim cache and prefetch buffer described in the previous section.... ..."

Cited by 1

### Table 2. Columns: minconf; Representative Basis; Structural Basis; Reduced Basis

2001

"... In PAGE 2: ... Experimental Results and Comparisons An algorithm for computing the representative basis is implemented. Table 2 gives the numbers of rules in the representative basis, the structural basis [5], and the reduced basis [2] for the Mushrooms dataset (8,124 objects coded on 128 items, http://kdd.... ..."

Cited by 1

### Table 7. Summary: Certain Strategies Require Certain Message Attributes (Proposition 5) and Certain Message Attributes Induce Certain Strategies (Proposition 6)

"... In PAGE 29: ... Proposition 6B: Senders will adapt to low message organization by increasing control through testing and adjusting, provided media interactivity is high. Comprehensive View of Message Table 7 summarizes links between strategies and individual attributes of a message. We also examined the interactions among message attributes, but found little evidence of such interactions.... ..."

### Table 2: Basis Memory Requirements (Before Reduction). Columns: Scene; Transfer Size; Poses (Na); Basis Data

2003

"... In PAGE 13: ...Data Compression Before dimensionality reduction via principal component analysis, storing the appearance space basis vectors consumes a significant amount of storage. As shown by Table 2, the datasets are too large to fit in the system memory of a common PC, and far exceed the video memory limitations of commodity graphics hardware.... ..."