Results 1 - 10 of 16,909
Table 1: Private and public shared key formats
1999
Cited by 29
Table 5 Resource Sharing Model
"... In PAGE 16: ... Table 5 Resource Sharing Model. The results presented in Table 5 illustrate the substantial gain in CPU time as the number of automata is reduced, with relatively little impact on memory requirements. Furthermore, this is seen to be true even when the state space within the grouped automata is not reduced. ..."
Table 5: Resource Sharing Model
"... In PAGE 20: ... This reduction is not possible in models with N − 1 resources. The results presented in Table 5 illustrate the substantial gain in CPU time as the number of automata is reduced, with relatively little impact on memory requirements. Furthermore, this is seen to be true even when the state space within the grouped automata is not reduced. ..."
Cited by 9
Table 1: Resource Sharing Model
"... In PAGE 32: ... Furthermore, 50% of the transitions are functional, which is quite high. Table 1 presents information concerning the size of the state space (both the complete product state space and the reachable state space, n); the number of nonzero elements, nz, in the sparse matrix representation; and various timing results under a variety of model parameter values. It may be observed that the reachable state space varies according to P (the number of processes that may simultaneously access the resource), from almost nothing to one less than the size of the product state space. ... In PAGE 33: ... 76 of the time required by the SUN. The numbers in bold type in Table 1 are 0.76 times the number of seconds obtained on the SUN. ..."
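The relationship between the product and reachable state spaces in a resource-sharing model can be sketched with a small calculation. This is an illustrative model of my own construction (N processes, each idle or using one of P identical resources, at most P using simultaneously), not the exact parameterization from the cited paper: the product space has 2^N states, while only states with at most P users are reachable, so when P = N − 1 exactly one product state is unreachable.

```python
from math import comb

def state_space_sizes(n_procs, p):
    """Illustrative resource-sharing model: n_procs processes, each
    idle (0) or using (1) one of P identical resources, with at most
    p processes using a resource simultaneously.

    Product state space: every 0/1 combination, i.e. 2**n_procs.
    Reachable state space: combinations with at most p ones.
    """
    product = 2 ** n_procs
    reachable = sum(comb(n_procs, k) for k in range(p + 1))
    return product, reachable

# Small p: reachable space is a tiny fraction of the product space.
print(state_space_sizes(12, 1))    # (4096, 13)

# p = n - 1: only the all-using state is unreachable.
print(state_space_sizes(12, 11))   # (4096, 4095)
```

This reproduces the range described in the snippet: the reachable space runs "from almost nothing to one less than the size of the product state space" as P varies.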
Table 6 gives the results of simulations of a split cache consisting of two 64K-byte segments, versus a mixed 128K-byte cache. The conventional wisdom on mixed versus split resources is that a single shared resource of a given size is always better than two private resources, each of half the size. This is the observed behavior for most programs, but the PC board router had better performance with a split cache than with a mixed cache. This is because the external cache is direct-mapped, and providing separate instruction and data sections adds a measure of associativity: with a split cache, data references and instruction references that map onto each other can coexist in the cache, whereas they can thrash against each other in a mixed cache. However, for most programs the mixed cache performed better than the split cache. This was especially true for the numeric benchmarks, which have large data sets and spend most of their time in small loops. For example, a 100x100 Linpack has an 80K-byte array. This fits in a 128K-byte mixed cache but does not fit in the 64K-byte data side of a split cache, so its split performance is much worse than its mixed performance. Since the numeric programs spend much of their time in small loops, the external instruction cache is rarely used by the numeric benchmarks.
"... In PAGE 22: ... Table 6: Split vs. mixed external cache CPI burden. Although the mixed cache clearly performs better than the split cache, in the MultiTitan we implemented the split cache. ... In PAGE 35: ... List of Tables: Table 1: Performance improvement with 64-bit refill (p. 8); Table 2: Frequency of branches (p. 10); Table 3: Frequency of MultiTitan CPU interlocks (p. 11); Table 4: Load interlocks vs. address interlocks (p. 13); Table 5: Improvement from 64-bit loads and stores (p. 15); Table 6: Split vs. mixed external cache CPI burden (p. 18) ..."
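The thrashing effect described above can be sketched with a toy direct-mapped cache simulator. The sizes, line size, and addresses below are illustrative assumptions, not the MultiTitan's actual parameters: an instruction address and a data address are chosen to map to the same line of a 128K-byte mixed cache, so a loop that alternates between them misses on every access, while a split pair of 64K-byte caches keeps them apart and misses only once each.

```python
class DirectMappedCache:
    """Toy direct-mapped cache: one tag per line, no write policy."""

    def __init__(self, size_bytes, line_bytes=32):
        self.line_bytes = line_bytes
        self.n_lines = size_bytes // line_bytes
        self.tags = [None] * self.n_lines
        self.hits = self.misses = 0

    def access(self, addr):
        index = (addr // self.line_bytes) % self.n_lines
        tag = addr // (self.line_bytes * self.n_lines)
        if self.tags[index] == tag:
            self.hits += 1
        else:
            self.misses += 1
            self.tags[index] = tag  # evict whatever was there

MIXED, SPLIT_HALF = 128 * 1024, 64 * 1024
inst_addr = 0x10000                 # arbitrary instruction address
data_addr = inst_addr + MIXED       # same mixed-cache line index, different tag

mixed = DirectMappedCache(MIXED)
icache = DirectMappedCache(SPLIT_HALF)
dcache = DirectMappedCache(SPLIT_HALF)

for _ in range(1000):               # tight loop: fetch instruction, load data
    mixed.access(inst_addr)
    mixed.access(data_addr)
    icache.access(inst_addr)
    dcache.access(data_addr)

print("mixed:", mixed.hits, "hits,", mixed.misses, "misses")
print("split:", icache.hits + dcache.hits, "hits,",
      icache.misses + dcache.misses, "misses")
```

In this trace the mixed cache misses on all 2000 accesses because the two references keep evicting each other from the shared line, while the split caches miss only on the first access to each address. For the numeric benchmarks the situation reverses, as the text notes: a working set that fits in the mixed cache but overflows one half of the split cache favors the mixed organization.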
Table C3 Indicators of Private Food Assistance Resources
2005
Table 5: Determinants of Trends in Private Label Market Share