### Table 7 The timings for processing one small prime in the sieve procedures for DSA prime generation

In: A note on efficient implementation of prime generation algorithms in small portable devices

2004

"... In PAGE 14: ...Table 7), the optimal SPS sets are computed for generating safe primes. C.... ..."

### Table 5. Parameters and sieving results

"... In PAGE 6: ... We repeated the precomputations for two reasons: firstly, several improvements in the linear algebra step allowed us to use a bigger factor base; secondly, we wanted to find out how effective the double large prime variant would be for this setting. Table 5 depicts the parameters and the sieving results of 1995 (quadruple large prime variation) and 1997 (double large prime variation). Both setups produced the result we wished for: a linear system whose solution yields the logarithms of particular elements of Z/pZ.... ..."

### Table 1: Run-times for the above examples

A more complex example program which takes full advantage of lazy evaluation is the well-known prime sieve of Eratosthenes, which in sugared lambda notation is given in figure 4. In order to have it terminate on -Red as well, this program must be modified accordingly. In doing this we can take advantage of the fact that -Red internally represents the alternative terms of an if-then-else clause as components of a special construct which looks like a Consed binary list whose components are evaluated only on demand, i.e. whenever the predicate term has reduced to a Boolean constant. Thus we
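In Python (not the paper's language), the classic lazily evaluated sieve the passage refers to can be sketched with generators. This is a minimal illustration of demand-driven evaluation, not the paper's -Red encoding; the names `sieve` and `primes` are our own:

```python
from itertools import count

def sieve(stream):
    # Take the head of the stream as the next prime, then lazily
    # filter its multiples out of the rest of the stream.
    p = next(stream)
    yield p
    yield from sieve(x for x in stream if x % p != 0)

def primes():
    # The infinite stream 2, 3, 4, ... fed through the sieve.
    return sieve(count(2))

gen = primes()
print([next(gen) for _ in range(10)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Because each `yield` only pulls as many stream elements as are demanded, the program terminates for any finite prefix of the prime stream, which is exactly the property the lazy formulation depends on.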

### Table 5.

The timings in Table 5 are an average of the time to find 400 different probable primes. The timings include the overhead introduced by sending data to and receiving data from the smart card, copying the input to its destination, and jumping to the prime-finding routine. The average overhead is 0.39 for finding a 512-bit prime and 0.47 for finding a 1024-bit prime.

2002

"... In PAGE 6: ...05 45.76 Table 5. Performance of the prime finding algorithms using different sieve procedures 4 RELATED WORK Although it seems simple, prime-finding algorithms have scarcely been investigated.... ..."

Cited by 5
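The sieve-then-test pattern whose timings this entry reports can be sketched as follows. This is the standard combination of a cheap small-prime sieve followed by the Miller-Rabin probabilistic test, not the paper's smart-card routine; all names and parameters are illustrative:

```python
import random

# A few small primes for the cheap sieving pass (illustrative cutoff).
SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

def is_probable_prime(n, rounds=20):
    # Miller-Rabin probabilistic primality test.
    if n < 2:
        return False
    for p in SMALL_PRIMES:
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness of compositeness
    return True

def find_probable_prime(bits):
    # Pick random odd candidates of the requested size; reject most
    # composites with the small-prime sieve before the expensive test.
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if any(n % p == 0 for p in SMALL_PRIMES):
            continue  # cheap sieve rejects this candidate
        if is_probable_prime(n):
            return n
```

The point the timings make is that almost all the cost sits in the modular exponentiations of the test, so rejecting candidates with the sieve first pays off even though the sieve itself is trivial.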

### Table 2. Sieve parallelism degree for several computation and communication granularities

"... In PAGE 9: ...Table 2 presents the number of parallel tasks and inter-task messages required to compute the prime numbers up to 100 000, for several parallel-task computation and communication granularities. The grain-size values were selected to show representative values of the sieve execution times.... ..."

Cited by 1
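One common way to expose this kind of task parallelism is a segmented sieve: once the base primes up to √limit are known, each segment can be sieved as an independent task, and the grain size is simply the segment length. The decomposition below is our own sketch, not the paper's:

```python
import math

def segmented_sieve(limit, segment_size):
    # Phase 1: ordinary sieve for the base primes up to sqrt(limit).
    base_limit = math.isqrt(limit)
    is_p = [True] * (base_limit + 1)
    is_p[0:2] = [False, False]
    for i in range(2, math.isqrt(base_limit) + 1):
        if is_p[i]:
            is_p[i*i::i] = [False] * len(is_p[i*i::i])
    base = [i for i, f in enumerate(is_p) if f]

    # Phase 2: sieve the rest in fixed-size segments; each loop
    # iteration is one independent task in a parallel setting.
    primes, tasks = list(base), 0
    lo = base_limit + 1
    while lo <= limit:
        hi = min(lo + segment_size - 1, limit)
        seg = [True] * (hi - lo + 1)
        for p in base:
            start = max(p * p, (lo + p - 1) // p * p)
            for m in range(start, hi + 1, p):
                seg[m - lo] = False
        primes.extend(lo + i for i, f in enumerate(seg) if f)
        tasks += 1
        lo = hi + 1
    return primes, tasks
```

Shrinking `segment_size` raises the task count (finer grain, more scheduling and message overhead); growing it does the opposite, which is the trade-off the granularity table in this entry quantifies.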

### Table 4.1: The computation of Eratosthenes' sieve

prime numbers. Modulo is the web where each element is the modulo of the produced number and the prime number in the same column. Zero is the web containing boolean values that are true every time the generated number is divided by a prime number. Finally, reduced is a beta-reduction with an or operation; that is, the result is true if one of the prime numbers divides the generated number. The x : |y| operator shrinks the web x to the rank specified by y. The rank of a collection is a vector where the ith element represents the number of elements of x in the ith dimension. Table 4.1 presents the details of the computation of prime numbers following Eratosthenes' method.

1996

Cited by 11

### Table 1. The computation of Eratosthenes' sieve.

1996

"... In PAGE 6: ...Table 1 presents the details of the computation of prime numbers following Eratosthenes' method. generator@0 = 2; generator = $generator + 1 when Clock; extend = generator : |$sieve|; modulo = extend % $sieve; zero = (modulo == (0 : |modulo|)); reduced = ornzero; sieve@0 = generator; sieve = $sieve # generator when (not reduced); In this example, data-parallelism is found in the extension of the == operator, modulo, in reductions, etc.... ..."

Cited by 4
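Rendered imperatively, the dataflow program quoted above can be sketched as follows; this is our own Python transliteration, with each step commented with the declarative equation it stands in for:

```python
def dataflow_sieve(limit):
    # sieve@0 = generator@0: the sieve starts holding the first number, 2.
    sieve = [2]
    generator = 2
    while generator < limit:
        generator += 1                              # generator = $generator + 1 when Clock
        modulo = [generator % p for p in sieve]     # extend the stream over the sieve, then %
        zero = [m == 0 for m in modulo]             # modulo == 0, elementwise
        reduced = any(zero)                         # or-reduction over the boolean web
        if not reduced:                             # sieve = $sieve # generator when (not reduced)
            sieve.append(generator)
    return sieve

print(dataflow_sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The data-parallelism the snippet mentions lives in the two list comprehensions (elementwise `%` and `== 0`) and in the `any` reduction, which the dataflow language evaluates across the whole web at once.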

### Table 2. Eratosthenes' sieve

Prime numbers go on forever, according to Euclid, but looking carefully at Table 1 one sees that their density seems to decrease slowly, and this can be checked by building larger tables, as our Table 3 below shows. Can we explain this phenomenon? Performing the operations required by the sieve on the integers 1, 2, ..., N (where N is a very large number), we can observe that only about one half of the integers survive the first step (when we deal with the prime number 2), and only 2/3 of those remain after the second step (p = 3), and so on. In other words, when dealing with the prime number p we cancel about 1/p-th of the integers in our table that have not been cancelled yet. Since every non-prime integer has at least one prime factor not exceeding its square root, at the end the proportion of surviving numbers in the table over their total should be roughly (1 − 1/2)(1 − 1/3)···, the product of (1 − 1/p) over all primes p not exceeding √N.
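This heuristic can be checked numerically. The sketch below (our own) compares the sieve's actual surviving fraction with the product of (1 − 1/p) over primes p ≤ √N; by Mertens' theorem the product only tracks the true density up to a constant factor, so the agreement is rough rather than exact:

```python
import math

def surviving_fraction(N):
    # Fraction of the integers 2..N left unmarked by the sieve, i.e. prime.
    is_p = [True] * (N + 1)
    is_p[0:2] = [False, False]
    for i in range(2, math.isqrt(N) + 1):
        if is_p[i]:
            is_p[i*i::i] = [False] * len(is_p[i*i::i])
    return sum(is_p) / N

def heuristic_product(N):
    # The product of (1 - 1/p) over all primes p up to sqrt(N).
    prod = 1.0
    for p in range(2, math.isqrt(N) + 1):
        if all(p % q for q in range(2, math.isqrt(p) + 1)):
            prod *= 1 - 1 / p
    return prod
```

For N = 10^6 the actual fraction is 78498/10^6 ≈ 0.0785, and the product lands in the same ballpark, which is exactly the "roughly" the passage is careful to include.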

### Table 4. The speed of jc compared to optimized C

- lr1gen is the core of an LR(1) parser generator;
- matmult is an integer matrix multiplication program;
- nrev is an O(n^2) naive reverse program on an input list of length 100;
- pascal is a benchmark, by E. Tick, to compute Pascal's triangle;
- prime computes prime numbers up to 200 using the Sieve of Eratosthenes;
- qsort is a quicksort program (see Example 3.1), executed on a list of length 100;

"... In PAGE 27: ... Because of this, as shown in Table 3, returning values in registers turns out to be about 50% slower than a homogeneous memory return policy. Table 4 compares the execution speed of our system with optimized C code, written in a "natural" C style wherever possible (i.e.... ..."

### Table 1: nfibs and sieve benchmark results for the three architectures tested. The final column shows the speed of the inlined threaded code relative to optimized C.

1998

"... In PAGE 6: ... The performance of two benchmarks was measured using this interpreter: the function-call intensive Fibonacci benchmark presented earlier (nfibs), and a memory-intensive, function-call-free prime number generator (sieve). Table 1 shows the number of seconds required to execute these benchmarks on several architectures (133MHz Pentium, SparcStation 20, and 200MHz PowerPC 603ev). The figures shown are for a simple bytecode interpreter, the same interpreter performing translation into direct threaded code, direct threaded code with dynamic inlining of common opcode sequences, and the benchmark written in C and compiled with the same optimization options (-O2) as our interpreter.... ..."

Cited by 63