Results 1 - 10 of 28,049
Table 1: DIMACS benchmarks subset.
2003
"... In PAGE 4: ... 5 Comparison of branching rules In [17], a detailed comparison between the branching rules mentioned above is presented. To validate these results we performed experiments for the same set of DIMACS benchmarks (see Table1 ) on a AMD Athlon(TM) XP1700+, restricted to 512MB main memory and 180sec of CPU runtime for each instance. Each class of the benchmark set consists of several instances1 which can be either satisfiable (see column #SAT) or unsatisfiable (see column #UNSAT).... In PAGE 8: ... Since there are several possibilities for choosing a pref- erence value initialization (3x), branching rule selection method (3x) and difference distribution mech- anism (2x) we have conducted experiments with B4BF A1 BF A1 BEB5 BP BDBK configurations. Since our approach is a randomized method (namely the application of the proposed selection methods), we handled each in- stance of the benchmark set (see Table1 ) 30 times for each of the 18 parameter settings. All experiments were performed on a AMD Athlon(TM) XP1700+, restricted to 512MB main memory and 180sec of CPU runtime for each instance.... ..."
Cited by 1
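As a hedged illustration of the experiment grid described in the excerpt above (this is not the authors' actual harness; all identifiers are hypothetical), the (3 * 3 * 2) = 18 configurations and the 30 repeated runs per instance could be enumerated like this in Python:

from itertools import product

# Placeholders for the three preference-value initializations, three
# branching-rule selection methods, and two difference-distribution
# mechanisms named in the excerpt (hypothetical identifiers).
INITIALIZATIONS = ["init_1", "init_2", "init_3"]
SELECTION_METHODS = ["select_1", "select_2", "select_3"]
DISTRIBUTIONS = ["dist_1", "dist_2"]
RUNS_PER_INSTANCE = 30

# (3 * 3 * 2) = 18 parameter settings in total.
CONFIGURATIONS = list(product(INITIALIZATIONS, SELECTION_METHODS, DISTRIBUTIONS))
assert len(CONFIGURATIONS) == 18

def run_experiments(instances, solve):
    # Run every benchmark instance 30 times under each configuration,
    # since the proposed selection methods are randomized.
    results = {}
    for config in CONFIGURATIONS:
        for instance in instances:
            results[(config, instance)] = [
                solve(instance, config) for _ in range(RUNS_PER_INSTANCE)
            ]
    return results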
Table 1: DIMACS benchmarks subset.
"... In PAGE 3: ... COMPARISON OF BRANCHING RULES In [13], a detailed comparison between the braching rules mentioned above is presented. To validate this results we perfomed experiments for the same set of DIMACS bench- marks (see Table1 ) on a SUN Sparc Ultra4 with 248Mhz, restricted to 512MB main memory and 2h of CPU runtime for each instance. For each branching rule we applied GRASP to the set of benchmarks.... In PAGE 5: ... For each preference value initilization (Time-Rank, Abort-Rank, Time-Abort-Rank) the corresponding column is splitted into a column AB-Time and AB-#Aborts. Column AB-Time gives the average CPU runtimes in seconds for the whole benchmark set of Table1 . As previously mentioned we have counted only the runtimes of solved instances.... In PAGE 6: ... On the other side generating conflicts com- bined with clause recording is mandatory for complete SAT algorithms to handle unsatisfiable instances. The used bench- mark set consists of approximately the same number of sat- isfiable as well as unsatisfiable instances (see Table1 , col- umn #SAT and #UNSAT give the number of satisfiable and unsatisfiable instances, respectively). From this viewpoint the results of our approach are a little bit surprisingly par- ticularly for the unsatisfiable instances.... ..."
Table 2: nofib benchmarks: Spectral Subset
"... In PAGE 5: ... 2.3 The Spectral subset The programs in the Spectral subset of nofib|listed in Table2 |are those that don apos;t quite meet the criteria for Real programs, usually the stipulation that someone other than the author might want to run them. Many of these programs fall into Hennessy and Patterson apos;s category of \kernel quot; benchmarks, being \small, key pieces from real programs quot; [7, page 45].... ..."
Table 4: Subset of SPECint92 benchmarks.
2000
"... In PAGE 15: ...erformance. Table 3 shows the results of the Dhrystone V2.1 benchmark for Itsy and a few other systems [9, 10, 11]. Three of the SPECint92 benchmarks were also run, and the results are shown in Table4 [9]. By interpolation, Itsy runs the Dhrystone benchmark with performance similar to that of a Pen- tium P5 running at 110 MHz.... ..."
Cited by 3
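A minimal sketch of the interpolation step mentioned in the excerpt, assuming simple linear interpolation between the Dhrystone scores of reference Pentium systems at known clock speeds; the reference numbers below are placeholders, not measurements from the paper:

def interpolate_equivalent_mhz(target_score, ref_points):
    # ref_points: (clock_mhz, dhrystone_score) pairs, sorted by score.
    for (mhz_lo, s_lo), (mhz_hi, s_hi) in zip(ref_points, ref_points[1:]):
        if s_lo <= target_score <= s_hi:
            frac = (target_score - s_lo) / (s_hi - s_lo)
            return mhz_lo + frac * (mhz_hi - mhz_lo)
    return None  # target score falls outside the reference range

# Made-up reference scores for two Pentium clock speeds; with these
# inputs the estimate works out to 110 MHz.
print(interpolate_equivalent_mhz(140000, [(100, 128000), (120, 152000)]))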
Table 1: nofib benchmarks: Real Subset
"... In PAGE 4: ...ubset in a public forum (e.g., available by anonymous FTP). The programs in the Real subset are listed in Table1 . Each one meets most of the following criteria: Written in standard Haskell (version 1.... ..."
Table 1 is not intended to be used as a benchmark of different processors. As such it would not be appropriate
"... In PAGE 3: ... The decompression time was also measured and since the algorithm is reasonably symmetric, the results were consistent with the encoding performance and are omitted from this presentation. The four processor architectures chosen are listed in Table1 together with the performance measurements for each of the three implementations. The UltraSPARC, HPPA and AMD processors all have dedicated floating point units and each implements a different SIMD instruction set (VIS, MMX and MAX respectively).... ..."
Table 5: Benchmark Selector Rules
1996
"... In PAGE 18: ...Based on the figure, we can develop the two simple selector rules shown in Table5 for this benchmark access pattern. One rule detects when the prefetch parameters should be increased considerably while the other detects when the prefetch parameters should be increased slightly.... In PAGE 18: ... One rule detects when the prefetch parameters should be increased considerably while the other detects when the prefetch parameters should be increased slightly. To calibrate these rules for the Intel Paragon with a single RAID-3 disk array, we simply augment the selector table with the appropriate sensor values as shown at the bottom of the Table5 . When the calibrated selector table is used for an application that exhibits this access pattern, the steering infrastructure can detect poor PPFS server performance and increase the prefetch parameters appropriately.... In PAGE 18: ...5 4 In Figure 9b, the startup transient lasts about sixty seconds before these cache misses occur regularly. 5 The rules in Table5 are examples of a subset of the needed rules for this benchmark. A complete set of rules could also reduce the amount of prefetching performed when the sensors indicate that resources were being wasted.... ..."
Cited by 5
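As a hedged sketch of the two selector rules described in the excerpt (the thresholds and names below are assumptions, not the calibrated sensor values from the paper), the "increase considerably" and "increase slightly" cases might look like:

def adjust_prefetch_depth(miss_rate, depth, max_depth=64):
    # Two selector rules keyed on an observed sensor value (here, the
    # cache-miss rate): a severe-miss rule that increases the prefetch
    # parameters considerably, and a moderate-miss rule that increases
    # them slightly. Thresholds are hypothetical calibration values.
    if miss_rate > 0.50:
        return min(depth * 4, max_depth)  # increase considerably
    if miss_rate > 0.20:
        return min(depth + 2, max_depth)  # increase slightly
    return depth  # sensors indicate acceptable performance: no change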