### Table 1 Computational Experience on Large-Scale Spatial Market Problems

1989

"... In PAGE 25: ... These problems are larger than the equilibrium problems considered in Nagurney (1987b) and of the same size as the disequilibrium problems solved for the inverse demand models in Nagurney and Zhao (1988). In Table 1 we fixed the number of cross-terms in the functions (41), (42), and (43) to 5, whereas in Table 2, we fixed the number of cross-terms to 10. We set M = 0, and M = o.... ..."

### Table 2: Returns (R_{i,t}) On Winning Contingent Claims, Caltech Large-Scale Aggregation Experiment, May 25, 1999.

"... In PAGE 17: ... With the restriction in (1), however, one can test whether the dynamics of the transaction prices of winner contracts are consistent with the null that the market read the information (in the book) correctly, as explained above. Table 2 lists, across all periods and for each period separately, (i) the average inverse return on winning contingent claims, and (ii) the corresponding z-statistic. Returns were computed from one transaction to the next.... In PAGE 17: ...65, which is just significant at the 10% level (two-sided test). For comparison, Table 2 also displays the average of the returns themselves (not the inverse returns) and the corresponding z-statistics. Because only the returns on winning securities are measured, one expects the average to be above one.... In PAGE 17: ...1%, confirming the overall picture. Hence, the results in Table 2 support the hypothesis that the market correctly read whatever information was revealed through trading activity and entries in the book. The market did make mistakes (it often failed to completely aggregate the available information), but these are to be expected even from a rational Bayesian learner who knows how to interpret signals from the book.... In PAGE 18: ... This is an awkward situation: subjects are invited to trade, but the experiment is designed such that there would be no trade. Why do subjects trade? Are they confused? The second column of Table 2 indicates that there was a fair amount of trade. Per period, up to 882 transactions took place in the winning security alone.... In PAGE 18: ...investors. An exception is [4]. (In game theory, this common prior assumption is referred to as the Harsanyi doctrine; in dynamic asset pricing theory, an even more extreme position is taken, namely that the prior belief is correct; see [36].)
Absent a well-developed theory of asset pricing with disagreeing investors ("beauty contests"), it is hard to interpret the experimental results in Figure 5 and Table 2, just as it was not possible to fully understand the failure of information aggregation in older experiments such as those reported in [41], where payouts depended on the identity of the holder, a situation that has not been thoroughly investigated in asset pricing theory. To give subjects a reason to trade, one could have allocated a different number of contingent claims to different subjects.... ..."
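The inverse-return test described in the snippet can be sketched numerically. A minimal sketch, assuming the winning security's consecutive transaction prices are given and that the z-statistic is an ordinary one-sample test of the mean inverse return against the null value of 1 (the function and variable names are our own, not from the paper):

```python
import math

def inverse_return_test(prices):
    """Average inverse return over consecutive transaction prices,
    plus a z-statistic against the null that its mean is 1.
    NOTE: the price series and the null value of 1 follow the snippet's
    description; this interface is purely illustrative."""
    # Inverse return from transaction t to t+1 is p_t / p_{t+1}.
    inv_returns = [prices[t] / prices[t + 1] for t in range(len(prices) - 1)]
    n = len(inv_returns)
    mean = sum(inv_returns) / n
    # Sample variance (n - 1 denominator) for the z-statistic.
    var = sum((r - mean) ** 2 for r in inv_returns) / (n - 1)
    z = (mean - 1.0) / math.sqrt(var / n)
    return mean, z
```

Under the null of correct pricing the average inverse return should be statistically indistinguishable from one; a significantly larger value would indicate systematic underpricing of the eventual winner.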

### Table 1: Specification of four large-scale clusters (systems: Jacquard, RENCI BlueGene/L, DataStar, Seaborg)

"... In PAGE 2: ... We conducted our MILC performance studies on four HPC systems: the Jacquard [3] Linux cluster at NERSC (National Energy Research Scientific Computing Center), an IBM BlueGene/L (BGL) system [4] at RENCI, the Seaborg [5] IBM POWER3 SMP cluster at NERSC, and the DataStar [6] IBM POWER4 SMP cluster at SDSC (San Diego Supercomputer Center). Table 1 shows the configuration of the four systems. For these performance studies we used several performance tool sets that are well suited to the kinds of scalability experiments we performed.... In PAGE 3: ... Performance analysis results For MILC 7.4, we conducted both weak scaling and strong scaling experiments on the four platforms listed in Table 1 using Prophesy, and cache utilization studies on Jacquard using TAU. We also did computation efficiency studies for MILC 7.... ..."

### TABLE I CLASSIFICATION OF APPROACHES TO LARGE-SCALE COMPUTATION

### Table 4.7: Large-Scale Max Cut Problem

2004

### Table 4.8: Large-Scale Max Clique Problem

2004

### Table 3: Computational comparisons on the twenty large-scale instances. The number in the first row of each cell is the percentage deviation from the best solution, and the number in the second row is the computing time in minutes.

2003

"... In PAGE 27: ... To obtain the results on these instances, the CIM algorithm was run on a SunBlade 100 workstation with a 500 MHz UltraSPARC IIe processor, the RTR algorithm was run on a Pentium 100 MHz PC, and GTS and XK were run on the same machines as described above. In Table 3, the results from the CIM algorithm are reported for a variable number of vehicles. That is, we present the best results we obtain without assuming that there is a constraint on the number of available vehicles.... In PAGE 27: ... In the CIM Time column we present the minutes required to reach the reported solution. The results in Table 3 present a mixed judgment on the performance of the CIM algorithms. On the capacity-restricted problems, the CIM algorithms usually use more than the necessary number of vehicles, and find solutions that are between 98.... ..."
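The first quantity reported in each cell of Table 3, the percentage deviation from the best solution, is straightforward to compute. A hedged sketch (the exact formula is our assumption, since the caption does not spell it out):

```python
def pct_deviation(value, best):
    """Percentage deviation of a heuristic's solution value from the
    best-known solution (assumed formula: 100 * (value - best) / best)."""
    return 100.0 * (value - best) / best
```

For a minimisation problem such as vehicle routing, a tour cost of 102.0 against a best-known cost of 100.0 would be reported as a 2.0% deviation.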

Cited by 4

### Table 2. Solution CPU time for the large-scale examples from the MATLAB Optimisation Toolbox. Timings were obtained by averaging over several repetitions of 10 evaluations.

2005

"... In PAGE 14: ... As pointed out in [12], despite the gradient being dense, the functions brownf and tbroy are partially value separable, which allows the sparse derivative computation in MAD and MSAD to be several times faster than sfd(nls), which cannot make use of the intermediate sparsity. Table 2 lists the total run-time in obtaining a solution to the given optimisation problem with derivatives supplied using the various differentiation methods. Compared to the overloading approach in fmad, a substantial saving in the total run-time can be seen when derivatives are supplied through source-transformed code.... ..."
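Partial value separability, which the snippet credits for the MAD/MSAD speed advantage, means the objective decomposes as f(x) = Σ f_i(x_{S_i}), where each element function f_i touches only a small index subset S_i, so derivatives can be assembled element by element instead of perturbing all n variables. A minimal finite-difference sketch of that idea (the interface is our own illustration, not the MATLAB MAD/MSAD API):

```python
def separable_gradient(elements, x, h=1e-6):
    """Forward-difference gradient of f(x) = sum of element functions.
    `elements` is a list of (func, index_subset) pairs; each func is
    called only on its own few variables, which is where the sparsity
    saving comes from. Hypothetical interface for illustration only."""
    g = [0.0] * len(x)
    for f, idx in elements:
        base = f([x[j] for j in idx])
        for k, j in enumerate(idx):
            # Perturb only variable j within this element's local vector.
            xp = [x[m] for m in idx]
            xp[k] += h
            g[j] += (f(xp) - base) / h
    return g
```

Each element here costs |S_i| + 1 function evaluations rather than n + 1, mirroring how exploiting intermediate sparsity beats the dense sfd(nls) approach described in the snippet.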

### Table 4. Relative sizes of large-scale semantic knowledge bases. Adapted with permission from Mueller (1999).

2004

Cited by 85

### Table 4: Comparison of computational times of a large-scale system benchmark

2005

"... In PAGE 15: ...1 seconds. In Table 4, we show the computational times when varying both Rf and Rs from one to 10 000 reactions. The largest benchmark reaction network then has 20 000 reactions and 50 000 chemical species.... ..."