### Table 1. Summary of the complexity results (GI means graph isomorphism).

"... In PAGE 13: ... 4. Conclusions The main results of this paper are on the one hand the graph isomorphism completeness of the general (combinatorial) polytope isomorphism problem and, on the other hand, the fact that this problem can be solved in polynomial time if the dimensions of the polytopes are bounded by a constant (see Table 1 for an overview of the complexity results and Fig. 6 for a sketch of the complexity theoretic landscape considered in this paper).... In PAGE 14: ... It may be that one can turn our algorithm into a computer code that becomes compatible with nauty for checking combinatorial polytope isomorphism. The remaining two open entries in Table 1 concern the complexity of the graph isomorphism problem restricted to graphs of arbitrary (or simplicial) polytopes of bounded dimensions. A polynomial time algorithm for this problem would perhaps not be as interesting as the potential result that the problem is graph isomorphism complete, because the latter result would show that the class of graphs of polytopes... ..."

Cited by 4
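The paper above reduces polytope isomorphism to graph isomorphism (GI). As a minimal illustration of the underlying decision problem only, not the paper's algorithm and nothing like nauty's canonical-labeling approach, a brute-force GI check in Python tries every vertex bijection:

```python
from itertools import permutations

def are_isomorphic(n, edges_g, edges_h):
    """Brute-force isomorphism test for two n-vertex graphs given as
    edge lists of (u, v) pairs.  Tries every vertex permutation, so it
    is O(n!) and usable only on tiny graphs; real tools such as nauty
    instead compute a canonical form of each graph and compare those."""
    eg = {frozenset(e) for e in edges_g}
    eh = {frozenset(e) for e in edges_h}
    if len(eg) != len(eh):
        return False
    return any(
        {frozenset((p[u], p[v])) for (u, v) in eg} == eh
        for p in permutations(range(n))
    )

# A 4-cycle and a relabelling of it: isomorphic.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
c4_relabel = [(2, 0), (0, 3), (3, 1), (1, 2)]
# A 4-cycle and a path on 4 vertices: not isomorphic.
p4 = [(0, 1), (1, 2), (2, 3)]

print(are_isomorphic(4, c4, c4_relabel))  # True
print(are_isomorphic(4, c4, p4))          # False
```

The factorial blow-up of this naive search is exactly why the complexity questions tabulated above (GI-completeness vs. polynomial time in bounded dimension) matter.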

### Table 1: Composing linear relations with linear relations in branching time.

"... In PAGE 36: ...relations can be defined for the interval algebra of branching time (see Figs. 8-10). In the second part, we present the conceptual neighborhood graph and the composition table for interval relations in branching time (see Fig. 11 and Table 1) and discuss their relationships to the linear time versions. Finally, we discuss some complexity results concerning constraint solving problems of interval relations in branching time.... ..."
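Composition tables like the one this caption refers to are used to propagate constraints between interval variables: given X rel1 Y and Y rel2 Z, the table lists the possible relations between X and Z. A minimal sketch in Python, using only a tiny hand-picked fragment of the standard *linear-time* Allen table (the branching-time table from the paper is larger and is not reproduced here):

```python
# Fragment of the linear-time composition table for Allen's interval
# relations.  Only three of the thirteen relations appear; each entry
# maps (rel(X,Y), rel(Y,Z)) to the set of possible rel(X,Z).
COMPOSE = {
    ("before", "before"): {"before"},
    ("before", "meets"):  {"before"},
    ("meets",  "before"): {"before"},
    ("meets",  "meets"):  {"before"},
    ("during", "during"): {"during"},
}

def compose(rels1, rels2):
    """Compose two sets of possible relations by unioning the table
    entries -- the basic step when propagating interval constraints."""
    out = set()
    for r1 in rels1:
        for r2 in rels2:
            out |= COMPOSE[(r1, r2)]
    return out

print(compose({"before", "meets"}, {"meets"}))  # {'before'}
```

Constraint solvers iterate this composition over a network of intervals until a fixed point is reached, which is where the complexity results mentioned in the excerpt come into play.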

### Table 1 Results of isomorphism tests on random binary graphs

2004

"... In PAGE 17: ... Both algorithms are implemented in C/C++. Table 1 summarizes the results. The results of ERE and HCN are roughly equivalent.... In PAGE 17: ...Table 1 summarizes the results. The results of ERE and HCN are roughly equivalent. Both algorithms terminated with correct results on all trials. The errors of MFA, shown in Table 1 as bracketed numbers, are all caused by exceeding the given time limit of 10,000 iterations. The erroneous results of SD and RHO indicate that solving the maximum clique problem in Step 3 of NGI is not a trivial task.... ..."
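The excerpt notes that Step 3 of NGI requires solving a maximum clique problem, which is NP-hard in general. To make the difficulty concrete, here is an exhaustive-search sketch in Python (purely illustrative; it is not the SD or RHO heuristic from the table, and it scales only to tiny graphs):

```python
from itertools import combinations

def max_clique(n, edges):
    """Exhaustive maximum-clique search on an n-vertex graph given as an
    edge list.  Checks candidate vertex sets from largest to smallest,
    so the first fully-connected set found is a maximum clique.
    Exponential time -- illustration only."""
    e = {frozenset(p) for p in edges}
    for size in range(n, 0, -1):
        for cand in combinations(range(n), size):
            if all(frozenset(p) in e for p in combinations(cand, 2)):
                return set(cand)
    return set()

# Triangle 0-1-2 with a pendant vertex 3: the maximum clique is the triangle.
print(max_clique(4, [(0, 1), (1, 2), (0, 2), (2, 3)]))  # {0, 1, 2}
```

The gap between this exact search and fast-but-fallible heuristics is exactly what the erroneous SD and RHO results in the table reflect.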

### Table 2. Inexact isomorphism detection

1998

"... In PAGE 3: ... All times given in tables are in milliseconds, and are the average of a number of executions. Table 2 outlines performance for inexact queries over the database. Each inexact algorithm was used to solve four queries, two of which had exact solutions, and two inexact.... ..."

Cited by 3

### Table 5.9. Comparison of the computational effort of Algorithm 3.1 with and without the

1987

Cited by 27

### Table 3. Solutions after substitution by using isomorphism

"... In PAGE 11: ... After substituting the events of the retrieved cases shown in Figure 4 by matching events indicated by the isomorphisms, four solutions can be obtained for the new problem (see Table 3). It can be seen that there are 3 violations of soft constraints in solution 1: SpanishA is consecutive to Physics, Physics is held only 2 times and Maths is scheduled one more time.... ..."

### Table 5: Benchmark results. [At the time of writing a few problems marked ? were being sorted out]

1993

"... In PAGE 10: ... This allows for two semi-spaces with a two-space copying garbage collector of 2 Mbyte, 4 Mbyte, etc. Table 5 shows compile time and run time performance measurements. The compilation speed is reported in lines per minute real time.... In PAGE 10: ...benchmarks. For each executable we report the best time out of 50 × 5 = 250 runs. Fixing the heap size to the same value for all experiments shows somewhat larger execution times, but the relative ranking of the compilers does not change. Each row in Table 5 bears one asterisk, which marks the best result for that particular row. This shows that it depends to some extent on the application which compiler generates the fastest code, but in general, Clean and FAST produce the fastest code.... ..."

Cited by 35

### Table 5.7: CPU times relative to the RCM algorithm.

1997

"... In PAGE 13: ...maximum degree is 1. Here high values of W2 lead to small envelope parameters. Note that the bandwidth follows the same trend as the rest of the envelope parameters, unlike the first class. Other problems from Table 5 that belong to this class are: FORD1, FORD2, SKIRT, NASARB, BCSSTK30, and FINANCE256. All other problems belong to the first class.... In PAGE 15: ... The RCM algorithm uses a fast pseudo-diameter algorithm described by Duff, Reid, and Scott [11]. For the eighteen matrices in Table 5, the mean time of the ArraySloan was 11.3 times that of RCM, while the median time was 8.2 that of RCM. However, the mean cost of the HeapSloan was only 2.5 times of RCM, with the median cost only 2.3.... In PAGE 20: ... When we refer to the Sloan algorithm without mentioning the weights, we mean the algorithm with normalized weights. We have compared the quality and time requirements of these algorithms on eighteen problems (see Table 5). The problems are chosen to represent a variety of application areas: structural analysis, fluid dynamics, and linear programs from stochastic optimization and multicommodity flows.... In PAGE 20: ... The problems are chosen to represent a variety of application areas: structural analysis, fluid dynamics, and linear programs from stochastic optimization and multicommodity flows. The complete set of results for RCM are shown in Table 5; for other algorithms, results normalized with... In PAGE 21: ...4: Maximum wavefront sizes relative to the RCM algorithm. A comparison of the mean performance of the various algorithms is included in Table 5. The CPU time for only one of the Sloan algorithms is shown because the two algorithms have identical running times since they differ only in the choice of weights.... In PAGE 21: ... This is because the larger ratios in the normalized data strongly influence the arithmetic mean.
The reader can compute the unnormalized data from the results for RCM included in Table 5 and the tables with the normalized data. Initially we discuss the results on the uncompressed graphs, since most of the graphs in our test collection did not gain much from compression.... ..."

Cited by 20
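The RCM (reverse Cuthill-McKee) algorithm used as the baseline above reorders a sparse matrix to shrink its bandwidth and envelope. A bare-bones sketch of the idea in Python, for a connected graph only; this is not the pseudo-diameter variant of Duff, Reid, and Scott benchmarked in the excerpt, which chooses its starting vertex far more carefully:

```python
from collections import deque

def rcm_order(adj):
    """Basic reverse Cuthill-McKee ordering for a connected graph given
    as an adjacency-list dict.  Start a BFS from a minimum-degree vertex,
    visiting neighbours in order of increasing degree; reversing the
    visit order tends to reduce the matrix bandwidth and envelope."""
    start = min(adj, key=lambda v: len(adj[v]))
    order, seen, q = [], {start}, deque([start])
    while q:
        v = q.popleft()
        order.append(v)
        for w in sorted(adj[v], key=lambda u: len(adj[u])):
            if w not in seen:
                seen.add(w)
                q.append(w)
    return order[::-1]  # the "reverse" in RCM

def bandwidth(adj, order):
    """Bandwidth of the adjacency matrix under the given vertex order."""
    pos = {v: i for i, v in enumerate(order)}
    return max(abs(pos[u] - pos[v]) for u in adj for v in adj[u])

# A path graph labelled out of order (2-0-4-1-3): the natural labelling
# gives bandwidth 4, while the RCM relabelling restores bandwidth 1.
adj = {2: [0], 0: [2, 4], 4: [0, 1], 1: [4, 3], 3: [1]}
print(bandwidth(adj, sorted(adj)))     # 4
print(bandwidth(adj, rcm_order(adj)))  # 1
```

Production variants (such as the weighted Sloan algorithms compared in the table) optimize richer envelope and wavefront measures than plain bandwidth, which is what the normalized CPU-time comparisons above are measuring.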